WorldWideScience

Sample records for reliable natural language

  1. System reliability analysis with natural language and expert's subjectivity

    International Nuclear Information System (INIS)

    Onisawa, T.

    1996-01-01

    This paper introduces natural language expressions and expert's subjectivity to system reliability analysis. To this end, it defines a subjective measure of reliability and presents a method of system reliability analysis using that measure. The subjective measure of reliability corresponds to natural language expressions of reliability estimation and is represented by a fuzzy set defined on [0,1]. The presented method deals with the dependence among subsystems and employs parametrized operations on subjective measures of reliability which can reflect the expert's subjectivity towards the analyzed system. The analysis results are also expressed in linguistic terms. Finally, the paper gives an example of system reliability analysis by the presented method.
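
    The paper's parametrized operations are not given in this abstract; as an illustrative sketch only, the linguistic reliability estimates below are modeled as triangular fuzzy numbers on [0,1], and two independent subsystems in series are combined by the standard approximate product of triangular numbers (an assumption, not Onisawa's operators).

      # Sketch of a fuzzy "subjective measure of reliability" on [0, 1].
      # Assumption: linguistic terms are triangular fuzzy numbers, and a series
      # system combines subsystem measures by the approximate triangular product.
      from dataclasses import dataclass

      @dataclass
      class TriangularFuzzy:
          low: float    # left support bound
          peak: float   # most plausible reliability value
          high: float   # right support bound

          def membership(self, x: float) -> float:
              """Degree to which reliability value x matches this estimate."""
              if self.low < x <= self.peak:
                  return (x - self.low) / (self.peak - self.low)
              if self.peak < x < self.high:
                  return (self.high - x) / (self.high - self.peak)
              return 0.0

      def series(a: TriangularFuzzy, b: TriangularFuzzy) -> TriangularFuzzy:
          """Approximate fuzzy reliability of two independent subsystems in series."""
          return TriangularFuzzy(a.low * b.low, a.peak * b.peak, a.high * b.high)

      # "Fairly reliable" and "very reliable" as illustrative linguistic terms.
      fairly_reliable = TriangularFuzzy(0.60, 0.75, 0.90)
      very_reliable = TriangularFuzzy(0.85, 0.95, 1.00)
      print(series(fairly_reliable, very_reliable))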

  2. A Natural Language Architecture

    OpenAIRE

    Sodiya, Adesina Simon

    2007-01-01

    Natural languages are the latest generation of programming languages, requiring the processing of real human natural expressions. Over the years, several groups of researchers have been trying to develop widely accepted natural programming languages based on artificial intelligence (AI), but no true natural language has been developed. The goal of this work is to design a natural language preprocessing architecture that identifies and accepts programming instructions or sentences in their natural forms ...

  3. A reliability simulation language for reliability analysis

    International Nuclear Information System (INIS)

    Deans, N.D.; Miller, A.J.; Mann, D.P.

    1986-01-01

    This paper describes the results of work being undertaken to develop a Reliability Description Language (RDL) which will enable reliability analysts to describe complex reliability problems in a simple, clear and unambiguous way. Component and system features can be stated in a formal manner and subsequently used, along with control statements, to form a structured program. The program can be compiled and executed on a general-purpose computer or a special-purpose simulator. (DG)
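
    The abstract does not show RDL syntax, so the sketch below only illustrates, in Python, the kind of declarative component and system description such a language compiles into and evaluates; the failure rates and structure are invented.

      # Illustrative stand-in for an RDL-style description: components with
      # exponential failure models composed into series/parallel blocks.
      import math

      def component(failure_rate_per_hour):
          """Reliability of one component over a mission time t (hours)."""
          return lambda t: math.exp(-failure_rate_per_hour * t)

      def series(*blocks):
          return lambda t: math.prod(b(t) for b in blocks)

      def parallel(*blocks):
          return lambda t: 1.0 - math.prod(1.0 - b(t) for b in blocks)

      # A pump feeding two redundant valves, described declaratively.
      system = series(component(1e-4), parallel(component(5e-4), component(5e-4)))
      print(f"R(1000 h) = {system(1000):.4f}")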

  4. Natural language modeling

    Energy Technology Data Exchange (ETDEWEB)

    Sharp, J.K. [Sandia National Labs., Albuquerque, NM (United States)]

    1997-11-01

    This seminar describes a process and methodology that uses structured natural language to enable the construction of precise information requirements directly from users, experts, and managers. The main focus of this natural language approach is to create the precise information requirements and to do so in such a way that the business and technical experts are fully accountable for the results. These requirements can then be implemented using appropriate tools and technology. This requirement set is also a universal learning tool because it contains all of the knowledge that is needed to understand a particular process (e.g., expense vouchers, project management, budget reviews, tax laws, machine function).

  5. Symbolic Natural Language Processing

    OpenAIRE

    Laporte, Eric

    2005-01-01

    The connection between language processing and combinatorics on words is natural. Historically, linguists actually played a part in the beginning of the construction of theoretical combinatorics on words. Some of the terms in current use originate from linguistics: word, prefix, suffix, grammar, syntactic monoid... However, interpenetration between the formal world of computer theory and the intuitive world of linguistics is still a love story with ups and downs. We will encounter in this cha...

  6. Software reliability and programming language

    International Nuclear Information System (INIS)

    Ehrenberger, W.

    1983-01-01

    When discussing the advantages and drawbacks of programming languages, it is sometimes suggested that these languages also be used for safety-related tasks. The author states the demands to be made on programming languages for this purpose. His recommendations are based on the work of TC7 of the European Workshop on Industrial Computer Systems and WG A3 of IEC SC 45a. (orig./HP) [de]

  7. Natural language understanding

    Energy Technology Data Exchange (ETDEWEB)

    Yoshida, S

    1982-04-01

    Language understanding is essential for intelligent information processing. Processing of language itself involves configuration element analysis, syntactic analysis (parsing), and semantic analysis; these are not carried out in isolation. The analyses are described for the Japanese language, and their use in understanding systems is examined. 30 references.

  8. Teaching natural language to computers

    OpenAIRE

    Corneli, Joseph; Corneli, Miriam

    2016-01-01

    "Natural Language," whether spoken and attended to by humans, or processed and generated by computers, requires networked structures that reflect creative processes in semantic, syntactic, phonetic, linguistic, social, emotional, and cultural modules. Being able to produce novel and useful behavior following repeated practice gets to the root of both artificial intelligence and human language. This paper investigates the modalities involved in language-like applications that computers -- and ...

  9. Handbook of Natural Language Processing

    CERN Document Server

    Indurkhya, Nitin

    2010-01-01

    Provides a comprehensive, modern reference of practical tools and techniques for implementing natural language processing in computer systems. This title covers classical methods, empirical and statistical techniques, and various applications. It describes how the techniques can be applied to European and Asian languages as well as English

  10. Advances in natural language processing.

    Science.gov (United States)

    Hirschberg, Julia; Manning, Christopher D

    2015-07-17

    Natural language processing employs computational techniques for the purpose of learning, understanding, and producing human language content. Early computational approaches to language research focused on automating the analysis of the linguistic structure of language and developing basic technologies such as machine translation, speech recognition, and speech synthesis. Today's researchers refine and make use of such tools in real-world applications, creating spoken dialogue systems and speech-to-speech translation engines, mining social media for information about health or finance, and identifying sentiment and emotion toward products and services. We describe successes and challenges in this rapidly advancing area. Copyright © 2015, American Association for the Advancement of Science.

  11. Natural language processing with Java

    CERN Document Server

    Reese, Richard M

    2015-01-01

    If you are a Java programmer who wants to learn about the fundamental tasks underlying natural language processing, this book is for you. You will be able to identify and use NLP tasks for many common problems, and integrate them in your applications to solve more difficult problems. Readers should be familiar/experienced with Java software development.

  12. Natural language interface for nuclear data bases

    International Nuclear Information System (INIS)

    Heger, A.S.; Koen, B.V.

    1987-01-01

    A natural language interface has been developed for access to information from a data base simulating the Nuclear Plant Reliability Data System (NPRDS), one of several existing data bases serving the nuclear industry. In the last decade, the importance of information has been demonstrated by the impressive diffusion of data base management systems. The present methods employed to access data bases fall into two main categories: menu-driven systems and data base manipulation languages. Both of these methods are currently used by NPRDS. These methods have proven to be tedious, however, and require extensive training by the user for effective utilization of the data base. Artificial intelligence techniques have been used in the development of several intelligent front ends for data bases in nonnuclear domains. Lunar is a natural language interface to a data base describing moon rock samples brought back by Apollo. Intellect is one of the first commercially available data base question-answering systems, used in the financial area. Ladder is an intelligent data base interface that was developed as a management aid to Navy decision makers. A natural language interface for nuclear data bases that can be used by nonprogrammers with little or no training provides a means of achieving this ease of access for the industry.

  13. Empirical Methods in Natural Language Generation

    NARCIS (Netherlands)

    Krahmer, Emiel; Theune, Mariet

    Natural language generation (NLG) is a subfield of natural language processing (NLP) that is often characterized as the study of automatically converting non-linguistic representations (e.g., from databases or other knowledge sources) into coherent natural language text. In recent years the field

  14. Natural language processing: an introduction.

    Science.gov (United States)

    Nadkarni, Prakash M; Ohno-Machado, Lucila; Chapman, Wendy W

    2011-01-01

    To provide an overview and tutorial of natural language processing (NLP) and modern NLP-system design. This tutorial targets the medical informatics generalist who has limited acquaintance with the principles behind NLP and/or limited knowledge of the current state of the art. We describe the historical evolution of NLP, and summarize common NLP sub-problems in this extensive field. We then provide a synopsis of selected highlights of medical NLP efforts. After providing a brief description of common machine-learning approaches that are being used for diverse NLP sub-problems, we discuss how modern NLP architectures are designed, with a summary of the Apache Foundation's Unstructured Information Management Architecture. We finally consider possible future directions for NLP, and reflect on the possible impact of IBM Watson on the medical field.

  15. Visualizing Natural Language Descriptions: A Survey

    OpenAIRE

    Hassani, Kaveh; Lee, Won-Sook

    2016-01-01

    A natural language interface exploits the conceptual simplicity and naturalness of the language to create a high-level user-friendly communication channel between humans and machines. One of the promising applications of such interfaces is generating visual interpretations of semantic content of a given natural language that can be then visualized either as a static scene or a dynamic animation. This survey discusses requirements and challenges of developing such systems and reports 26 graphi...

  16. Natural language processing techniques for automatic test ...

    African Journals Online (AJOL)

    Natural language processing techniques for automatic test questions generation using discourse connectives. Journal of Computer Science and Its Application.

  17. Knowledge representation and natural language processing

    Energy Technology Data Exchange (ETDEWEB)

    Weischedel, R.M.

    1986-07-01

    In principle, natural language and knowledge representation are closely related. This paper investigates this by demonstrating how several natural language phenomena, such as definite reference, ambiguity, ellipsis, ill-formed input, figures of speech, and vagueness, require diverse knowledge sources and reasoning. The breadth of kinds of knowledge needed to represent morphology, syntax, semantics, and pragmatics is surveyed. Furthermore, several current issues in knowledge representation, such as logic versus semantic nets, general-purpose versus special-purpose reasoners, adequacy of first-order logic, wait-and-see strategies, and default reasoning, are illustrated in terms of their relation to natural language processing and how natural language impacts these issues.

  18. Mobile speech and advanced natural language solutions

    CERN Document Server

    Markowitz, Judith

    2013-01-01

    Mobile Speech and Advanced Natural Language Solutions provides a comprehensive and forward-looking treatment of natural speech in the mobile environment. This fourteen-chapter anthology brings together lead scientists from Apple, Google, IBM, AT&T, Yahoo! Research and other companies, along with academicians, technology developers and market analysts.  They analyze the growing markets for mobile speech, new methodological approaches to the study of natural language, empirical research findings on natural language and mobility, and future trends in mobile speech.  Mobile Speech opens with a challenge to the industry to broaden the discussion about speech in mobile environments beyond the smartphone, to consider natural language applications across different domains.   Among the new natural language methods introduced in this book are Sequence Package Analysis, which locates and extracts valuable opinion-related data buried in online postings; microintonation as a way to make TTS truly human-like; and se...

  19. Generating natural language under pragmatic constraints

    CERN Document Server

    Hovy, Eduard H

    2013-01-01

    Recognizing that the generation of natural language is a goal- driven process, where many of the goals are pragmatic (i.e., interpersonal and situational) in nature, this book provides an overview of the role of pragmatics in language generation. Each chapter states a problem that arises in generation, develops a pragmatics-based solution, and then describes how the solution is implemented in PAULINE, a language generator that can produce numerous versions of a single underlying message, depending on its setting.

  20. Reliability evaluation of a natural circulation system

    International Nuclear Information System (INIS)

    Jafari, Jalil; D'Auria, Francesco; Kazeminejad, Hossein; Davilu, Hadi

    2003-01-01

    This paper discusses a reliability study performed with reference to a passive thermohydraulic natural circulation (NC) system, named TTL-1. A methodology based on probabilistic techniques has been applied with the main purpose of optimizing the system design. The obtained results have been adopted to estimate the thermal-hydraulic reliability (TH-R) of the same system. A total of 29 relevant parameters (including nominal values and plausible ranges of variation) affecting the design and the NC performance of the TTL-1 loop are identified, and a probability of occurrence is assigned to each value based on expert judgment. Following procedures established for the uncertainty evaluation of thermal-hydraulic system code results, 137 system configurations have been selected and each configuration has been analyzed via the Relap5 best-estimate code. The reference system configuration and the failure criteria derived from the 'mission' of the passive system are adopted for the evaluation of the system TH-R. Four different definitions of less-than-unity 'reliability values' (where unity represents the maximum achievable reliability) are proposed for the performance of the selected passive system. Such a system is normally considered fully reliable, i.e. a reliability value equal to one, in typical Probabilistic Safety Assessment (PSA) applications in nuclear reactor safety. The two 'point' TH-R values for the considered NC system were found equal to 0.70 and 0.85, i.e. values comparable with the reliability of a pump installed in an 'equivalent' forced circulation (active) system having the same 'mission'. The design optimization study was completed by a regression analysis addressing the output of the 137 calculations: heat losses, undetected leakage, loop length, riser diameter, and equivalent diameter of the test section have been found to be the most important parameters leading to the optimal system design and affecting the TH-R. As added values for this work, the comparison has
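
    As a rough illustration of the sampling idea only (the actual study analyzes 137 selected configurations with the Relap5 code), the sketch below draws random configurations from assumed parameter ranges, applies a toy failure criterion in place of the thermal-hydraulic code, and reports the fraction of successful runs as a TH-R estimate.

      # Schematic Monte Carlo estimate of thermal-hydraulic reliability (TH-R).
      # A toy performance model and failure criterion stand in for the Relap5 runs.
      import random

      def sample_configuration(rng):
          # Hypothetical ranges for a few of the 29 relevant parameters.
          return {
              "heat_loss_kw": rng.uniform(0.0, 5.0),
              "leakage_kg_s": rng.uniform(0.0, 0.02),
              "riser_diameter_m": rng.uniform(0.08, 0.12),
          }

      def natural_circulation_ok(cfg):
          # Placeholder criterion: enough buoyancy-driven flow margin to remove heat.
          margin = (10.0 * cfg["riser_diameter_m"]
                    - 0.1 * cfg["heat_loss_kw"]
                    - 20.0 * cfg["leakage_kg_s"])
          return margin > 0.5

      rng = random.Random(0)
      runs = [natural_circulation_ok(sample_configuration(rng)) for _ in range(10_000)]
      print("estimated TH-R:", sum(runs) / len(runs))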

  1. A System for Natural Language Sentence Generation.

    Science.gov (United States)

    Levison, Michael; Lessard, Gregory

    1992-01-01

    Describes the natural language computer program, "Vinci." Explains that using an attribute grammar formalism, Vinci can simulate components of several current linguistic theories. Considers the design of the system and its applications in linguistic modelling and second language acquisition research. Notes Vinci's uses in linguistics…

  2. Natural Language Generation from Pictographs

    OpenAIRE

    Sevens, Leen; Vandeghinste, Vincent; Schuurman, Ineke; Van Eynde, Frank

    2015-01-01

    We present a Pictograph-to-Text translation system for people with Intellectual or Developmental Disabilities (IDD). The system translates pictograph messages, consisting of one or more pictographs, into Dutch text using WordNet links and an n-gram language model. We also provide several pictograph input methods assisting the users in selecting the appropriate pictographs.
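
    As a toy version of the decoding step (the pictograph-to-word candidates and bigram scores below are invented, and in English rather than Dutch), the sketch shows how an n-gram language model can rank candidate word sequences for a pictograph message.

      # Toy pictograph-to-text decoding with a bigram language model.
      import itertools

      candidates = {                      # hypothetical pictograph-to-word links
          "PICTO_I": ["i"],
          "PICTO_DRINK": ["drink", "drinks"],
          "PICTO_MILK": ["milk"],
      }
      bigram_logp = {                     # invented log-probabilities
          ("<s>", "i"): -0.1, ("i", "drink"): -0.5, ("i", "drinks"): -3.0,
          ("drink", "milk"): -0.7, ("drinks", "milk"): -0.9,
      }

      def score(words):
          logp = 0.0
          for prev, word in zip(["<s>"] + words, words):
              logp += bigram_logp.get((prev, word), -10.0)   # crude back-off penalty
          return logp

      message = ["PICTO_I", "PICTO_DRINK", "PICTO_MILK"]
      options = itertools.product(*(candidates[p] for p in message))
      best = max(options, key=lambda words: score(list(words)))
      print(" ".join(best))               # -> "i drink milk"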

  3. Natural Language Description of Emotion

    Science.gov (United States)

    Kazemzadeh, Abe

    2013-01-01

    This dissertation studies how people describe emotions with language and how computers can simulate this descriptive behavior. Although many non-human animals can express their current emotions as social signals, only humans can communicate about emotions symbolically. This symbolic communication of emotion allows us to talk about emotions that we…

  4. Bayesian natural language semantics and pragmatics

    CERN Document Server

    Zeevat, Henk

    2015-01-01

    The contributions in this volume focus on the Bayesian interpretation of natural languages, which is widely used in areas of artificial intelligence, cognitive science, and computational linguistics. This is the first volume to take up topics in Bayesian Natural Language Interpretation and make proposals based on information theory, probability theory, and related fields. The methodologies offered here extend to the target semantic and pragmatic analyses of computational natural language interpretation. Bayesian approaches to natural language semantics and pragmatics are based on methods from signal processing and the causal Bayesian models pioneered especially by Pearl. In signal processing, the Bayesian method finds the most probable interpretation by finding the one that maximizes the product of the prior probability and the likelihood of the interpretation. It thus stresses the importance of a production model for interpretation, as in Grice's contributions to pragmatics or in interpretation by abduction.
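
    The decision rule described above fits in a few lines; the interpretations, priors, and likelihoods below are invented purely to illustrate maximizing prior times likelihood.

      # Bayesian interpretation choice: argmax over interpretations i of
      # prior(i) * likelihood(utterance | i).  All numbers are invented.
      interpretations = {
          "bank = financial institution": {"prior": 0.7, "likelihood": 0.4},
          "bank = river bank":            {"prior": 0.3, "likelihood": 0.5},
      }
      best = max(interpretations,
                 key=lambda i: interpretations[i]["prior"] * interpretations[i]["likelihood"])
      print(best)   # 0.28 beats 0.15, so the financial reading is chosen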

  5. Arabic Natural Language Processing System Code Library

    Science.gov (United States)

    2014-06-01

    This technical note provides a brief description of a Java library for Arabic (and also English) natural language processing (NLP), containing code for training and applying the Arabic NLP system described in Stephen Tratz's paper "A Cross-Task Flexible Transition Model for Arabic Tokenization, Affix..."

  6. Evolution, brain, and the nature of language.

    Science.gov (United States)

    Berwick, Robert C; Friederici, Angela D; Chomsky, Noam; Bolhuis, Johan J

    2013-02-01

    Language serves as a cornerstone for human cognition, yet much about its evolution remains puzzling. Recent research on this question parallels Darwin's attempt to explain both the unity of all species and their diversity. What has emerged from this research is that the unified nature of human language arises from a shared, species-specific computational ability. This ability has identifiable correlates in the brain and has remained fixed since the origin of language approximately 100 thousand years ago. Although songbirds share with humans a vocal imitation learning ability, with a similar underlying neural organization, language is uniquely human. Copyright © 2012 Elsevier Ltd. All rights reserved.

  7. Thought beyond language: neural dissociation of algebra and natural language.

    Science.gov (United States)

    Monti, Martin M; Parsons, Lawrence M; Osherson, Daniel N

    2012-08-01

    A central question in cognitive science is whether natural language provides combinatorial operations that are essential to diverse domains of thought. In the study reported here, we addressed this issue by examining the role of linguistic mechanisms in forging the hierarchical structures of algebra. In a 3-T functional MRI experiment, we showed that processing of the syntax-like operations of algebra does not rely on the neural mechanisms of natural language. Our findings indicate that processing the syntax of language elicits the known substrate of linguistic competence, whereas algebraic operations recruit bilateral parietal brain regions previously implicated in the representation of magnitude. This double dissociation argues against the view that language provides the structure of thought across all cognitive domains.

  8. A Natural Logic for Natural-Language Knowledge Bases

    DEFF Research Database (Denmark)

    Andreasen, Troels; Styltsvig, Henrik Bulskov; Jensen, Per Anker

    2017-01-01

    We describe a natural logic for computational reasoning with a regimented fragment of natural language. The natural logic comes with intuitive inference rules enabling deductions and with an internal graph representation facilitating conceptual path finding between pairs of terms as an approach t...

  9. A Natural Logic for Natural-language Knowledge Bases

    DEFF Research Database (Denmark)

    Andreasen, Troels; Bulskov, Henrik; Jensen, Per Anker

    2017-01-01

    We describe a natural logic for computational reasoning with a regimented fragment of natural language. The natural logic comes with intuitive inference rules enabling deductions and with an internal graph representation facilitating conceptual path finding between pairs of terms as an approach t...

  10. Prediction During Natural Language Comprehension.

    Science.gov (United States)

    Willems, Roel M; Frank, Stefan L; Nijhof, Annabel D; Hagoort, Peter; van den Bosch, Antal

    2016-06-01

    The notion of prediction is studied in cognitive neuroscience with increasing intensity. We investigated the neural basis of 2 distinct aspects of word prediction, derived from information theory, during story comprehension. We assessed the effect of entropy of next-word probability distributions as well as surprisal. A computational model determined entropy and surprisal for each word in 3 literary stories. Twenty-four healthy participants listened to the same 3 stories while their brain activation was measured using fMRI. Reversed speech fragments were presented as a control condition. Brain areas sensitive to entropy were left ventral premotor cortex, left middle frontal gyrus, right inferior frontal gyrus, left inferior parietal lobule, and left supplementary motor area. Areas sensitive to surprisal were left inferior temporal sulcus ("visual word form area"), bilateral superior temporal gyrus, right amygdala, bilateral anterior temporal poles, and right inferior frontal sulcus. We conclude that prediction during language comprehension can occur at several levels of processing, including at the level of word form. Our study exemplifies the power of combining computational linguistics with cognitive neuroscience, and additionally underlines the feasibility of studying continuous spoken language materials with fMRI. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
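
    For readers unfamiliar with the two information-theoretic measures, a minimal computation of next-word entropy and of the surprisal of the observed word, using an invented next-word distribution:

      # Entropy of the next-word distribution and surprisal of the observed word,
      # computed word-by-word in the study; the probabilities here are invented.
      import math

      next_word_probs = {"dog": 0.5, "cat": 0.3, "unicorn": 0.2}

      entropy = -sum(p * math.log2(p) for p in next_word_probs.values())
      observed = "unicorn"
      surprisal = -math.log2(next_word_probs[observed])

      print(f"entropy = {entropy:.2f} bits, surprisal({observed}) = {surprisal:.2f} bits")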

  11. Natural language generation of surgical procedures.

    Science.gov (United States)

    Wagner, J C; Rogers, J E; Baud, R H; Scherrer, J R

    1999-01-01

    A number of compositional Medical Concept Representation systems are being developed. Although these provide for a detailed conceptual representation of the underlying information, they have to be translated back to natural language for use by end-users and applications. The GALEN programme has been developing one such representation, and we report here on a tool developed to generate natural language phrases from the GALEN conceptual representations. This tool can be adapted to different source modelling schemes and to different destination languages or sublanguages of a domain. It is based on a multilingual approach to natural language generation, realised through a clean separation of the domain model from the linguistic model and their link by well defined structures. Specific knowledge structures and operations have been developed for bridging between the modelling 'style' of the conceptual representation and natural language. Using the example of the scheme developed for modelling surgical operative procedures within the GALEN-IN-USE project, we show how the generator is adapted to such a scheme. The basic characteristics of the surgical procedures scheme are presented together with the basic principles of the generation tool. Using worked examples, we discuss the transformation operations which change the initial source representation into a form which can more directly be translated to a given natural language. In particular, the linguistic knowledge which has to be introduced, such as definitions of concepts and relationships, is described. We explain the overall generator strategy and how particular transformation operations are triggered by language-dependent and conceptual parameters. Results are shown for generated French phrases corresponding to surgical procedures from the urology domain.

  12. Semantic structures advances in natural language processing

    CERN Document Server

    Waltz, David L

    2014-01-01

    Natural language understanding is central to the goals of artificial intelligence. Any truly intelligent machine must be capable of carrying on a conversation: dialogue, particularly clarification dialogue, is essential if we are to avoid disasters caused by the misunderstanding of the intelligent interactive systems of the future. This book is an interim report on the grand enterprise of devising a machine that can use natural language as fluently as a human. What has really been achieved since this goal was first formulated in Turing's famous test? What obstacles still need to be overcome?

  13. Theoretical approaches to natural language understanding

    Energy Technology Data Exchange (ETDEWEB)

    1985-01-01

    This book discusses the following: Computational Linguistics, Artificial Intelligence, Linguistics, Philosophy, and Cognitive Science and the current state of natural language understanding. Three topics form the focus for discussion; these topics include aspects of grammars, aspects of semantics/pragmatics, and knowledge representation.

  14. The nature of pragmatic language impairment

    NARCIS (Netherlands)

    Ketelaars, M.P.

    2010-01-01

    The present dissertation reports on research into the nature of Pragmatic Language Impairment (PLI) in children aged 4 to 7 in the Netherlands. First, the possibility of screening for PLI in the general population is examined. Results show that this is indeed possible as well as feasible. Second, an

  15. Natural Language Generation for dialogue: system survey

    NARCIS (Netherlands)

    Theune, Mariet

    Many natural language dialogue systems make use of 'canned text' for output generation. This approach may be sufficient for dialogues in restricted domains where system utterances are short and simple and use fixed expressions (e.g., slot filling dialogues in the ticket reservation or travel

  16. Natural Language Navigation Support in Virtual Reality

    NARCIS (Netherlands)

    van Luin, J.; Nijholt, Antinus; op den Akker, Hendrikus J.A.; Giagourta, V.; Strintzis, M.G.

    2001-01-01

    We describe our work on designing a natural language accessible navigation agent for a virtual reality (VR) environment. The agent is part of an agent framework, which means that it can communicate with other agents. Its navigation task consists of guiding the visitors in the environment and to

  17. Brain readiness and the nature of language.

    Science.gov (United States)

    Bouchard, Denis

    2015-01-01

    To identify the neural components that make a brain ready for language, it is important to have well defined linguistic phenotypes, to know precisely what language is. There are two central features to language: the capacity to form signs (words), and the capacity to combine them into complex structures. We must determine how the human brain enables these capacities. A sign is a link between a perceptual form and a conceptual meaning. Acoustic elements and content elements are already brain-internal in non-human animals, but as categorical systems linked with brain-external elements. Being indexically tied to objects of the world, they cannot freely link to form signs. A crucial property of a language-ready brain is the capacity to process perceptual forms and contents offline, detached from any brain-external phenomena, so their "representations" may be linked into signs. These brain systems appear to have pleiotropic effects on a variety of phenotypic traits and not to be specifically designed for language. Syntax combines signs, so the combination of two signs operates simultaneously on their meaning and form. The operation combining the meanings long antedates its function in language: the primitive mode of predication operative in representing some information about an object. The combination of the forms is enabled by the capacity of the brain to segment vocal and visual information into discrete elements. Discrete temporal units have order and juxtaposition, and vocal units have intonation, length, and stress. These are primitive combinatorial processes. So the prior properties of the physical and conceptual elements of the sign introduce combinatoriality into the linguistic system, and from these primitive combinatorial systems derive concatenation in phonology and combination in morphosyntax. Given the nature of language, a key feature to our understanding of the language-ready brain is to be found in the mechanisms in human brains that enable the unique

  18. Brain readiness and the nature of language

    Directory of Open Access Journals (Sweden)

    Denis Bouchard

    2015-09-01

    Full Text Available To identify the neural components that make a brain ready for language, it is important to have well defined linguistic phenotypes, to know precisely what language is. There are two central features to language: the capacity to form signs (words), and the capacity to combine them into complex structures. We must determine how the human brain enables these capacities. A sign is a link between a perceptual form and a conceptual meaning. Acoustic elements and content elements are already brain-internal in non-human animals, but as categorical systems linked with brain-external elements. Being indexically tied to objects of the world, they cannot freely link to form signs. A crucial property of a language-ready brain is the capacity to process perceptual forms and contents offline, detached from any brain-external phenomena, so their representations may be linked into signs. These brain systems appear to have pleiotropic effects on a variety of phenotypic traits and not to be specifically designed for language. Syntax combines signs, so the combination of two signs operates simultaneously on their meaning and form. The operation combining the meanings long antedates its function in language: the primitive mode of predication operative in representing some information about an object. The combination of the forms is enabled by the capacity of the brain to segment vocal and visual information into discrete elements. Discrete temporal units have order and juxtaposition, and vocal units have intonation, length, and stress. These are primitive combinatorial processes. So the prior properties of the physical and conceptual elements of the sign introduce combinatoriality into the linguistic system, and from these primitive combinatorial systems derive concatenation in phonology and combination in morphosyntax. Given the nature of language, a key feature to our understanding of the language-ready brain is to be found in the mechanisms in human brains that

  19. Task planning systems with natural language interface

    International Nuclear Information System (INIS)

    Kambayashi, Shaw; Uenaka, Junji

    1989-12-01

    In this report, a natural language analyzer and two different task planning systems are described. In 1988, we introduced a Japanese language analyzer named CS-PARSER for the input interface of the task planning system in the Human Acts Simulation Program (HASP). For the purpose of high-speed analysis, we have modified the dictionary system of the CS-PARSER by using a C language description. It is found that the new dictionary system is very useful for high-speed analysis and efficient maintenance of the dictionary. For the study of the task planning problem, we have modified a story generating system named Micro TALE-SPIN to generate stories written in Japanese sentences. We have also constructed a planning system with a natural language interface by using the CS-PARSER. Task planning processes and the related knowledge bases of these systems are explained. A concept design for a new task planning system will also be discussed, based on evaluations of the above-mentioned systems. (author)

  20. Natural language processing tools for computer assisted language learning

    Directory of Open Access Journals (Sweden)

    Vandeventer Faltin, Anne

    2003-01-01

    Full Text Available This paper illustrates the usefulness of natural language processing (NLP) tools for computer assisted language learning (CALL) through the presentation of three NLP tools integrated within a CALL software for French. These tools are (i) a sentence structure viewer; (ii) an error diagnosis system; and (iii) a conjugation tool. The sentence structure viewer helps language learners grasp the structure of a sentence, by providing lexical and grammatical information. This information is derived from a deep syntactic analysis. Two different outputs are presented. The error diagnosis system is composed of a spell checker, a grammar checker, and a coherence checker. The spell checker makes use of alpha-codes, phonological reinterpretation, and some ad hoc rules to provide correction proposals. The grammar checker employs constraint relaxation and phonological reinterpretation as diagnosis techniques. The coherence checker compares the underlying "semantic" structures of a stored answer and of the learners' input to detect semantic discrepancies. The conjugation tool is a resource with enhanced capabilities when put on an electronic format, enabling searches from inflected and ambiguous verb forms.

  1. Representing Information in Patient Reports Using Natural Language Processing and the Extensible Markup Language

    Science.gov (United States)

    Friedman, Carol; Hripcsak, George; Shagina, Lyuda; Liu, Hongfang

    1999-01-01

    Objective: To design a document model that provides reliable and efficient access to clinical information in patient reports for a broad range of clinical applications, and to implement an automated method using natural language processing that maps textual reports to a form consistent with the model. Methods: A document model that encodes structured clinical information in patient reports while retaining the original contents was designed using the extensible markup language (XML), and a document type definition (DTD) was created. An existing natural language processor (NLP) was modified to generate output consistent with the model. Two hundred reports were processed using the modified NLP system, and the XML output that was generated was validated using an XML validating parser. Results: The modified NLP system successfully processed all 200 reports. The output of one report was invalid, and 199 reports were valid XML forms consistent with the DTD. Conclusions: Natural language processing can be used to automatically create an enriched document that contains a structured component whose elements are linked to portions of the original textual report. This integrated document model provides a representation where documents containing specific information can be accurately and efficiently retrieved by querying the structured components. If manual review of the documents is desired, the salient information in the original reports can also be identified and highlighted. Using an XML model of tagging provides an additional benefit in that software tools that manipulate XML documents are readily available. PMID:9925230
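
    A schematic sketch of such an XML output, with an invented tag set rather than the paper's actual DTD, showing a structured element linked by character offsets to a span of the retained report text:

      # Schematic XML document linking a structured finding to a span of the
      # original report text.  Tag names are invented; the paper defines its own DTD.
      import xml.etree.ElementTree as ET

      report_text = "Chest x-ray shows mild cardiomegaly. No pleural effusion."

      doc = ET.Element("report")
      ET.SubElement(doc, "text").text = report_text
      structured = ET.SubElement(doc, "structured")
      finding = ET.SubElement(structured, "finding",
                              code="cardiomegaly", certainty="moderate",
                              start="18", end="35")   # character offsets into <text>
      finding.text = report_text[18:35]               # "mild cardiomegaly"

      print(ET.tostring(doc, encoding="unicode"))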

  2. Natural language generation in health care.

    Science.gov (United States)

    Cawsey, A J; Webber, B L; Jones, R B

    1997-01-01

    Good communication is vital in health care, both among health care professionals, and between health care professionals and their patients. Well-written documents, describing and/or explaining the information in structured databases, may be easier to comprehend, more edifying, and even more convincing than the structured data, even when presented in tabular or graphic form. Documents may be automatically generated from structured data, using techniques from the field of natural language generation. These techniques are concerned with how the content, organization and language used in a document can be dynamically selected, depending on the audience and context. They have been used to generate health education materials, explanations and critiques in decision support systems, and medical reports and progress notes.

  3. The social impact of natural language processing

    DEFF Research Database (Denmark)

    Hovy, Dirk; Spruit, Shannon

    Research in natural language processing (NLP) used to be mostly performed on anonymous corpora, with the goal of enriching linguistic analysis. Authors were either largely unknown or public figures. As we increasingly use more data from social media, this situation has changed: users are now individually identifiable, and the outcome of NLP experiments and applications can have a direct effect on their lives. This change should spawn a debate about the ethical implications of NLP, but until now, the internal discourse in the field has not followed the technological development. This position paper...

  4. Natural language processing and advanced information management

    Science.gov (United States)

    Hoard, James E.

    1989-01-01

    Integrating diverse information sources and application software in a principled and general manner will require a very capable advanced information management (AIM) system. In particular, such a system will need a comprehensive addressing scheme to locate the material in its docuverse. It will also need a natural language processing (NLP) system of great sophistication. It seems that the NLP system must serve three functions. First, it provides a natural language interface (NLI) for the users. Second, it serves as the core component that understands and makes use of the real-world interpretations (RWIs) contained in the docuverse. Third, it enables the reasoning specialists (RSs) to arrive at conclusions that can be transformed into procedures that will satisfy the users' requests. The best candidate for an intelligent agent that can satisfactorily make use of RSs and transform documents (TDs) appears to be an object oriented data base (OODB). OODBs have, apparently, an inherent capacity to use the large numbers of RSs and TDs that will be required by an AIM system and an inherent capacity to use them in an effective way.

  5. An Overview of Computer-Based Natural Language Processing.

    Science.gov (United States)

    Gevarter, William B.

    Computer-based Natural Language Processing (NLP) is the key to enabling humans and their computer-based creations to interact with machines using natural languages (English, Japanese, German, etc.) rather than formal computer languages. NLP is a major research area in the fields of artificial intelligence and computational linguistics. Commercial…

  6. On the Relationship between a Computational Natural Logic and Natural Language

    DEFF Research Database (Denmark)

    Andreasen, Troels; Bulskov, Henrik; Nilsson, Jørgen Fischer

    2016-01-01

    This paper makes a case for adopting appropriate forms of natural logic as target language for computational reasoning with descriptive natural language. Natural logics are stylized fragments of natural language where reasoning can be conducted directly by natural reasoning rules reflecting intuitive reasoning in natural language. The approach taken in this paper is to extend natural logic stepwise with a view to covering successively larger parts of natural language. We envisage applications for computational querying and reasoning, in particular within the life-sciences...

  7. Understanding and representing natural language meaning

    Science.gov (United States)

    Waltz, D. L.; Maran, L. R.; Dorfman, M. H.; Dinitz, R.; Farwell, D.

    1982-12-01

    During this contract period the authors have: (1) continued investigation of events and actions by means of representation schemes called 'event shape diagrams'; (2) written a parsing program which selects appropriate word and sentence meanings by a parallel process known as activation and inhibition; (3) begun investigation of the point of a story or event by modeling the motivations and emotional behaviors of story characters; (4) started work on combining and translating two machine-readable dictionaries into a lexicon and knowledge base which will form an integral part of our natural language understanding programs; (5) made substantial progress toward a general model for the representation of cognitive relations by comparing English scene and event descriptions with similar descriptions in other languages; (6) constructed a general model for the representation of tense and aspect of verbs; (7) made progress toward the design of an integrated robotics system which accepts English requests, and uses visual and tactile inputs in making decisions and learning new tasks.

  8. Mathematical Formula Search using Natural Language Queries

    Directory of Open Access Journals (Sweden)

    YANG, S.

    2014-11-01

    Full Text Available This paper presents how to search for mathematical formulae written in MathML when plain words are given as a query. Since the proposed method allows natural language queries, as in traditional Information Retrieval, for mathematical formula search, users do not need to enter complicated math symbols or use a formula input tool. For this, formula data is converted into plain text, and features are extracted from the converted texts. In our experiments, we achieve an outstanding performance, an MRR of 0.659. In addition, we introduce how to utilize formula classification for formula search. By using class information, we finally achieve an improved performance, an MRR of 0.690.
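
    Mean Reciprocal Rank (MRR) is the average over queries of the reciprocal rank of the first correct result; a minimal computation with invented ranks:

      # Mean Reciprocal Rank: average of 1/rank of the first relevant formula.
      # Ranks are invented; 0 marks a query whose correct formula was not retrieved.
      first_correct_rank = [1, 2, 1, 4, 0]

      mrr = sum(1.0 / r for r in first_correct_rank if r > 0) / len(first_correct_rank)
      print(f"MRR = {mrr:.3f}")   # (1 + 0.5 + 1 + 0.25 + 0) / 5 = 0.550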

  9. The social impact of natural language processing

    DEFF Research Database (Denmark)

    Hovy, Dirk; Spruit, Shannon

    Research in natural language processing (NLP) used to be mostly performed on anonymous corpora, with the goal of enriching linguistic analysis. Authors were either largely unknown or public figures. As we increasingly use more data from social media, this situation has changed: users are now individually identifiable, and the outcome of NLP experiments and applications can have a direct effect on their lives. This change should spawn a debate about the ethical implications of NLP, but until now, the internal discourse in the field has not followed the technological development. This position paper identifies a number of social implications that NLP research may have, and discusses their ethical significance, as well as ways to address them.

  10. Quantum Algorithms for Compositional Natural Language Processing

    Directory of Open Access Journals (Sweden)

    William Zeng

    2016-08-01

    Full Text Available We propose a new application of quantum computing to the field of natural language processing. Ongoing work in this field attempts to incorporate grammatical structure into algorithms that compute meaning. In Coecke, Sadrzadeh and Clark (2010), the authors introduce such a model (the CSC model) based on tensor product composition. While this algorithm has many advantages, its implementation is hampered by the large classical computational resources that it requires. In this work we show how computational shortcomings of the CSC approach could be resolved using quantum computation (possibly in addition to existing techniques for dimension reduction). We address the value of quantum RAM (Giovannetti, 2008) for this model and extend an algorithm from Wiebe, Braun and Lloyd (2012) into a quantum algorithm to categorize sentences in CSC. Our new algorithm demonstrates a quadratic speedup over classical methods under certain conditions.
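
    In the CSC model a transitive verb is represented as an order-3 tensor contracted with the subject and object vectors; the classical cost of storing and contracting such tensors is what the proposed quantum algorithms target. A small classical sketch with random stand-in vectors:

      # Tensor-product (CSC-style) composition of a transitive sentence: the verb
      # is an order-3 tensor contracted with subject and object noun vectors.
      # Random vectors stand in for distributional word meanings.
      import numpy as np

      d = 4                                  # toy dimensionality of the noun space
      rng = np.random.default_rng(0)
      subject = rng.normal(size=d)           # e.g. "dogs"
      obj = rng.normal(size=d)               # e.g. "cats"
      verb = rng.normal(size=(d, d, d))      # e.g. "chase", living in N (x) S (x) N

      # sentence[s] = sum over i, k of verb[i, s, k] * subject[i] * obj[k]
      sentence = np.einsum("isk,i,k->s", verb, subject, obj)
      print(sentence)                        # a vector in the sentence space

    Even in this toy setting the verb tensor has d**3 entries, which is the kind of classical blow-up the quantum proposal aims to sidestep.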

  11. A Tableau Prover for Natural Logic and Language

    NARCIS (Netherlands)

    Abzianidze, Lasha

    2015-01-01

    Modeling the entailment relation over sentences is one of the generic problems of natural language understanding. In order to account for this problem, we design a theorem prover for Natural Logic, a logic whose terms resemble natural language expressions. The prover is based on an analytic tableau

  12. Capturing and Modeling Domain Knowledge Using Natural Language Processing Techniques

    National Research Council Canada - National Science Library

    Auger, Alain

    2005-01-01

    .... Initiated in 2004 at Defense Research and Development Canada (DRDC), the SACOT knowledge engineering research project is currently investigating, developing and validating innovative natural language processing (NLP...

  13. Temporal reliability and lateralization of the resting-state language network.

    Science.gov (United States)

    Zhu, Linlin; Fan, Yang; Zou, Qihong; Wang, Jue; Gao, Jia-Hong; Niu, Zhendong

    2014-01-01

    The neural processing loop of language is complex but highly associated with Broca's and Wernicke's areas. The left dominance of these two areas was the earliest observation of brain asymmetry. It has been demonstrated that the language network and its functional asymmetry during resting state are reproducible across institutions. However, the temporal reliability of the resting-state language network and its functional asymmetry are still poorly characterized. In this study, we established a seed-based resting-state functional connectivity analysis of the language network with seed regions located at Broca's and Wernicke's areas, and investigated the temporal reliability of the language network and its functional asymmetry. The language network was found to be temporally reliable in both the short and long term. With respect to functional asymmetry, Broca's area was found to be left lateralized, while Wernicke's area is mainly right lateralized. Functional asymmetry of these two areas showed high short- and long-term reliability as well. In addition, the impact of global signal regression (GSR) on the reliability of the resting-state language network was investigated, and our results demonstrated that GSR had a negligible effect on the temporal reliability of the resting-state language network. Our study provides a methodological basis for future cross-cultural and clinical research on the resting-state language network and suggests prioritizing seed-based functional connectivity for its high reliability.
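
    A minimal sketch of the seed-based analysis on synthetic data follows; the seed definition, hemisphere split, and laterality index are toy stand-ins for the actual fMRI preprocessing and atlas-based region definitions.

      # Minimal seed-based functional connectivity on synthetic data: correlate a
      # seed time series with every voxel and compute a simple laterality index.
      import numpy as np

      rng = np.random.default_rng(0)
      n_timepoints, n_voxels = 200, 1000
      voxels = rng.normal(size=(n_timepoints, n_voxels))   # stand-in BOLD signals
      seed = voxels[:, :50].mean(axis=1)                    # hypothetical Broca seed

      # Pearson correlation of the seed with each voxel time series.
      z_voxels = (voxels - voxels.mean(0)) / voxels.std(0)
      z_seed = (seed - seed.mean()) / seed.std()
      connectivity = z_voxels.T @ z_seed / n_timepoints     # one r value per voxel

      left, right = connectivity[:500], connectivity[500:]  # toy hemisphere split
      li = (abs(left).sum() - abs(right).sum()) / (abs(left).sum() + abs(right).sum())
      print(f"laterality index = {li:+.3f}")                # > 0 means left-lateralized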

  14. Temporal reliability and lateralization of the resting-state language network.

    Directory of Open Access Journals (Sweden)

    Linlin Zhu

    Full Text Available The neural processing loop of language is complex but highly associated with Broca's and Wernicke's areas. The left dominance of these two areas was the earliest observation of brain asymmetry. It has been demonstrated that the language network and its functional asymmetry during resting state are reproducible across institutions. However, the temporal reliability of the resting-state language network and its functional asymmetry are still poorly characterized. In this study, we established a seed-based resting-state functional connectivity analysis of the language network with seed regions located at Broca's and Wernicke's areas, and investigated the temporal reliability of the language network and its functional asymmetry. The language network was found to be temporally reliable in both the short and long term. With respect to functional asymmetry, Broca's area was found to be left lateralized, while Wernicke's area is mainly right lateralized. Functional asymmetry of these two areas showed high short- and long-term reliability as well. In addition, the impact of global signal regression (GSR) on the reliability of the resting-state language network was investigated, and our results demonstrated that GSR had a negligible effect on the temporal reliability of the resting-state language network. Our study provides a methodological basis for future cross-cultural and clinical research on the resting-state language network and suggests prioritizing seed-based functional connectivity for its high reliability.

  15. Temporal Reliability and Lateralization of the Resting-State Language Network

    Science.gov (United States)

    Zou, Qihong; Wang, Jue; Gao, Jia-Hong; Niu, Zhendong

    2014-01-01

    The neural processing loop of language is complex but highly associated with Broca's and Wernicke's areas. The left dominance of these two areas was the earliest observation of brain asymmetry. It has been demonstrated that the language network and its functional asymmetry during resting state are reproducible across institutions. However, the temporal reliability of the resting-state language network and its functional asymmetry are still poorly characterized. In this study, we established a seed-based resting-state functional connectivity analysis of the language network with seed regions located at Broca's and Wernicke's areas, and investigated the temporal reliability of the language network and its functional asymmetry. The language network was found to be temporally reliable in both the short and long term. With respect to functional asymmetry, Broca's area was found to be left lateralized, while Wernicke's area is mainly right lateralized. Functional asymmetry of these two areas showed high short- and long-term reliability as well. In addition, the impact of global signal regression (GSR) on the reliability of the resting-state language network was investigated, and our results demonstrated that GSR had a negligible effect on the temporal reliability of the resting-state language network. Our study provides a methodological basis for future cross-cultural and clinical research on the resting-state language network and suggests prioritizing seed-based functional connectivity for its high reliability. PMID:24475058

  16. Reliability and competitiveness of Canadian natural gas supply - discussion paper

    International Nuclear Information System (INIS)

    Anon.

    1995-01-01

    A summary of market evolution for the Canadian natural gas industry was provided. Canada's undisputed position as an important supplier of natural gas to domestic and United States consumers was reaffirmed. The industry has a marketable potential of 582 trillion cubic feet of conventional natural gas, of which 254 trillion cubic feet is found in the Western Canada Sedimentary Basin. The roles of the Free Trade Agreement of 1988 and the recent deregulation of the Canadian natural gas industry in allowing the gas market to evolve into a competitive, continental market were noted. The end result for consumers is a choice of suppliers, competitive prices, reliability and confidence. 7 refs., 2 tabs., 8 figs

  17. Natural language solution to a Tuff problem

    International Nuclear Information System (INIS)

    Langkopf, B.S.; Mallory, L.H.

    1984-01-01

    A scientific data base, the Tuff Data Base, is being created at Sandia National Laboratories on the Cyber 170/855, using System 2000. It is being developed for use by scientists and engineers investigating the feasibility of locating a high-level radioactive waste repository in tuff (a type of volcanic rock) at Yucca Mountain on and adjacent to the Nevada Test Site. This project, the Nevada Nuclear Waste Storage Investigations (NNWSI) Project, is managed by the Nevada Operations Office of the US Department of Energy. A user-friendly interface, PRIMER, was developed that uses the Self-Contained Facility (SCF) command SUBMIT and System 2000 Natural Language functions and parametric strings that are schema resident. The interface was designed to: (1) allow users, with or without computer experience or keyboard skill, to sporadically access data in the Tuff Data Base; (2) produce retrieval capabilities for the user quickly; and (3) acquaint the users with the data in the Tuff Data Base. This paper gives a brief description of the Tuff Data Base Schema and the interface, PRIMER, which is written in Fortran V. 3 figures

  18. Policy-Based Management Natural Language Parser

    Science.gov (United States)

    James, Mark

    2009-01-01

    The Policy-Based Management Natural Language Parser (PBEM) is a rules-based approach to enterprise management that can be used to automate certain management tasks. This parser simplifies the management of a given endeavor by establishing policies to deal with situations that are likely to occur. Policies are operating rules that can be referred to as a means of maintaining order, security, consistency, or other ways of successfully furthering a goal or mission. PBEM provides a way of managing the configuration of network elements, applications, and processes via a set of high-level rules or business policies rather than managing individual elements, thus switching control to a higher level. This software allows unique management rules (or commands) to be specified and applied to a cross-section of the Global Information Grid (GIG). This software embodies a parser that is capable of recognizing and understanding conversational English. Because all possible dialect variants cannot be anticipated, a unique capability was developed that parses based on conversational intent rather than the exact way the words are used. This software can increase productivity by enabling a user to converse with the system in conversational English to define network policies. PBEM can be used in both manned and unmanned science-gathering programs. Because policy statements can be domain-independent, this software can be applied equally to a wide variety of applications.

  19. Natural language metaphors covertly influence reasoning.

    Directory of Open Access Journals (Sweden)

    Paul H Thibodeau

    Full Text Available Metaphors pervade discussions of social issues like climate change, the economy, and crime. We ask how natural language metaphors shape the way people reason about such social issues. In previous work, we showed that describing crime metaphorically as a beast or a virus led people to generate different solutions to a city's crime problem. In the current series of studies, instead of asking people to generate a solution on their own, we provided them with a selection of possible solutions and asked them to choose the best ones. We found that metaphors influenced people's reasoning even when they had a set of options available to compare and select among. These findings suggest that metaphors can influence not just what solution comes to mind first, but also which solution people think is best, even when given the opportunity to explicitly compare alternatives. Further, we tested whether participants were aware of the metaphor. We found that very few participants thought the metaphor played an important part in their decision. Further, participants who had no explicit memory of the metaphor were just as much affected by the metaphor as participants who were able to remember the metaphorical frame. These findings suggest that metaphors can act covertly in reasoning. Finally, we examined the role of political affiliation on reasoning about crime. The results confirm our previous findings that Republicans are more likely to generate enforcement and punishment solutions for dealing with crime, and are less swayed by metaphor than are Democrats or Independents.

  20. Cognitive Neuroscience of Natural Language Use

    NARCIS (Netherlands)

    Willems, R.M.

    2015-01-01

    When we think of everyday language use, the first things that come to mind include colloquial conversations, reading and writing e-mails, sending text messages or reading a book. But can we study the brain basis of language as we use it in our daily lives? As a topic of study, the cognitive

  1. Bibliography of Research in Natural Language Generation

    Science.gov (United States)

    1993-11-01

    Fragments of bibliography entries, including: International Conference of the IEEE Engineering in Medicine and Biology Society, volume 3, pages 1347-1348, New Orleans, LA; [1218] Ingrid Zukerman, "Koalas are not bears: Gener...", Conference on Machine Translation of Languages and Applied Language Analysis, pages 66-80.

  2. Do neural nets learn statistical laws behind natural language?

    Directory of Open Access Journals (Sweden)

    Shuntaro Takahashi

    The performance of deep learning in natural language processing has been spectacular, but the reasons for this success remain unclear because of the inherent complexity of deep learning. This paper provides empirical evidence of its effectiveness and of a limitation of neural networks for language engineering. Precisely, we demonstrate that a neural language model based on long short-term memory (LSTM) effectively reproduces Zipf's law and Heaps' law, two representative statistical properties underlying natural language. We discuss the quality of reproducibility and the emergence of Zipf's law and Heaps' law as training progresses. We also point out that the neural language model has a limitation in reproducing long-range correlation, another statistical property of natural language. This understanding could provide a direction for improving the architectures of neural networks.
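
    The two statistical laws named above can be checked on any token stream, including text sampled from a trained language model. The sketch below is generic (not the authors' code) and uses a toy token list as a stand-in for generated text.

        from collections import Counter

        def zipf_table(tokens, top=10):
            """Rank-frequency pairs; Zipf's law predicts frequency roughly proportional to 1/rank."""
            return [(rank, word, count)
                    for rank, (word, count) in enumerate(Counter(tokens).most_common(top), 1)]

        def heaps_curve(tokens, step=5):
            """Vocabulary size versus text length; Heaps' law predicts V ~ k * n**beta."""
            seen, curve = set(), []
            for i, tok in enumerate(tokens, 1):
                seen.add(tok)
                if i % step == 0:
                    curve.append((i, len(seen)))
            return curve

        # toy token stream standing in for text sampled from a language model
        tokens = ("the cat sat on the mat and the dog sat on the cat "
                  "while the mat was on the floor near the dog").split()
        print(zipf_table(tokens, top=5))
        print(heaps_curve(tokens))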

  3. Generating and Executing Complex Natural Language Queries across Linked Data.

    Science.gov (United States)

    Hamon, Thierry; Mougin, Fleur; Grabar, Natalia

    2015-01-01

    With the recent and intensive research in the biomedical area, the knowledge accumulated is disseminated through various knowledge bases. Links between these knowledge bases are needed in order to use them jointly. Linked Data, the SPARQL language, and interfaces in Natural Language question-answering provide interesting solutions for querying such knowledge bases. We propose a method for translating natural language questions into SPARQL queries. We use Natural Language Processing tools, semantic resources, and the RDF triples description. The method is designed on 50 questions over 3 biomedical knowledge bases, and evaluated on 27 questions. It achieves 0.78 F-measure on the test set. The method for translating natural language questions into SPARQL queries is implemented as a Perl module available at http://search.cpan.org/thhamon/RDF-NLP-SPARQLQuery.
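
    For readers unfamiliar with the target representation, the sketch below shows the general idea of mapping one question shape to a SPARQL query with a template. It is illustrative only, far simpler than the RDF-NLP-SPARQLQuery module described above, and the ex: vocabulary is invented.

        import re

        def question_to_sparql(question: str) -> str:
            """Handle questions of the form 'Which drugs treat <disease>?'."""
            m = re.match(r"which drugs treat (.+)\?", question.strip().lower())
            if not m:
                raise ValueError("unsupported question shape")
            disease = m.group(1)
            return (
                'PREFIX ex: <http://example.org/biomed#>\n'   # hypothetical vocabulary
                'SELECT ?drug WHERE {\n'
                '  ?drug ex:treats ?disease .\n'
                f'  ?disease ex:label "{disease}" .\n'
                '}'
            )

        print(question_to_sparql("Which drugs treat asthma?"))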

  4. Natural language computing an English generative grammar in Prolog

    CERN Document Server

    Dougherty, Ray C

    2013-01-01

    This book's main goal is to show readers how to use the linguistic theory of Noam Chomsky, called Universal Grammar, to represent English, French, and German on a computer using the Prolog computer language. In so doing, it presents a follow-the-dots approach to natural language processing, linguistic theory, artificial intelligence, and expert systems. The basic idea is to introduce meaningful answers to significant problems involved in representing human language data on a computer. The book offers a hands-on approach to anyone who wishes to gain a perspective on natural language

  5. The reliability of language performance measurement in language sample analysis of children aged 5-6 years

    Directory of Open Access Journals (Sweden)

    Zahra Soleymani

    2014-04-01

    Background and Aim: The language sample analysis (LSA) is more common in other languages than in Persian for studying language development and assessing language pathology. We studied some psychometric properties of language sample analysis in this research, such as the content validity of the written story and its pictures, test-retest reliability, and inter-rater reliability. Methods: We wrote a story based on Persian culture, adapted from Schneider's study. The validity of the written story and the drawn pictures was approved by experts. To study test-retest reliability, 30 children looked at the pictures and told their own story twice with a 7-10 day interval. Children generated the story themselves and the tester did not give any cue about the story. Their audio-taped stories were transcribed and analyzed. Sentence and word structures were detected in the analysis. Results: Mean of experts' agreement with the validity of the written story was 92.28 percent. Experts scored the quality of the pictures as high and excellent. There was correlation between variables in sentence and word structure (p<0.05) in test-retest, except for complex sentences (p=0.137). The agreement rate was 97.1 percent in the inter-rater reliability assessment of transcription. The results of inter-rater reliability of language analysis showed that correlation coefficients were significant. Conclusion: The results confirmed that the tool was valid for eliciting language samples. The consistency of language performance in repeated measurement varied from mild to high in the language sample analysis approach.

  6. Concepts and implementations of natural language query systems

    Science.gov (United States)

    Dominick, Wayne D. (Editor); Liu, I-Hsiung

    1984-01-01

    The currently developed user language interfaces of information systems are generally intended for serious users. These interfaces commonly ignore potentially the largest user group, i.e., casual users. This project discusses the concepts and implementations of a natural query language system which satisfies the nature and information needs of casual users by allowing them to communicate with the system in the form of their native (natural) language. In addition, a framework for the development of such an interface is also introduced for the MADAM (Multics Approach to Data Access and Management) system at the University of Southwestern Louisiana.

  7. UNLization of Punjabi text for natural language processing ...

    Indian Academy of Sciences (India)

    Vaibhav Agarwal

    2018-05-26

    ...resent, and store information in a natural-language-independent format [8]. UNL is ... account semantic information available in words of the problem ... Sentiment Analysis (SA) plays a vital role in the decision-making process.

  8. Study on evaluation of construction reliability for engineering project based on fuzzy language operator

    Science.gov (United States)

    Shi, Yu-Fang; Ma, Yi-Yi; Song, Ping-Ping

    2018-03-01

    System Reliability Theory is a research hotspot of management science and system engineering in recent years, and construction reliability is useful for the quantitative evaluation of project management level. According to reliability theory and the target system of engineering project management, the definition of construction reliability is given. Based on fuzzy mathematics theory and language operators, the value space of construction reliability is divided into seven fuzzy subsets and, correspondingly, seven membership functions and fuzzy evaluation intervals are obtained through the operation of language operators, which provides the corresponding method and parameters for the evaluation of construction reliability. This method is shown to be scientific and reasonable for construction conditions and a useful attempt at theory and method research on engineering project system reliability.
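
    The following sketch shows how a value space of [0, 1] can be divided into seven fuzzy subsets with triangular membership functions and linguistic labels. The centres, widths, and labels are assumed for illustration; they are not the parameters derived in the study above.

        LABELS = ["very low", "low", "somewhat low", "medium",
                  "somewhat high", "high", "very high"]
        CENTRES = [i / 6 for i in range(7)]     # evenly spaced over [0, 1]
        WIDTH = 1 / 6

        def triangular(x, centre, width=WIDTH):
            """Triangular membership function peaking at the centre."""
            return max(0.0, 1.0 - abs(x - centre) / width)

        def linguistic_evaluation(reliability):
            """Membership degree of a crisp reliability value in each fuzzy subset."""
            return {label: round(triangular(reliability, c), 3)
                    for label, c in zip(LABELS, CENTRES)}

        print(linguistic_evaluation(0.72))   # mostly 'somewhat high', partly 'high'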

  9. Finite-State Methodology in Natural Language Processing

    Directory of Open Access Journals (Sweden)

    Michal Korzycki

    2001-01-01

    Recent mathematical and algorithmic results in the field of finite-state technology, as well as the increase in computing power, have formed the basis for a new approach in natural language processing. However, the task of creating an appropriate model that would describe the phenomena of natural language is still to be achieved. In this paper I present some notions related to the finite-state modelling of syntax and morphology.
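
    As a toy illustration of finite-state modelling of morphology (not taken from the paper), the automaton below accepts two stems with a small set of suffixes; it is built as a character trie whose accepting states mark well-formed word forms.

        STEMS = {"walk", "talk"}
        SUFFIXES = {"", "s", "ed", "ing"}

        def build_fsa():
            """Trie-shaped DFA: states are dict nodes; 'final' marks accepting states."""
            root = {}
            for stem in STEMS:
                for suffix in SUFFIXES:
                    node = root
                    for ch in stem + suffix:
                        node = node.setdefault(ch, {})
                    node["final"] = True
            return root

        def accepts(fsa, word):
            node = fsa
            for ch in word:
                if ch not in node:
                    return False
                node = node[ch]
            return node.get("final", False)

        FSA = build_fsa()
        print([w for w in ["walks", "talking", "walkinged"] if accepts(FSA, w)])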

  10. The Islamic State Battle Plan: Press Release Natural Language Processing

    Science.gov (United States)

    2016-06-01

    ... approaches, we apply Natural Language Processing (NLP) tools to a unique database of text documents collected by Whiteside (2014). His collection ... from Arabic to English. Compared to other terrorism databases, Whiteside's collection methodology limits the scope of the database and avoids coding ...

  11. The Arabic Natural Language Processing: Introduction and Challenges

    Directory of Open Access Journals (Sweden)

    Boukhatem Nadera

    2014-09-01

    Arabic is a Semitic language spoken by more than 330 million people as a native language, in an area extending from the Arabian/Persian Gulf in the East to the Atlantic Ocean in the West. Moreover, it is the language in which 1.4 billion Muslims around the world perform their daily prayers. Over the last few years, Arabic natural language processing (ANLP) has gained increasing importance, and several state-of-the-art systems have been developed for a wide range of applications.

  12. Validity, Reliability and Standardization Study of the Language Assessment Test for Aphasia

    Directory of Open Access Journals (Sweden)

    Bülent Toğram

    2012-09-01

    OBJECTIVE: Aphasia assessment is the first step towards a well-founded language therapy. Language tests need to consider cultural as well as typological linguistic aspects of a given language. This study was designed to determine the standardization, validity and reliability of the Language Assessment Test for Aphasia, which consists of eight subtests including spontaneous speech and language, auditory comprehension, repetition, naming, reading, grammar, speech acts, and writing. METHODS: The test was administered to 282 healthy participants and 92 aphasic participants in age-, education- and gender-matched groups. The validity of the test was investigated with analysis of content, structure and criterion-related validity. For the reliability of the test, analysis of internal consistency, stability and equivalence reliability was conducted. The influence of variables on healthy participants' sub-test scores, test score and language score was examined. According to the significant differences, norms and cut-off scores based on the language score were determined. RESULTS: The group with aphasia scored markedly lower than healthy participants on subtest, test and language scores. The test scores of the healthy group were mostly affected by age and educational level but not affected by gender. According to the significant differences, age and educational levels for both groups were determined. Considering age and educational levels, the reference values for the cut-off scores were presented. CONCLUSION: The test was found to be a highly reliable and valid aphasia test for Turkish-speaking aphasic patients either in Turkey or in other Turkish communities around the world.

  13. Natural language processing in psychiatry. Artificial intelligence technology and psychopathology.

    Science.gov (United States)

    Garfield, D A; Rapp, C; Evens, M

    1992-04-01

    The potential benefit of artificial intelligence (AI) technology as a tool of psychiatry has not been well defined. In this essay, the technology of natural language processing and its position with regard to the two main schools of AI is clearly outlined. Past experiments utilizing AI techniques in understanding psychopathology are reviewed. Natural language processing can automate the analysis of transcripts and can be used in modeling theories of language comprehension. In these ways, it can serve as a tool in testing psychological theories of psychopathology and can be used as an effective tool in empirical research on verbal behavior in psychopathology.

  14. Validity and reliability of Preschool Language Scale 4 for measuring language development in children 48-59 months of age

    Directory of Open Access Journals (Sweden)

    Nuryani Sidarta

    2016-04-01

    Prevalence rates for speech and language delay have been reported across wide ranges. Speech and language delay affects 5% to 8% of preschool children, often persisting into the school years. A cross-sectional study was conducted in 208 children aged 48-59 months to determine the validity and reliability of the Indonesian edition of the Preschool Language Scale version 4 (PLS-4) as a screening tool for the identification of language development disorders. Construct validity was examined by using the Pearson correlation coefficient. Internal consistency was tested and repeated measurements were taken to establish the stability coefficient and intraclass correlation coefficients (ICC) for test-retest reliability. For construct validity, the Pearson correlation coefficient ranged from 0.151-0.526, indicating that all questions in this instrument were valid for measuring auditory comprehension (AC) and expressive communication (EC) skills. Cronbach's alpha ranged from 0.81-0.95 with standard error of measurement (SEM) ranging from 3.1-3.3. Stability coefficients ranged from 0.98-0.99 with ICC coefficients ranging from 0.97-0.99, both of which showed excellent reliability. This study found that the PLS-4 is a valid and reliable instrument. It is easy to handle and can be recommended for assessing language development in children aged 48-59 months.

  15. Naturalizing language: human appraisal and (quasi) technology

    DEFF Research Database (Denmark)

    Cowley, Stephen

    2013-01-01

    Using contemporary science, the paper builds on Wittgenstein's views of human language. Rather than ascribing reality to inscription-like entities, it links embodiment with distributed cognition. The verbal or (quasi) technological aspect of language is traced not to action, but to human-specific interactivity. This species-specific form of sense-making sustains, among other things, using texts, making/construing phonetic gestures and thinking. Human action is thus grounded in appraisals or sense-saturated coordination. To illustrate interactivity at work, the paper focuses on a case study. Over 11 s, a crime scene investigator infers that she is probably dealing with an inside job: she uses not words, but intelligent gaze. This connects professional expertise to circumstances and the feeling of thinking. It is suggested that, as for other species, human appraisal is based in synergies. However, since...

  16. AMSTERDAM-NIJMEGEN EVERYDAY LANGUAGE TEST - CONSTRUCTION, RELIABILITY AND VALIDITY

    NARCIS (Netherlands)

    BLOMERT, L; KEAN, ML; KOSTER, C; SCHOKKER, J

    1994-01-01

    The Amsterdam-Nijmegen Everyday Language Test (ANELT) is designed to measure, first, the level of verbal communicative abilities of aphasic patients and, second, changes in these abilities over time. The level of communicative effectiveness is determined by the adequacy of bringing a message across.

  17. The Language Teaching Methods Scale: Reliability and Validity Studies

    Science.gov (United States)

    Okmen, Burcu; Kilic, Abdurrahman

    2016-01-01

    The aim of this research is to develop a scale to determine the language teaching methods used by English teachers. The research sample consisted of 300 English teachers who taught at Duzce University and in primary schools, secondary schools and high schools in the Provincial Management of National Education in the city of Duzce in 2013-2014…

  18. An overview of computer-based natural language processing

    Science.gov (United States)

    Gevarter, W. B.

    1983-01-01

    Computer-based Natural Language Processing (NLP) is the key to enabling humans and their computer-based creations to interact with machines in natural language (like English, Japanese, German, etc., in contrast to formal computer languages). The doors that such an achievement can open have made this a major research area in Artificial Intelligence and Computational Linguistics. Commercial natural language interfaces to computers have recently entered the market, and the future looks bright for other applications as well. This report reviews the basic approaches to such systems, the techniques utilized, applications, the state of the art of the technology, issues and research requirements, the major participants and, finally, future trends and expectations. It is anticipated that this report will prove useful to engineering and research managers, potential users, and others who will be affected by this field as it unfolds.

  19. A Meta-Analysis of Reliability Coefficients in Second Language Research

    Science.gov (United States)

    Plonsky, Luke; Derrick, Deirdre J.

    2016-01-01

    Ensuring internal validity in quantitative research requires, among other conditions, reliable instrumentation. Unfortunately, however, second language (L2) researchers often fail to report and even more often fail to interpret reliability estimates beyond generic benchmarks for acceptability. As a means to guide interpretations of such estimates,…

  20. Handbook of natural language processing and machine translation DARPA global autonomous language exploitation

    CERN Document Server

    Olive, Joseph P; McCary, John

    2011-01-01

    This comprehensive handbook, written by leading experts in the field, details the groundbreaking research conducted under the breakthrough GALE program - The Global Autonomous Language Exploitation within the Defense Advanced Research Projects Agency (DARPA), while placing it in the context of previous research in the fields of natural language and signal processing, artificial intelligence and machine translation. The most fundamental contrast between GALE and its predecessor programs was its holistic integration of previously separate or sequential processes. In earlier language research pro

  1. Statistical Language Models and Information Retrieval: Natural Language Processing Really Meets Retrieval

    NARCIS (Netherlands)

    Hiemstra, Djoerd; de Jong, Franciska M.G.

    2001-01-01

    Traditionally, natural language processing techniques for information retrieval have always been studied outside the framework of formal models of information retrieval. In this article, we introduce a new formal model of information retrieval based on the application of statistical language models.

  2. ROPE: Recoverable Order-Preserving Embedding of Natural Language

    Energy Technology Data Exchange (ETDEWEB)

    Widemann, David P. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Wang, Eric X. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Thiagarajan, Jayaraman J. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2016-02-11

    We present a novel Recoverable Order-Preserving Embedding (ROPE) of natural language. ROPE maps natural language passages from sparse concatenated one-hot representations to distributed vector representations of predetermined fixed length. We use Euclidean distance to return search results that are both grammatically and semantically similar. ROPE is based on a series of random projections of distributed word embeddings. We show that our technique typically forms a dictionary with sufficient incoherence such that sparse recovery of the original text is possible. We then show how our embedding allows for efficient and meaningful natural search and retrieval on Microsoft’s COCO dataset and the IMDB Movie Review dataset.
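
    A much-simplified sketch of the idea: concatenate fixed word vectors for a passage, zero-pad to a fixed length, apply a random projection, and retrieve by Euclidean distance. The vocabulary, dimensions, and random embeddings are placeholders, not the authors' setup.

        import numpy as np

        rng = np.random.default_rng(0)
        VOCAB = ["the", "cat", "sat", "on", "mat", "dog", "ran"]
        WORD_DIM, PASSAGE_DIM, MAX_LEN = 16, 32, 8
        word_vecs = {w: rng.normal(size=WORD_DIM) for w in VOCAB}   # stand-in embeddings
        projection = rng.normal(size=(PASSAGE_DIM, MAX_LEN * WORD_DIM)) / np.sqrt(PASSAGE_DIM)

        def embed(passage):
            """Concatenate word vectors, zero-pad to MAX_LEN words, then randomly project."""
            vecs = [word_vecs[w] for w in passage.split()[:MAX_LEN]]
            flat = np.concatenate(vecs + [np.zeros(WORD_DIM)] * (MAX_LEN - len(vecs)))
            return projection @ flat

        corpus = ["the cat sat on the mat", "the dog ran"]
        index = np.stack([embed(p) for p in corpus])
        query = embed("the cat sat on mat")
        print(corpus[int(np.argmin(np.linalg.norm(index - query, axis=1)))])   # nearest passage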

  3. The Integration Hypothesis of Human Language Evolution and the Nature of Contemporary Languages

    Directory of Open Access Journals (Sweden)

    Shigeru Miyagawa

    2014-06-01

    How human language arose is a mystery in the evolution of Homo sapiens. Miyagawa, Berwick, & Okanoya (Frontiers, 2013) put forward a proposal, which we will call the Integration Hypothesis of human language evolution, which holds that human language is composed of two components: E for expressive, and L for lexical. Each component has an antecedent in nature: E as found, for example, in birdsong, and L in, for example, the alarm calls of monkeys. E and L integrated uniquely in humans to give rise to language. A challenge to the Integration Hypothesis is that while these non-human systems are finite-state in nature, human language is known to require characterization by a non-finite state grammar. Our claim is that E and L, taken separately, are finite-state; when a grammatical process crosses the boundary between E and L, it gives rise to the non-finite state character of human language. We provide empirical evidence for the Integration Hypothesis by showing that certain processes found in contemporary languages that have been characterized as non-finite state in nature can in fact be shown to be finite-state. We also speculate on how human language actually arose in evolution through the lens of the Integration Hypothesis.

  4. Clinical Natural Language Processing in languages other than English: opportunities and challenges.

    Science.gov (United States)

    Névéol, Aurélie; Dalianis, Hercules; Velupillai, Sumithra; Savova, Guergana; Zweigenbaum, Pierre

    2018-03-30

    Natural language processing applied to clinical text or aimed at a clinical outcome has been thriving in recent years. This paper offers the first broad overview of clinical Natural Language Processing (NLP) for languages other than English. Recent studies are summarized to offer insights and outline opportunities in this area. We envision three groups of intended readers: (1) NLP researchers leveraging experience gained in other languages, (2) NLP researchers faced with establishing clinical text processing in a language other than English, and (3) clinical informatics researchers and practitioners looking for resources in their languages in order to apply NLP techniques and tools to clinical practice and/or investigation. We review work in clinical NLP in languages other than English. We classify these studies into three groups: (i) studies describing the development of new NLP systems or components de novo, (ii) studies describing the adaptation of NLP architectures developed for English to another language, and (iii) studies focusing on a particular clinical application. We show the advantages and drawbacks of each method, and highlight the appropriate application context. Finally, we identify major challenges and opportunities that will affect the impact of NLP on clinical practice and public health studies in a context that encompasses English as well as other languages.

  5. Learning to Understand Natural Language with Less Human Effort

    Science.gov (United States)

    2015-05-01

    Distant supervision is a recent trend in information extraction. Distantly-supervised extractors are trained using a corpus of unlabeled text ... consists of fill-in-the-blank natural language questions such as “Incan emperor ” or “Cunningham directed Auchtre’s second music video .” These questions ... with an unknown knowledge base, simultaneously learning how to semantically parse language and populate the knowledge base. The weakly ...

  6. Artificial intelligence, expert systems, computer vision, and natural language processing

    Science.gov (United States)

    Gevarter, W. B.

    1984-01-01

    An overview of artificial intelligence (AI), its core ingredients, and its applications is presented. The knowledge representation, logic, problem solving approaches, languages, and computers pertaining to AI are examined, and the state of the art in AI is reviewed. The use of AI in expert systems, computer vision, natural language processing, speech recognition and understanding, speech synthesis, problem solving, and planning is examined. Basic AI topics, including automation, search-oriented problem solving, knowledge representation, and computational logic, are discussed.

  7. Natural-language processing applied to an ITS interface

    OpenAIRE

    Antonio Gisolfi; Enrico Fischetti

    1994-01-01

    The aim of this paper is to show that with a subset of a natural language, simple systems running on PCs can be developed that can nevertheless be an effective tool for interfacing purposes in the building of an Intelligent Tutoring System (ITS). After presenting the special characteristics of the Smalltalk/V language, which provides an appropriate environment for the development of an interface, the overall architecture of the interface module is discussed. We then show how sentences are par...

  8. Natural language processing and the Now-or-Never bottleneck.

    Science.gov (United States)

    Gómez-Rodríguez, Carlos

    2016-01-01

    Researchers, motivated by the need to improve the efficiency of natural language processing tools to handle web-scale data, have recently arrived at models that remarkably match the expected features of human language processing under the Now-or-Never bottleneck framework. This provides additional support for said framework and highlights the research potential in the interaction between applied computational linguistics and cognitive science.

  9. Potential for natural evaporation as a reliable renewable energy resource.

    Science.gov (United States)

    Cavusoglu, Ahmet-Hamdi; Chen, Xi; Gentine, Pierre; Sahin, Ozgur

    2017-09-26

    About 50% of the solar energy absorbed at the Earth's surface drives evaporation, fueling the water cycle that affects various renewable energy resources, such as wind and hydropower. Recent advances demonstrate our nascent ability to convert evaporation energy into work, yet there is little understanding about the potential of this resource. Here we study the energy available from natural evaporation to predict the potential of this ubiquitous resource. We find that natural evaporation from open water surfaces could provide power densities comparable to current wind and solar technologies while cutting evaporative water losses by nearly half. We estimate up to 325 GW of power is potentially available in the United States. Strikingly, water's large heat capacity is sufficient to control power output by storing excess energy when demand is low, thus reducing intermittency and improving reliability. Our findings motivate the improvement of materials and devices that convert energy from evaporation. The evaporation of water represents an alternative source of renewable energy. Building on previous models of evaporation, Cavusoglu et al. show that the power available from this natural resource is comparable to wind and solar power, yet it does not suffer as much from varying weather conditions.

  10. Learning to rank for information retrieval and natural language processing

    CERN Document Server

    Li, Hang

    2014-01-01

    Learning to rank refers to machine learning techniques for training a model in a ranking task. Learning to rank is useful for many applications in information retrieval, natural language processing, and data mining. Intensive studies have been conducted on its problems recently, and significant progress has been made. This lecture gives an introduction to the area including the fundamental problems, major approaches, theories, applications, and future work.The author begins by showing that various ranking problems in information retrieval and natural language processing can be formalized as tw

  11. Second Language Acquisition and The Development through Nature-Nurture

    Directory of Open Access Journals (Sweden)

    Syahfitri Purnama

    2017-10-01

    There are some factors regarding which aspects of second language acquisition are affected by individual learner factors: age, learning style, aptitude, motivation, and personality. This research is about the English language acquisition of a four-year-old child through nature and nurture. The child acquired her second language at home and also in one of the courses in Jakarta. She was schooled by her parents in order to be able to speak English well as a target language for her future. The purpose of this paper is to examine individual learner differences, especially in using English as a second language. This study is library research; the data were collected, recorded, transcribed, and analyzed descriptively. The results can be summarized as follows: the child is able to communicate well and to construct simple sentences, complex sentences, statements, and question phrases, and to explain something when her teacher asks her at school. She is able to communicate by making a simple sentence or a compound sentence in well-formed shape (two or three clauses), even though she does not yet consistently use the past tense form and sometimes forgets to put the bound morpheme -s on third person singular verbs, but she can use turn-taking in her utterances. It is a very long process for the child to acquire a second language. The family and teacher should participate and assist the child; as this case shows, a child can learn a first and a second language at the same time.

  12. Applications of Natural Language Processing in Biodiversity Science

    Directory of Open Access Journals (Sweden)

    Anne E. Thessen

    2012-01-01

    A computer can handle the volume but cannot make sense of the language. This paper reviews and discusses the use of natural language processing (NLP) and machine-learning algorithms to extract information from systematic literature. NLP algorithms have been used for decades, but require special development for application in the biological realm due to the special nature of the language. Many tools exist for biological information extraction (cellular processes, taxonomic names, and morphological characters), but none have been applied life-wide and most still require testing and development. Progress has been made in developing algorithms for automated annotation of taxonomic text, identification of taxonomic names in text, and extraction of morphological character information from taxonomic descriptions. This manuscript will briefly discuss the key steps in applying information extraction tools to enhance biodiversity science.

  13. Learning from a Computer Tutor with Natural Language Capabilities

    Science.gov (United States)

    Michael, Joel; Rovick, Allen; Glass, Michael; Zhou, Yujian; Evens, Martha

    2003-01-01

    CIRCSIM-Tutor is a computer tutor designed to carry out a natural language dialogue with a medical student. Its domain is the baroreceptor reflex, the part of the cardiovascular system that is responsible for maintaining a constant blood pressure. CIRCSIM-Tutor's interaction with students is modeled after the tutoring behavior of two experienced…

  14. CITE NLM: Natural-Language Searching in an Online Catalog.

    Science.gov (United States)

    Doszkocs, Tamas E.

    1983-01-01

    The National Library of Medicine's Current Information Transfer in English public access online catalog offers unique subject search capabilities--natural-language query input, automatic medical subject headings display, closest match search strategy, ranked document output, dynamic end user feedback for search refinement. References, description…

  15. Computing an Ontological Semantics for a Natural Language Fragment

    DEFF Research Database (Denmark)

    Szymczak, Bartlomiej Antoni

    tried to establish a domain independent “ontological semantics” for relevant fragments of natural language. The purpose of this research is to develop methods and systems for taking advantage of formal ontologies for the purpose of extracting the meaning contents of texts. This functionality...

  16. Orwell's 1984: Natural Language Searching and the Contemporary Metaphor.

    Science.gov (United States)

    Dadlez, Eva M.

    1984-01-01

    Describes a natural language searching strategy for retrieving current material which has bearing on George Orwell's "1984," and identifies four main themes (technology, authoritarianism, press and psychological/linguistic implications of surveillance, political oppression) which have emerged from cross-database searches of the "Big…

  17. Recurrent Artificial Neural Networks and Finite State Natural Language Processing.

    Science.gov (United States)

    Moisl, Hermann

    It is argued that pessimistic assessments of the adequacy of artificial neural networks (ANNs) for natural language processing (NLP) on the grounds that they have a finite state architecture are unjustified, and that their adequacy in this regard is an empirical issue. First, arguments that counter standard objections to finite state NLP on the…

  18. Paired structures in logical and semiotic models of natural language

    DEFF Research Database (Denmark)

    Rodríguez, J. Tinguaro; Franco, Camilo; Montero, Javier

    2014-01-01

    The evidence coming from cognitive psychology and linguistics shows that pairs of reference concepts (e.g. good/bad, tall/short, nice/ugly, etc.) play a crucial role in the way we use and understand natural languages every day in order to analyze reality and make decisions. Different situations...

  19. Ontology Based Queries - Investigating a Natural Language Interface

    NARCIS (Netherlands)

    van der Sluis, Ielka; Hielkema, F.; Mellish, C.; Doherty, G.

    2010-01-01

    In this paper we look at what may be learned from a comparative study examining non-technical users with a background in social science browsing and querying metadata. Four query tasks were carried out with a natural language interface and with an interface that uses a web paradigm with hyperlinks.

  20. A quick aphasia battery for efficient, reliable, and multidimensional assessment of language function.

    Science.gov (United States)

    Wilson, Stephen M; Eriksson, Dana K; Schneck, Sarah M; Lucanie, Jillian M

    2018-01-01

    This paper describes a quick aphasia battery (QAB) that aims to provide a reliable and multidimensional assessment of language function in about a quarter of an hour, bridging the gap between comprehensive batteries that are time-consuming to administer, and rapid screening instruments that provide limited detail regarding individual profiles of deficits. The QAB is made up of eight subtests, each comprising sets of items that probe different language domains, vary in difficulty, and are scored with a graded system to maximize the informativeness of each item. From the eight subtests, eight summary measures are derived, which constitute a multidimensional profile of language function, quantifying strengths and weaknesses across core language domains. The QAB was administered to 28 individuals with acute stroke and aphasia, 25 individuals with acute stroke but no aphasia, 16 individuals with chronic post-stroke aphasia, and 14 healthy controls. The patients with chronic post-stroke aphasia were tested 3 times each and scored independently by 2 raters to establish test-retest and inter-rater reliability. The Western Aphasia Battery (WAB) was also administered to these patients to assess concurrent validity. We found that all QAB summary measures were sensitive to aphasic deficits in the two groups with aphasia. All measures showed good or excellent test-retest reliability (overall summary measure: intraclass correlation coefficient (ICC) = 0.98), and excellent inter-rater reliability (overall summary measure: ICC = 0.99). Sensitivity and specificity for diagnosis of aphasia (relative to clinical impression) were 0.91 and 0.95 respectively. All QAB measures were highly correlated with corresponding WAB measures where available. Individual patients showed distinct profiles of spared and impaired function across different language domains. In sum, the QAB efficiently and reliably characterized individual profiles of language deficits.

  1. An adaptive semantic matching paradigm for reliable and valid language mapping in individuals with aphasia.

    Science.gov (United States)

    Wilson, Stephen M; Yen, Melodie; Eriksson, Dana K

    2018-04-17

    Research on neuroplasticity in recovery from aphasia depends on the ability to identify language areas of the brain in individuals with aphasia. However, tasks commonly used to engage language processing in people with aphasia, such as narrative comprehension and picture naming, are limited in terms of reliability (test-retest reproducibility) and validity (identification of language regions, and not other regions). On the other hand, paradigms such as semantic decision that are effective in identifying language regions in people without aphasia can be prohibitively challenging for people with aphasia. This paper describes a new semantic matching paradigm that uses an adaptive staircase procedure to present individuals with stimuli that are challenging yet within their competence, so that language processing can be fully engaged in people with and without language impairments. The feasibility, reliability and validity of the adaptive semantic matching paradigm were investigated in sixteen individuals with chronic post-stroke aphasia and fourteen neurologically normal participants, in comparison to narrative comprehension and picture naming paradigms. All participants succeeded in learning and performing the semantic paradigm. Test-retest reproducibility of the semantic paradigm in people with aphasia was good (Dice coefficient = 0.66), and was superior to the other two paradigms. The semantic paradigm revealed known features of typical language organization (lateralization; frontal and temporal regions) more consistently in neurologically normal individuals than the other two paradigms, constituting evidence for validity. In sum, the adaptive semantic matching paradigm is a feasible, reliable and valid method for mapping language regions in people with aphasia. © 2018 Wiley Periodicals, Inc.
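
    The Dice coefficient reported above for test-retest reproducibility can be computed from two thresholded activation maps as in the generic sketch below (not the authors' pipeline); the toy maps are invented.

        import numpy as np

        def dice(map1, map2):
            """Dice coefficient between two binarised (thresholded) activation maps."""
            a, b = map1.astype(bool), map2.astype(bool)
            denom = a.sum() + b.sum()
            return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

        # toy 'maps': 1 = voxel above threshold in that scanning session
        session1 = np.array([1, 1, 0, 1, 0, 0, 1])
        session2 = np.array([1, 0, 0, 1, 0, 1, 1])
        print(round(dice(session1, session2), 2))   # 0.75 for this toy example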

  2. The reliability of a severity rating scale to measure stuttering in an unfamiliar language.

    Science.gov (United States)

    Hoffman, Laura; Wilson, Linda; Copley, Anna; Hewat, Sally; Lim, Valerie

    2014-06-01

    With increasing multiculturalism, speech-language pathologists (SLPs) are likely to work with stuttering clients from linguistic backgrounds that differ from their own. No research to date has estimated SLPs' reliability when measuring severity of stuttering in an unfamiliar language. Therefore, this study was undertaken to estimate the reliability of SLPs' use of a 9-point severity rating (SR) scale, to measure severity of stuttering in a language that was different from their own. Twenty-six Australian SLPs rated 20 speech samples (10 Australian English [AE] and 10 Mandarin) of adults who stutter using a 9-point SR scale on two separate occasions. Judges showed poor agreement when using the scale to measure stuttering in Mandarin samples. Results also indicated that 50% of individual judges were unable to reliably measure the severity of stuttering in AE. The results highlight the need for (a) SLPs to develop intra- and inter-judge agreement when using the 9-point SR scale to measure severity of stuttering in their native language (in this case AE) and in unfamiliar languages; and (b) research into the development and evaluation of practice and/or training packages to assist SLPs to do so.

  3. Adaptation of Internet Addiction Scale in Azerbaijani Language: A Validity-Reliability and Prevalence Study

    Science.gov (United States)

    Kerimova, Melek; Gunuc, Selim

    2016-01-01

    The purpose of the present paper was to adapt Gunuc and Kayri's (2010) "Internet Addiction Scale," which has shown validity and reliability for various sampling groups, into the Azerbaijani language. Another objective of the study is to determine the prevalence of Internet addiction among Azerbaijani adolescents and youth, which…

  4. Developing Formal Correctness Properties from Natural Language Requirements

    Science.gov (United States)

    Nikora, Allen P.

    2006-01-01

    This viewgraph presentation reviews the rationale of the program to transform natural language specifications into formal notation. Specifically, it automates the generation of Linear Temporal Logic (LTL) correctness properties from natural language temporal specifications. There are several reasons for this approach: (1) model-based techniques are becoming more widely accepted; (2) analytical verification techniques (e.g., model checking, theorem proving) are significantly more effective at detecting types of specification design errors (e.g., race conditions, deadlock) than manual inspection; (3) many requirements are still written in natural language, which results in a high learning curve for specification languages and associated tools, while increased schedule and budget pressure on projects reduces training opportunities for engineers; and (4) formulation of correctness properties for system models can be a difficult problem. This has relevance to NASA in that it would simplify the development of formal correctness properties, lead to more widespread use of model-based specification and design techniques, assist in earlier identification of defects, and reduce residual defect content for space mission software systems. The presentation also discusses potential applications, accomplishments and/or technological transfer potential, and the next steps.
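
    A hypothetical, pattern-based sketch of the transformation idea follows; it maps two English requirement shapes to LTL formulas. Real tools of the kind described above are far more sophisticated, and the patterns below are invented for illustration.

        import re

        RULES = [
            (re.compile(r"after (.+), the system shall eventually (.+)", re.I),
             lambda m: f"G(({m.group(1)}) -> F({m.group(2)}))"),
            (re.compile(r"the system shall always (.+)", re.I),
             lambda m: f"G({m.group(1)})"),
        ]

        def to_ltl(requirement: str) -> str:
            """Return an LTL formula for the first matching requirement pattern."""
            text = requirement.strip().rstrip(".")
            for pattern, build in RULES:
                m = pattern.match(text)
                if m:
                    return build(m)
            raise ValueError("no matching requirement pattern")

        print(to_ltl("After the arm command is received, the system shall eventually fire the thruster."))
        print(to_ltl("The system shall always keep pressure below the limit."))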

  5. Natural language retrieval in nuclear safety information system

    International Nuclear Information System (INIS)

    Komata, Masaoki; Oosawa, Yasuo; Ujita, Hiroshi

    1983-01-01

    A natural language retrieval program NATLANG is developed to assist in the retrieval of information from event-and-cause descriptions in Licensee Event Reports (LER). The characteristics of NATLANG are (1) the use of base forms of words to retrieve related forms altered by the addition of prefixes or suffixes or changes in inflection, (2) direct access and short time retrieval with an alphabet pointer, (3) effective determination of the items and entries for a Hitachi event classification in a two step retrieval scheme, and (4) Japanese character output with the PL-1 language. NATLANG output reduces the effort needed to re-classify licensee events in the Hitachi event classification. (author)
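
    The base-form matching described in point (1) can be imitated with a crude suffix stripper, as in the sketch below. This is a rough, hypothetical illustration in Python, not the Fortran/PL-1 implementation of NATLANG, and the suffix list and sample records are invented.

        SUFFIXES = ("ation", "ing", "ed", "es", "e", "s")

        def base_form(word):
            """Very crude suffix stripping; a real system would use morphological analysis."""
            w = word.lower()
            for suf in SUFFIXES:
                if w.endswith(suf) and len(w) - len(suf) >= 3:
                    return w[: -len(suf)]
            return w

        records = {
            1: "operator isolated the leaking valve",
            2: "isolation of the valve delayed restart",
        }
        index = {}
        for rid, text in records.items():
            for word in text.split():
                index.setdefault(base_form(word), set()).add(rid)

        def retrieve(query_word):
            """Match on shared stems so that related inflected forms co-retrieve."""
            q = base_form(query_word)
            return sorted({rid for stem, rids in index.items()
                           if stem.startswith(q) or q.startswith(stem) for rid in rids})

        print(retrieve("isolate"))   # finds both records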

  6. Reliability and validity of a Swedish language version of the Resilience Scale.

    Science.gov (United States)

    Nygren, Björn; Randström, Kerstin Björkman; Lejonklou, Anna K; Lundman, Beril

    2004-01-01

    The purpose of this study was to test the reliability and validity of the Swedish language version of the Resilience Scale (RS). Participants were 142 adults between 19 and 85 years of age. Internal consistency reliability, stability over time, and construct validity were evaluated using Cronbach's alpha, principal components analysis with varimax rotation, and correlations with scores on the Sense of Coherence Scale (SOC) and the Rosenberg Self-Esteem Scale (RSE). The mean score on the RS was 142 (SD = 15). The possible scores on the RS range from 25 to 175, and scores higher than 146 are considered high. The test-retest correlation was .78. Correlations with the SOC and the RSE were .41 (p …). Self and Life emerged as components from the principal components analysis. These findings provide evidence for the reliability and validity of the Swedish language version of the RS.
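
    For readers unfamiliar with the internal-consistency statistic used above, the generic sketch below computes Cronbach's alpha from a respondents-by-items score matrix; the toy data are invented.

        import numpy as np

        def cronbach_alpha(scores):
            """scores: 2-D array, rows = respondents, columns = scale items."""
            scores = np.asarray(scores, dtype=float)
            k = scores.shape[1]
            item_variances = scores.var(axis=0, ddof=1).sum()
            total_variance = scores.sum(axis=1).var(ddof=1)
            return k / (k - 1) * (1 - item_variances / total_variance)

        # toy data: 5 respondents answering a 4-item scale
        toy = [[3, 4, 3, 4],
               [2, 2, 3, 2],
               [4, 4, 5, 4],
               [1, 2, 1, 2],
               [5, 4, 5, 5]]
        print(round(cronbach_alpha(toy), 2))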

  7. Managing Fieldwork Data with Toolbox and the Natural Language Toolkit

    Directory of Open Access Journals (Sweden)

    Stuart Robinson

    2007-06-01

    This paper shows how fieldwork data can be managed using the program Toolbox together with the Natural Language Toolkit (NLTK) for the Python programming language. It provides background information about Toolbox and describes how it can be downloaded and installed. The basic functionality of the program for lexicons and texts is described, and its strengths and weaknesses are reviewed. Its underlying data format is briefly discussed, and the Toolbox processing capabilities of NLTK are introduced, showing ways in which it can be used to extend the functionality of Toolbox. This is illustrated with a few simple scripts that demonstrate basic data management tasks relevant to language documentation, such as printing out the contents of a lexicon as HTML.
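
    A short example of the kind of script the paper describes is given below. It assumes NLTK and its bundled Rotokas sample lexicon are installed (for instance via nltk.download('toolbox')); the field names 'lx' (lexeme) and 'ge' (gloss) follow common Toolbox conventions.

        from nltk.corpus import toolbox

        lexicon = toolbox.xml('rotokas.dic')          # parse the Toolbox file into an XML tree
        records = lexicon.findall('record')

        rows = []
        for record in records[:10]:                   # first ten lexical entries
            lexeme = record.findtext('lx') or ''
            gloss = record.findtext('ge') or ''
            rows.append(f"<tr><td>{lexeme}</td><td>{gloss}</td></tr>")

        html = "<table>\n" + "\n".join(rows) + "\n</table>"
        print(html)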

  8. Combining Natural Language Processing and Statistical Text Mining: A Study of Specialized versus Common Languages

    Science.gov (United States)

    Jarman, Jay

    2011-01-01

    This dissertation focuses on developing and evaluating hybrid approaches for analyzing free-form text in the medical domain. This research draws on natural language processing (NLP) techniques that are used to parse and extract concepts based on a controlled vocabulary. Once important concepts are extracted, additional machine learning algorithms,…

  9. Using natural language processing techniques to inform research on nanotechnology

    Directory of Open Access Journals (Sweden)

    Nastassja A. Lewinski

    2015-07-01

    Literature in the field of nanotechnology is exponentially increasing with more and more engineered nanomaterials being created, characterized, and tested for performance and safety. With the deluge of published data, there is a need for natural language processing approaches to semi-automate the cataloguing of engineered nanomaterials and their associated physico-chemical properties, performance, exposure scenarios, and biological effects. In this paper, we review the different informatics methods that have been applied to patent mining, nanomaterial/device characterization, nanomedicine, and environmental risk assessment. Nine natural language processing (NLP)-based tools were identified: NanoPort, NanoMapper, TechPerceptor, a Text Mining Framework, a Nanodevice Analyzer, a Clinical Trial Document Classifier, Nanotoxicity Searcher, NanoSifter, and NEIMiner. We conclude with recommendations for sharing NLP-related tools through online repositories to broaden participation in nanoinformatics.

  10. Using of Natural Language Processing Techniques in Suicide Research

    Directory of Open Access Journals (Sweden)

    Azam Orooji

    2017-09-01

    It is estimated that each year many people, most of whom are teenagers and young adults, die by suicide worldwide. Suicide receives special attention, with many countries developing national strategies for prevention. Since more medical information is available in text, preventing the growing trend of suicide in communities requires analyzing various textual resources, such as patient records, information on the web, or questionnaires. For this purpose, this study systematically reviews recent studies related to the use of natural language processing techniques in the area of the health of people who have completed suicide or are at risk. After electronically searching the PubMed and ScienceDirect databases and studying the articles by two reviewers, 21 articles matched the inclusion criteria. This study revealed that, if a suitable data set is available, natural language processing techniques are well suited for various types of suicide-related research.

  11. Exploiting Lexical Regularities in Designing Natural Language Systems.

    Science.gov (United States)

    1988-04-01

    Exploiting Lexical Regularities in Designing Natural Language Systems. Massachusetts Institute of Technology, Artificial Intelligence Laboratory, 545 Technology Square, Cambridge, MA 02139. This paper presents the lexical component of the START Question Answering system developed at the MIT Artificial Intelligence Laboratory.

  12. Automatic Requirements Specification Extraction from Natural Language (ARSENAL)

    Science.gov (United States)

    2014-10-01

    ... studies: the Time-Triggered Ethernet (TTEthernet) communication platform used in space, and FAA-Isolette infant incubators used in Neonatal Intensive Care Units (NICUs). We systematically evaluated various aspects of ARSENAL ... we present the ARSENAL methodology. ARSENAL uses state-of-the-art advances in natural language processing (NLP) and formal methods (FM) to ...

  13. The sentence verification task: a reliable fMRI protocol for mapping receptive language in individual subjects

    International Nuclear Information System (INIS)

    Sanjuan, Ana; Avila, Cesar; Forn, Cristina; Ventura-Campos, Noelia; Rodriguez-Pujadas, Aina; Garcia-Porcar, Maria; Belloch, Vicente; Villanueva, Vicente

    2010-01-01

    To test the capacity of a sentence verification (SV) task to reliably activate receptive language areas. Presurgical evaluation of language is useful in predicting postsurgical deficits in patients who are candidates for neurosurgery. Productive language tasks have been successfully elaborated, but more conflicting results have been found in receptive language mapping. Twenty-two right-handed healthy controls made true-false semantic judgements of brief sentences presented auditorily. Group maps showed reliable functional activations in the frontal and temporoparietal language areas. At the individual level, the SV task showed activation located in receptive language areas in 100% of the participants with strong left-sided distributions (mean lateralisation index of 69.27). The SV task can be considered a useful tool in evaluating receptive language function in individual subjects. This study is a first step towards designing the fMRI task which may serve to presurgically map receptive language functions. (orig.)
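
    The lateralisation index quoted above is commonly computed from suprathreshold voxel counts in homologous left and right regions, as in this generic sketch (not the authors' pipeline); the voxel counts are invented.

        def lateralisation_index(left_voxels: int, right_voxels: int) -> float:
            """LI = 100 * (L - R) / (L + R); positive values indicate left dominance."""
            return 100.0 * (left_voxels - right_voxels) / (left_voxels + right_voxels)

        print(lateralisation_index(550, 100))   # about 69, comparable to the mean LI above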

  14. The sentence verification task: a reliable fMRI protocol for mapping receptive language in individual subjects

    Energy Technology Data Exchange (ETDEWEB)

    Sanjuan, Ana; Avila, Cesar [Universitat Jaume I, Departamento de Psicologia Basica, Clinica y Psicobiologia, Castellon de la Plana (Spain); Hospital La Fe, Unidad de Epilepsia, Servicio de Neurologia, Valencia (Spain); Forn, Cristina; Ventura-Campos, Noelia; Rodriguez-Pujadas, Aina; Garcia-Porcar, Maria [Universitat Jaume I, Departamento de Psicologia Basica, Clinica y Psicobiologia, Castellon de la Plana (Spain); Belloch, Vicente [Hospital La Fe, Eresa, Servicio de Radiologia, Valencia (Spain); Villanueva, Vicente [Hospital La Fe, Unidad de Epilepsia, Servicio de Neurologia, Valencia (Spain)

    2010-10-15

    To test the capacity of a sentence verification (SV) task to reliably activate receptive language areas. Presurgical evaluation of language is useful in predicting postsurgical deficits in patients who are candidates for neurosurgery. Productive language tasks have been successfully elaborated, but more conflicting results have been found in receptive language mapping. Twenty-two right-handed healthy controls made true-false semantic judgements of brief sentences presented auditorily. Group maps showed reliable functional activations in the frontal and temporoparietal language areas. At the individual level, the SV task showed activation located in receptive language areas in 100% of the participants with strong left-sided distributions (mean lateralisation index of 69.27). The SV task can be considered a useful tool in evaluating receptive language function in individual subjects. This study is a first step towards designing the fMRI task which may serve to presurgically map receptive language functions. (orig.)

  15. Knowledge modelling and reliability processing: presentation of the Figaro language and associated tools

    International Nuclear Information System (INIS)

    Bouissou, M.; Villatte, N.; Bouhadana, H.; Bannelier, M.

    1991-12-01

    EDF has been developing for several years an integrated set of knowledge-based and algorithmic tools for the automation of reliability assessment of complex (especially sequential) systems. In this environment, the reliability expert has at his disposal all the powerful software tools for qualitative and quantitative processing; besides, he has various means to generate the inputs for these tools automatically, through the acquisition of graphical data. The development of these tools has been based on FIGARO, a specific language which was built to obtain homogeneous system modelling. Various compilers and interpreters translate a FIGARO model into conventional models, such as fault-trees, Markov chains, and Petri networks. In this report, we introduce the main basics of the FIGARO language, illustrating them with examples.

  16. Content validation: clarity/relevance, reliability and internal consistency of enunciative signs of language acquisition.

    Science.gov (United States)

    Crestani, Anelise Henrich; Moraes, Anaelena Bragança de; Souza, Ana Paula Ramos de

    2017-08-10

    To analyze the results of the validation of the construction of enunciative signs of language acquisition for children aged 3 to 12 months. The signs were built based on mechanisms of language acquisition in an enunciative perspective and on clinical experience with language disorders. The signs were submitted to judgments of clarity and relevance by a sample of six experts, doctors in linguistics with knowledge of psycholinguistics and the language clinic. In the validation of reliability, two judges/evaluators helped to apply the instruments to videos of 20% of the total sample of mother-infant dyads, using the inter-evaluator method. The method known as internal consistency was applied to the total sample, which consisted of 94 mother-infant dyads for the contents of Phase 1 (3-6 months) and 61 mother-infant dyads for the contents of Phase 2 (7 to 12 months). The data were collected through the analysis of mother-infant interaction, based on filming of the dyads and application of the parameters to be validated according to the child's age. Data were organized in a spreadsheet and then imported into computer applications for statistical analysis. The judgments of clarity/relevance indicated no modifications to be made to the instruments. The reliability test showed an almost perfect agreement between judges (0.8 ≤ Kappa ≤ 1.0); only item 2 of Phase 1 showed substantial agreement (0.6 ≤ Kappa ≤ 0.79). The internal consistency for Phase 1 had alpha = 0.84, and Phase 2, alpha = 0.74. This demonstrates the reliability of the instruments. The results suggest adequacy of content validity of the instruments created for both age groups, demonstrating the relevance of the content of enunciative signs of language acquisition.
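
    The agreement bands quoted above are kappa values; for two raters they can be computed as Cohen's kappa, as in the generic sketch below (the toy ratings are invented).

        from collections import Counter

        def cohen_kappa(rater_a, rater_b):
            """Chance-corrected agreement between two raters over the same items."""
            n = len(rater_a)
            observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
            freq_a, freq_b = Counter(rater_a), Counter(rater_b)
            expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
            return (observed - expected) / (1 - expected)

        a = ["present", "present", "absent", "present", "absent", "present"]
        b = ["present", "absent", "absent", "present", "absent", "present"]
        print(round(cohen_kappa(a, b), 2))   # 0.67 for this toy example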

  17. Reliability of an Automated High-Resolution Manometry Analysis Program across Expert Users, Novice Users, and Speech-Language Pathologists

    Science.gov (United States)

    Jones, Corinne A.; Hoffman, Matthew R.; Geng, Zhixian; Abdelhalim, Suzan M.; Jiang, Jack J.; McCulloch, Timothy M.

    2014-01-01

    Purpose: The purpose of this study was to investigate inter- and intrarater reliability among expert users, novice users, and speech-language pathologists with a semiautomated high-resolution manometry analysis program. We hypothesized that all users would have high intrarater reliability and high interrater reliability. Method: Three expert…

  18. The Impact Analysis of Psychological Reliability of Population Pilot Study For Selection of Particular Reliable Multi-Choice Item Test in Foreign Language Research Work

    Directory of Open Access Journals (Sweden)

    Seyed Hossein Fazeli

    2010-10-01

    The purpose of the research described in the current study is to examine psychological reliability, its importance and application, and to investigate the impact of the psychological reliability of a population pilot study on the selection of a particularly reliable multiple-choice item test in foreign language research work. The population for subject recruitment was all undergraduate students in their second semester at a large university in Iran (both male and female) who study English as a compulsory paper. In Iran, English is taught as a foreign language.

  19. Natural Language Processing Technologies in Radiology Research and Clinical Applications

    Science.gov (United States)

    Cai, Tianrun; Giannopoulos, Andreas A.; Yu, Sheng; Kelil, Tatiana; Ripley, Beth; Kumamaru, Kanako K.; Rybicki, Frank J.

    2016-01-01

    The migration of imaging reports to electronic medical record systems holds great potential in terms of advancing radiology research and practice by leveraging the large volume of data continuously being updated, integrated, and shared. However, there are significant challenges as well, largely due to the heterogeneity of how these data are formatted. Indeed, although there is movement toward structured reporting in radiology (ie, hierarchically itemized reporting with use of standardized terminology), the majority of radiology reports remain unstructured and use free-form language. To effectively “mine” these large datasets for hypothesis testing, a robust strategy for extracting the necessary information is needed. Manual extraction of information is a time-consuming and often unmanageable task. “Intelligent” search engines that instead rely on natural language processing (NLP), a computer-based approach to analyzing free-form text or speech, can be used to automate this data mining task. The overall goal of NLP is to translate natural human language into a structured format (ie, a fixed collection of elements), each with a standardized set of choices for its value, that is easily manipulated by computer programs to (among other things) order into subcategories or query for the presence or absence of a finding. The authors review the fundamentals of NLP and describe various techniques that constitute NLP in radiology, along with some key applications. ©RSNA, 2016 PMID:26761536

  20. Natural Language Processing Technologies in Radiology Research and Clinical Applications.

    Science.gov (United States)

    Cai, Tianrun; Giannopoulos, Andreas A; Yu, Sheng; Kelil, Tatiana; Ripley, Beth; Kumamaru, Kanako K; Rybicki, Frank J; Mitsouras, Dimitrios

    2016-01-01

    The migration of imaging reports to electronic medical record systems holds great potential in terms of advancing radiology research and practice by leveraging the large volume of data continuously being updated, integrated, and shared. However, there are significant challenges as well, largely due to the heterogeneity of how these data are formatted. Indeed, although there is movement toward structured reporting in radiology (ie, hierarchically itemized reporting with use of standardized terminology), the majority of radiology reports remain unstructured and use free-form language. To effectively "mine" these large datasets for hypothesis testing, a robust strategy for extracting the necessary information is needed. Manual extraction of information is a time-consuming and often unmanageable task. "Intelligent" search engines that instead rely on natural language processing (NLP), a computer-based approach to analyzing free-form text or speech, can be used to automate this data mining task. The overall goal of NLP is to translate natural human language into a structured format (ie, a fixed collection of elements), each with a standardized set of choices for its value, that is easily manipulated by computer programs to (among other things) order into subcategories or query for the presence or absence of a finding. The authors review the fundamentals of NLP and describe various techniques that constitute NLP in radiology, along with some key applications. ©RSNA, 2016.

  1. Reliability and validity evidence of the Assessment of Language Use in Social Contexts for Adults (ALUSCA).

    Science.gov (United States)

    Valente, Ana Rita S; Hall, Andreia; Alvelos, Helena; Leahy, Margaret; Jesus, Luis M T

    2018-04-12

    The appropriate use of language in context depends on the speaker's pragmatic language competencies. A coding system was used to develop a specific, adult-focused, self-administered questionnaire for adults who stutter and adults who do not stutter, the Assessment of Language Use in Social Contexts for Adults, with three categories: precursors, basic exchanges, and extended literal/non-literal discourse. This paper presents the content validity, item analysis, reliability coefficients, and evidence of construct validity of the instrument. Content validity analysis was based on a two-stage process: first, 11 pragmatic questionnaires were assessed to identify items that probe each pragmatic competency and to create the first version of the instrument; second, items were assessed qualitatively by an expert panel composed of adults who stutter and controls, and quantitatively and qualitatively by an expert panel composed of clinicians. A pilot study was conducted with five adults who stutter and five controls to analyse the items and calculate reliability. Evidence of construct validity was obtained using the hypothesized-relationships method and factor analysis with 28 adults who stutter and 28 controls. Concerning content validity, the questionnaires assessed up to 13 pragmatic competencies. Qualitative and quantitative analysis revealed ambiguities in item construction. Disagreement between experts was resolved through item modification. The pilot study showed that the instrument presented internal consistency and temporal stability. Significant differences between adults who stutter and controls, and different response profiles, revealed the instrument's underlying construct. The instrument is reliable and presented evidence of construct validity.

  2. Discovery of Kolmogorov Scaling in the Natural Language

    Directory of Open Access Journals (Sweden)

    Maurice H. P. M. van Putten

    2017-05-01

    We consider the rate R and variance σ² of Shannon information in snippets of text based on word frequencies in the natural language. We empirically identify Kolmogorov's scaling law σ² ∝ k^(−1.66 ± 0.12) (95% c.l.) as a function of k = 1/N measured by word count N. This result highlights a potential association of information flow in snippets, analogous to energy cascade in turbulent eddies in fluids at high Reynolds numbers. We propose R and σ² as robust utility functions for objective ranking of concordances in efficient search for maximal information seamlessly across different languages and as a starting point for artificial attention.
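
    A minimal sketch of this kind of measurement, under stated assumptions: word probabilities are estimated from a toy corpus, the Shannon information of fixed-size snippets is summed per snippet, and the variance is regressed against k = 1/N on log-log axes to estimate the scaling exponent. The corpus and snippet sizes are placeholders, not the authors' data or pipeline.

    ```python
    # Sketch: variance of Shannon information in n-word snippets versus k = 1/n,
    # with the power-law exponent estimated by log-log regression (toy corpus).
    import numpy as np
    from collections import Counter

    def snippet_information(words, probs, n):
        """Total Shannon information (bits) of consecutive n-word snippets."""
        info = np.array([-np.log2(probs[w]) for w in words])
        n_snippets = len(words) // n
        return info[: n_snippets * n].reshape(n_snippets, n).sum(axis=1)

    corpus = ("the cat sat on the mat and the dog sat on the rug " * 500).split()
    counts = Counter(corpus)
    total = sum(counts.values())
    probs = {w: c / total for w, c in counts.items()}

    sizes = [5, 10, 20, 40, 80]
    variances = [snippet_information(corpus, probs, n).var(ddof=1) for n in sizes]
    k = 1.0 / np.array(sizes)
    slope, _ = np.polyfit(np.log(k), np.log(variances), 1)   # exponent in sigma^2 ~ k^slope
    print(round(slope, 2))
    ```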

  3. Natural-language processing applied to an ITS interface

    Directory of Open Access Journals (Sweden)

    Antonio Gisolfi

    1994-12-01

    The aim of this paper is to show that with a subset of a natural language, simple systems running on PCs can be developed that can nevertheless be an effective tool for interfacing purposes in the building of an Intelligent Tutoring System (ITS). After presenting the special characteristics of the Smalltalk/V language, which provides an appropriate environment for the development of an interface, the overall architecture of the interface module is discussed. We then show how sentences are parsed by the interface, and how interaction takes place with the user. The knowledge-acquisition phase is subsequently described. Finally, some excerpts from a tutoring session concerned with elementary geometry are discussed, and some of the problems and limitations of the approach are illustrated.

  4. Recent Technological Advances in Natural Language Processing and Artificial Intelligence

    OpenAIRE

    Shah, Nishal Pradeepkumar

    2012-01-01

    Recent advances in computer technology have permitted scientists to implement and test algorithms that have been known for quite some time (or not) but which were computationally expensive. Two such projects are IBM's Jeopardy system, part of its DeepQA project [1], and Wolfram's Wolframalpha [2]. Both of these implement natural language processing (another goal of AI scientists) and try to answer questions as asked by the user. Though the goal of the two projects is similar, both of them have a ...

  5. Deviations in the Zipf and Heaps laws in natural languages

    Science.gov (United States)

    Bochkarev, Vladimir V.; Lerner, Eduard Yu; Shevlyakova, Anna V.

    2014-03-01

    This paper is devoted to verifying the empirical Zipf and Heaps laws in natural languages using Google Books Ngram corpus data. The connection between the Zipf law and the Heaps law, which predicts a power dependence of the vocabulary size on the text size, is discussed. In fact, the Heaps exponent in this dependence varies as the text corpus grows. To explain this, the obtained results are compared with a probability model of text generation. Quasi-periodic variations with characteristic time periods of 60-100 years were also found.
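
    The Heaps-law side of such an analysis can be sketched in a few lines: count distinct words as a function of tokens read and fit the exponent by log-log regression. The Zipf-distributed toy token stream below is a stand-in for a real corpus such as the Google Books Ngram data.

    ```python
    # Sketch: estimate the Heaps exponent beta in V(N) ~ K * N**beta, where V is
    # the number of distinct words seen after reading N tokens of running text.
    import numpy as np

    def heaps_curve(tokens, checkpoints):
        """Vocabulary size observed at each checkpoint (checkpoints must be ascending)."""
        seen, sizes = set(), []
        points = iter(sorted(checkpoints))
        target = next(points)
        for i, tok in enumerate(tokens, 1):
            seen.add(tok)
            if i == target:
                sizes.append(len(seen))
                target = next(points, None)
                if target is None:
                    break
        return sizes

    rng = np.random.default_rng(1)
    # Zipf-distributed toy token stream standing in for a real corpus.
    tokens = [f"w{z}" for z in rng.zipf(1.8, size=200_000)]
    checkpoints = [1_000, 5_000, 20_000, 50_000, 100_000, 200_000]
    vocab = heaps_curve(tokens, checkpoints)
    beta, _ = np.polyfit(np.log(checkpoints), np.log(vocab), 1)
    print(round(beta, 2))   # Heaps exponent estimate
    ```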

  6. Deviations in the Zipf and Heaps laws in natural languages

    International Nuclear Information System (INIS)

    Bochkarev, Vladimir V; Lerner, Eduard Yu; Shevlyakova, Anna V

    2014-01-01

    This paper is devoted to verifying the empirical Zipf and Heaps laws in natural languages using Google Books Ngram corpus data. The connection between the Zipf law and the Heaps law, which predicts a power dependence of the vocabulary size on the text size, is discussed. In fact, the Heaps exponent in this dependence varies as the text corpus grows. To explain this, the obtained results are compared with a probability model of text generation. Quasi-periodic variations with characteristic time periods of 60-100 years were also found.

  7. Box: Natural Language Processing Research Using Amazon Web Services

    Directory of Open Access Journals (Sweden)

    Axelrod Amittai

    2015-10-01

    We present Box, a publicly available, state-of-the-art research and development platform for Machine Translation and Natural Language Processing that runs on the Amazon Elastic Compute Cloud. This provides a standardized research environment for all users and enables perfect reproducibility and compatibility. Box also enables users to use their hardware budget to avoid the management and logistical overhead of maintaining a research lab, yet still participate in the global research community with the same state-of-the-art tools.

  8. Query2Question: Translating Visualization Interaction into Natural Language.

    Science.gov (United States)

    Nafari, Maryam; Weaver, Chris

    2015-06-01

    Richly interactive visualization tools are increasingly popular for data exploration and analysis in a wide variety of domains. Existing systems and techniques for recording provenance of interaction focus either on comprehensive automated recording of low-level interaction events or on idiosyncratic manual transcription of high-level analysis activities. In this paper, we present the architecture and translation design of a query-to-question (Q2Q) system that automatically records user interactions and presents them semantically using natural language (written English). Q2Q takes advantage of domain knowledge and uses natural language generation (NLG) techniques to translate and transcribe a progression of interactive visualization states into a visual log of styled text that complements and effectively extends the functionality of visualization tools. We present Q2Q as a means to support a cross-examination process in which questions rather than interactions are the focus of analytic reasoning and action. We describe the architecture and implementation of the Q2Q system, discuss key design factors and variations that affect question generation, and present several visualizations that incorporate Q2Q for analysis in a variety of knowledge domains.

  9. Suicide Note Classification Using Natural Language Processing: A Content Analysis

    Directory of Open Access Journals (Sweden)

    John Pestian

    2010-08-01

    Suicide is the second leading cause of death among 25–34 year olds and the third leading cause of death among 15–25 year olds in the United States. In the Emergency Department, where suicidal patients often present, estimating the risk of repeated attempts is generally left to clinical judgment. This paper presents our second attempt to determine the role of computational algorithms in understanding a suicidal patient’s thoughts, as represented by suicide notes. We focus on developing methods of natural language processing that distinguish between genuine and elicited suicide notes. We hypothesize that machine learning algorithms can categorize suicide notes as well as mental health professionals and psychiatric physician trainees do. The data used are comprised of suicide notes from 33 suicide completers and matched to 33 elicited notes from healthy control group members. Eleven mental health professionals and 31 psychiatric trainees were asked to decide if a note was genuine or elicited. Their decisions were compared to nine different machine-learning algorithms. The results indicate that trainees accurately classified notes 49% of the time, mental health professionals accurately classified notes 63% of the time, and the best machine learning algorithm accurately classified the notes 78% of the time. This is an important step in developing an evidence-based predictor of repeated suicide attempts because it shows that natural language processing can aid in distinguishing between classes of suicidal notes.

  10. Suicide Note Classification Using Natural Language Processing: A Content Analysis.

    Science.gov (United States)

    Pestian, John; Nasrallah, Henry; Matykiewicz, Pawel; Bennett, Aurora; Leenaars, Antoon

    2010-08-04

    Suicide is the second leading cause of death among 25-34 year olds and the third leading cause of death among 15-25 year olds in the United States. In the Emergency Department, where suicidal patients often present, estimating the risk of repeated attempts is generally left to clinical judgment. This paper presents our second attempt to determine the role of computational algorithms in understanding a suicidal patient's thoughts, as represented by suicide notes. We focus on developing methods of natural language processing that distinguish between genuine and elicited suicide notes. We hypothesize that machine learning algorithms can categorize suicide notes as well as mental health professionals and psychiatric physician trainees do. The data used are comprised of suicide notes from 33 suicide completers and matched to 33 elicited notes from healthy control group members. Eleven mental health professionals and 31 psychiatric trainees were asked to decide if a note was genuine or elicited. Their decisions were compared to nine different machine-learning algorithms. The results indicate that trainees accurately classified notes 49% of the time, mental health professionals accurately classified notes 63% of the time, and the best machine learning algorithm accurately classified the notes 78% of the time. This is an important step in developing an evidence-based predictor of repeated suicide attempts because it shows that natural language processing can aid in distinguishing between classes of suicidal notes.
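
    For readers unfamiliar with this kind of setup, the following sketch shows the general shape of such an experiment with scikit-learn: vectorize short texts, train a classifier, and estimate accuracy by cross-validation. The eight placeholder texts and the logistic-regression model are illustrative assumptions only; the study itself compared nine algorithms on 66 real and elicited notes.

    ```python
    # Sketch of a binary text-classification experiment (placeholder data only).
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline

    texts = [
        "please forgive me for everything",             # stand-ins for genuine notes
        "tell the family that i loved them",
        "i cannot carry on any longer goodbye",
        "nothing has helped and i am tired",
        "this note was written for the study",          # stand-ins for elicited notes
        "i was asked to imagine what i might write",
        "writing this as part of the exercise today",
        "a practice note produced on request",
    ]
    labels = [1, 1, 1, 1, 0, 0, 0, 0]

    clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                        LogisticRegression(max_iter=1000))
    scores = cross_val_score(clf, texts, labels, cv=4)   # accuracy per fold
    print(scores.mean())
    ```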

  11. Natural Language Processing in Radiology: A Systematic Review.

    Science.gov (United States)

    Pons, Ewoud; Braun, Loes M M; Hunink, M G Myriam; Kors, Jan A

    2016-05-01

    Radiological reporting has generated large quantities of digital content within the electronic health record, which is potentially a valuable source of information for improving clinical care and supporting research. Although radiology reports are stored for communication and documentation of diagnostic imaging, harnessing their potential requires efficient and automated information extraction: they exist mainly as free-text clinical narrative, from which it is a major challenge to obtain structured data. Natural language processing (NLP) provides techniques that aid the conversion of text into a structured representation, and thus enables computers to derive meaning from human (ie, natural language) input. Used on radiology reports, NLP techniques enable automatic identification and extraction of information. By exploring the various purposes for their use, this review examines how radiology benefits from NLP. A systematic literature search identified 67 relevant publications describing NLP methods that support practical applications in radiology. This review takes a close look at the individual studies in terms of tasks (ie, the extracted information), the NLP methodology and tools used, and their application purpose and performance results. Additionally, limitations, future challenges, and requirements for advancing NLP in radiology will be discussed. (©) RSNA, 2016 Online supplemental material is available for this article.

  12. Advanced applications of natural language processing for performing information extraction

    CERN Document Server

    Rodrigues, Mário

    2015-01-01

    This book explains how to create information extraction (IE) applications that are able to tap the vast amount of relevant information available in natural language sources: Internet pages, official documents such as laws and regulations, books and newspapers, and the social web. Readers are introduced to the problem of IE and its current challenges and limitations, supported with examples. The book discusses the need to fill the gap between documents, data, and people, and provides a broad overview of the technology supporting IE. The authors present a generic architecture for developing systems that are able to learn how to extract relevant information from natural language documents, and illustrate how to implement working systems using state-of-the-art and freely available software tools. The book also discusses concrete applications illustrating IE uses. · Provides an overview of state-of-the-art technology in information extraction (IE), discussing achievements and limitations for t...

  13. Short- and long-term reliability of language fMRI.

    Science.gov (United States)

    Nettekoven, Charlotte; Reck, Nicola; Goldbrunner, Roland; Grefkes, Christian; Weiß Lucas, Carolin

    2018-08-01

    When using functional magnetic resonance imaging (fMRI) for mapping important language functions, a high test-retest reliability is mandatory, both in basic scientific research and for clinical applications. We therefore systematically tested the short- and long-term reliability of fMRI in a group of healthy subjects using a picture naming task and a sparse-sampling fMRI protocol. We hypothesized that test-retest reliability might be higher for (i) speech-related motor areas than for other language areas and for (ii) the short as compared to the long intersession interval. 16 right-handed subjects (mean age: 29 years) participated in three sessions separated by 2-6 days (sessions 1 and 2, short-term) and 21-34 days (sessions 1 and 3, long-term). Subjects were asked to perform the same overt picture naming task in each fMRI session (50 black-and-white images per session). Reliability was tested using the following measures: (i) Euclidean distances (ED) between local activation maxima and Centers of Gravity (CoGs), (ii) overlap volumes and (iii) voxel-wise intraclass correlation coefficients (ICCs). Analyses were performed for three regions of interest which were chosen based on whole-brain group data: primary motor cortex (M1), superior temporal gyrus (STG) and inferior frontal gyrus (IFG). Our results revealed that the activation centers were highly reliable, independent of the time interval, ROI or hemisphere, with significantly smaller ED for the local activation maxima (6.45 ± 1.36 mm) as compared to the CoGs (8.03 ± 2.01 mm). In contrast, the extent of activation revealed rather low reliability values, with overlaps ranging from 24% (IFG) to 56% (STG). Here, the left hemisphere showed significantly higher overlap volumes than the right hemisphere. Although mean ICCs were comparatively low, voxels with high reliability (ICC > 0.75) were found for all ROIs. Voxel-wise reliability of the different ROIs was influenced by the intersession interval. Taken together, we could show that, despite of

  14. Reliable JavaScript how to code safely in the world's most dangerous language

    CERN Document Server

    Spencer, Lawrence

    2015-01-01

    Create more robust applications with a test-first approach to JavaScript Reliable JavaScript, How to Code Safely in the World's Most Dangerous Language demonstrates how to create test-driven development for large-scale JavaScript applications that will stand the test of time and stay accurate through long-term use and maintenance. Taking a test-first approach to software architecture, this book walks you through several patterns and practices and explains what they are supposed to do by having you write unit tests. Write the code to pass the unit tests, so you not only develop your technique

  15. Modelling language

    CERN Document Server

    Cardey, Sylviane

    2013-01-01

    In response to the need for reliable results from natural language processing, this book presents an original way of decomposing a language(s) in a microscopic manner by means of intra/inter‑language norms and divergences, going progressively from languages as systems to the linguistic, mathematical and computational models, which being based on a constructive approach are inherently traceable. Languages are described with their elements aggregating or repelling each other to form viable interrelated micro‑systems. The abstract model, which contrary to the current state of the art works in int

  16. Reliability

    OpenAIRE

    Condon, David; Revelle, William

    2017-01-01

    Separating the signal in a test from the irrelevant noise is a challenge for all measurement. Low test reliability limits test validity, attenuates important relationships, and can lead to regression artifacts. Multiple approaches to the assessment and improvement of reliability are discussed. The advantages and disadvantages of several different approaches to reliability are considered. Practical advice on how to assess reliability using open source software is provided.

  17. Management of natural gas supply reliability and modulation in France

    International Nuclear Information System (INIS)

    Dupas, D.

    1995-01-01

    France imports most of its gas, and demand for gas varies considerably between summer and winter. Faced with insufficient flexibility in its supply contracts to deal with the gas balance, Gaz de France decided to develop a policy based on the combined and consistent use of a large-scale underground storage system, a suspendable clientele, and a rate policy to maintain the balance. It was the integrated character of the company that made it possible to optimize the arrangement of these adjustment facilities. Most of the seasonal modulation is taken up by underground storage in water tables, and the peak cold complement comes from salt dome storage. Underground storage also contributes, as does the suspendable clientele, to supply reliability, with a specific quality due to their speed and versatility of use. The prime purpose of the suspendable clientele portfolio is to respond to supply failures, but demand during periods of extreme cold is also reduced by curtailing deliveries whose contractual suspension notice time is short. (author). 3 figs

  18. Neurolinguistics and psycholinguistics as a basis for computer acquisition of natural language

    Energy Technology Data Exchange (ETDEWEB)

    Powers, D.M.W.

    1983-04-01

    Research into natural language understanding systems for computers has concentrated on implementing particular grammars and grammatical models of the language concerned. This paper presents a rationale for research into natural language understanding systems based on neurological and psychological principles. Important features of the approach are that it seeks to place the onus of learning the language on the computer, and that it seeks to make use of the vast wealth of relevant psycholinguistic and neurolinguistic theory. 22 references.

  19. What baboons can (not) tell us about natural language grammars.

    Science.gov (United States)

    Poletiek, Fenna H; Fitz, Hartmut; Bocanegra, Bruno R

    2016-06-01

    Rey et al. (2012) present data from a study with baboons that they interpret in support of the idea that center-embedded structures in human language have their origin in low-level memory mechanisms and associative learning. Critically, the authors claim that the baboons showed a behavioral preference that is consistent with center-embedded sequences over other types of sequences. We argue that the baboons' response patterns suggest that two mechanisms are involved: first, they can be trained to associate a particular response with a particular stimulus, and, second, when faced with two conditioned stimuli in a row, they respond to the most recent one first, copying behavior they had been rewarded for during training. Although Rey et al.'s (2012) experiment shows that the baboons' behavior is driven by low-level mechanisms, it is not clear how the animal behavior reported bears on the phenomenon of center-embedded structures in human syntax. Hence, (1) natural language syntax may indeed have been shaped by low-level mechanisms, and (2) the baboons' behavior is driven by low-level stimulus-response learning, as Rey et al. propose. But is the second evidence for the first? We will discuss in what ways this study can and cannot give evidential value for explaining the origin of center-embedded recursion in human grammar. More generally, their study provokes an interesting reflection on the use of animal studies in order to understand features of the human linguistic system. Copyright © 2015 Elsevier B.V. All rights reserved.

  20. Natural language acquisition in large scale neural semantic networks

    Science.gov (United States)

    Ealey, Douglas

    This thesis puts forward the view that a purely signal-based approach to natural language processing is both plausible and desirable. By questioning the veracity of symbolic representations of meaning, it argues for a unified, non-symbolic model of knowledge representation that is both biologically plausible and, potentially, highly efficient. Processes to generate a grounded, neural form of this model, dubbed the semantic filter, are discussed. The combined effects of local neural organisation, coincident with perceptual maturation, are used to hypothesise its nature. This theoretical model is then validated in light of a number of fundamental neurological constraints and milestones. The mechanisms of semantic and episodic development that the model predicts are then used to explain linguistic properties, such as propositions and verbs, syntax and scripting. To mimic the growth of locally densely connected structures upon an unbounded neural substrate, a system is developed that can grow arbitrarily large, data-dependent structures composed of individual self-organising neural networks. The maturational nature of the data used results in a structure in which the perception of concepts is refined by the networks, but demarcated by subsequent structure. As a consequence, the overall structure shows significant memory and computational benefits, as predicted by the cognitive and neural models. Furthermore, the localised nature of the neural architecture also avoids the increasing error sensitivity and redundancy of traditional systems as the training domain grows. The semantic and episodic filters have been demonstrated to perform as well as, or better than, more specialist networks, whilst using significantly larger vocabularies, more complex sentence forms and more natural corpora.

  1. Ulisse Aldrovandi's Color Sensibility: Natural History, Language and the Lay Color Practices of Renaissance Virtuosi.

    Science.gov (United States)

    Pugliano, Valentina

    2015-01-01

    Famed for his collection of drawings of naturalia and his thoughts on the relationship between painting and natural knowledge, it now appears that the Bolognese naturalist Ulisse Aldrovandi (1522-1605) also pondered color and pigments specifically, compiling not only lists and diagrams of color terms but also a full-length unpublished manuscript entitled De coloribus or Trattato dei colori. Introducing these writings for the first time, this article portrays a scholar not so much interested in the materiality of pigment production as in the cultural history of hues. It argues that these writings constituted an effort to build a language of color, in the sense both of a standard nomenclature of hues and of a lexicon, a dictionary of their denotations and connotations as documented in the literature of ancients and moderns. This language would serve the naturalist in his artistic patronage and his natural historical studies, where color was considered one of the most reliable signs for the correct identification of specimens, and a guarantee of accuracy in their illustration. Far from being an exception, Aldrovandi's 'color sensibility' spoke of that of his university-educated nature-loving peers.

  2. Behind the scenes: A medical natural language processing project.

    Science.gov (United States)

    Wu, Joy T; Dernoncourt, Franck; Gehrmann, Sebastian; Tyler, Patrick D; Moseley, Edward T; Carlson, Eric T; Grant, David W; Li, Yeran; Welt, Jonathan; Celi, Leo Anthony

    2018-04-01

    Advancement of Artificial Intelligence (AI) capabilities in medicine can help address many pressing problems in healthcare. However, AI research endeavors in healthcare may not be clinically relevant, may have unrealistic expectations, or may not be explicit enough about their limitations. A diverse and well-functioning multidisciplinary team (MDT) can help identify appropriate and achievable AI research agendas in healthcare, and advance medical AI technologies by developing AI algorithms as well as addressing the shortage of appropriately labeled datasets for machine learning. In this paper, our team of engineers, clinicians and machine learning experts share their experience and lessons learned from their two-year-long collaboration on a natural language processing (NLP) research project. We highlight specific challenges encountered in cross-disciplinary teamwork, dataset creation for NLP research, and expectation setting for current medical AI technologies. Copyright © 2017. Published by Elsevier B.V.

  3. Creation of structured documentation templates using Natural Language Processing techniques.

    Science.gov (United States)

    Kashyap, Vipul; Turchin, Alexander; Morin, Laura; Chang, Frank; Li, Qi; Hongsermeier, Tonya

    2006-01-01

    Structured Clinical Documentation is a fundamental component of the healthcare enterprise, linking both clinical (e.g., electronic health record, clinical decision support) and administrative functions (e.g., evaluation and management coding, billing). One of the challenges in creating good quality documentation templates has been the inability to address specialized clinical disciplines and adapt to local clinical practices. A one-size-fits-all approach leads to poor adoption and inefficiencies in the documentation process. On the other hand, the cost associated with manual generation of documentation templates is significant. Consequently there is a need for at least partial automation of the template generation process. We propose an approach and methodology for the creation of structured documentation templates for diabetes using Natural Language Processing (NLP).

  4. Building gold standard corpora for medical natural language processing tasks.

    Science.gov (United States)

    Deleger, Louise; Li, Qi; Lingren, Todd; Kaiser, Megan; Molnar, Katalin; Stoutenborough, Laura; Kouril, Michal; Marsolo, Keith; Solti, Imre

    2012-01-01

    We present the construction of three annotated corpora to serve as gold standards for medical natural language processing (NLP) tasks. Clinical notes from the medical record, clinical trial announcements, and FDA drug labels are annotated. We report high inter-annotator agreement (overall F-measures between 0.8467 and 0.9176) for the annotation of Personal Health Information (PHI) elements for a de-identification task and of medications, diseases/disorders, and signs/symptoms for an information extraction (IE) task. The annotated corpora of clinical trials and FDA labels will be publicly released; to facilitate translational NLP tasks that require cross-corpora interoperability (e.g., clinical trial eligibility screening), their annotation schemas are aligned with a large-scale, NIH-funded clinical text annotation project.

  5. Pattern Recognition and Natural Language Processing: State of the Art

    Directory of Open Access Journals (Sweden)

    Mirjana Kocaleva

    2016-05-01

    The development of information technologies is growing steadily. With the latest software technologies and the application of methods of artificial intelligence and machine learning embedded in computers, the expectation is that in the near future computers will be able to solve problems themselves, as people do. Artificial intelligence emulates human behavior on computers. Rather than executing instructions one by one, as they are programmed, machine learning employs prior experience/data that is used in the process of the system's training. In this state-of-the-art paper, common methods in AI, such as machine learning, pattern recognition and natural language processing (NLP), are discussed. The standard architecture of an NLP processing system and the levels needed for understanding NLP are also given. Lastly, statistical NLP processing and multi-word expressions are described.

  6. Constructing Concept Schemes From Astronomical Telegrams Via Natural Language Clustering

    Science.gov (United States)

    Graham, Matthew; Zhang, M.; Djorgovski, S. G.; Donalek, C.; Drake, A. J.; Mahabal, A.

    2012-01-01

    The rapidly emerging field of time domain astronomy is one of the most exciting and vibrant new research frontiers, ranging in scientific scope from studies of the Solar System to extreme relativistic astrophysics and cosmology. It is being enabled by a new generation of large synoptic digital sky surveys - LSST, PanStarrs, CRTS - that cover large areas of sky repeatedly, looking for transient objects and phenomena. One of the biggest challenges facing these is the automated classification of transient events, a process that needs machine-processible astronomical knowledge. Semantic technologies enable the formal representation of concepts and relations within a particular domain. ATELs (http://www.astronomerstelegram.org) are a commonly-used means for reporting and commenting upon new astronomical observations of transient sources (supernovae, stellar outbursts, blazar flares, etc). However, they are loose and unstructured and employ scientific natural language for description: this makes automated processing of them - a necessity within the next decade with petascale data rates - a challenge. Nevertheless they represent a potentially rich corpus of information that could lead to new and valuable insights into transient phenomena. This project lies in the cutting-edge field of astrosemantics, a branch of astroinformatics, which applies semantic technologies to astronomy. The ATELs have been used to develop an appropriate concept scheme - a representation of the information they contain - for transient astronomy using hierarchical clustering of processed natural language. This allows us to automatically organize ATELs based on the vocabulary used. We conclude that we can use simple algorithms to process and extract meaning from astronomical textual data.
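
    A minimal sketch of the clustering step described above, assuming TF-IDF features and Ward agglomerative clustering; the six mini-telegrams are invented stand-ins for real ATels, not the project's corpus or code.

    ```python
    # Sketch: organize short astronomical reports by vocabulary using TF-IDF
    # features and hierarchical (agglomerative) clustering.
    from scipy.cluster.hierarchy import fcluster, linkage
    from sklearn.feature_extraction.text import TfidfVectorizer

    telegrams = [
        "optical brightening of a blazar detected in nightly monitoring",
        "gamma-ray flare from the blazar confirmed by follow-up observations",
        "discovery of a type ia supernova in a nearby spiral galaxy",
        "spectroscopic classification of the new supernova candidate",
        "dwarf nova outburst observed in archival survey photometry",
        "cataclysmic variable outburst reported from survey imaging",
    ]

    X = TfidfVectorizer(stop_words="english").fit_transform(telegrams).toarray()
    Z = linkage(X, method="ward")                  # hierarchical merge tree
    clusters = fcluster(Z, t=3, criterion="maxclust")
    for label, text in sorted(zip(clusters, telegrams)):
        print(label, text)
    ```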

  7. Emerging Approach of Natural Language Processing in Opinion Mining: A Review

    Science.gov (United States)

    Kim, Tai-Hoon

    Natural language processing (NLP) is a subfield of artificial intelligence and computational linguistics. It studies the problems of automated generation and understanding of natural human languages. This paper outlines a framework to use computer and natural language techniques for various levels of learners to learn foreign languages in Computer-based Learning environment. We propose some ideas for using the computer as a practical tool for learning foreign language where the most of courseware is generated automatically. We then describe how to build Computer Based Learning tools, discuss its effectiveness, and conclude with some possibilities using on-line resources.

  8. The Reliability and Validity of Prostate Cancer Fatalism Inventory in Turkish Language.

    Science.gov (United States)

    Aydoğdu, Nihal Gördes; Çapık, Cantürk; Ersin, Fatma; Kissal, Aygul; Bahar, Zuhal

    2017-10-01

    This study aimed to establish the reliability and validity of the Prostate Cancer Fatalism Inventory in the Turkish language. The study was of methodological type and consisted of 171 men. The ages of the participants ranged between 40 and 82. The content validity index was determined to be 0.80, the Kaiser-Meyer-Olkin value 0.825, and Bartlett's test χ² = 750.779, p = 0.000. Principal component analysis was then applied to the 15-item inventory. The inventory consisted of one dimension, and the factor loadings were over 0.30 for all items. The explained variance of the inventory was found to be 33.3%. The Kuder-Richardson-20 coefficient was determined to be 0.849 and the item-total correlations ranged between 0.335 and 0.627. The Prostate Cancer Fatalism Inventory is a reliable and valid measurement tool in the Turkish language. Integrating psychological strategies into prostate cancer screening may be required to strengthen the positive effects of nursing education.
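
    The Kuder-Richardson-20 coefficient quoted above has a simple closed form; the sketch below computes it for a toy 0/1 response matrix (random placeholder data, not the study's responses).

    ```python
    # Worked sketch of Kuder-Richardson 20 for dichotomously scored (0/1) items.
    import numpy as np

    def kr20(responses):
        """responses: (n_respondents, n_items) array of 0/1 item scores."""
        r = np.asarray(responses, dtype=float)
        k = r.shape[1]
        p = r.mean(axis=0)                      # proportion endorsing each item
        q = 1.0 - p
        total_var = r.sum(axis=1).var(ddof=1)   # variance of total scores
        return (k / (k - 1.0)) * (1.0 - (p * q).sum() / total_var)

    rng = np.random.default_rng(7)
    toy = (rng.random((30, 15)) < 0.4).astype(int)   # 30 respondents, 15 items
    print(round(kr20(toy), 2))                       # real data with correlated items scores higher
    ```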

  9. Automatic Item Generation via Frame Semantics: Natural Language Generation of Math Word Problems.

    Science.gov (United States)

    Deane, Paul; Sheehan, Kathleen

    This paper is an exploration of the conceptual issues that have arisen in the course of building a natural language generation (NLG) system for automatic test item generation. While natural language processing techniques are applicable to general verbal items, mathematics word problems are particularly tractable targets for natural language…

  10. The reliability, validity, and applicability of an English language version of the Mini-ICF-APP.

    Science.gov (United States)

    Molodynski, Andrew; Linden, Michael; Juckel, George; Yeeles, Ksenija; Anderson, Catriona; Vazquez-Montes, Maria; Burns, Tom

    2013-08-01

    This study aimed to establish the validity and reliability of an English language version of the Mini-ICF-APP. One hundred and five patients under the care of secondary mental health care services were assessed using the Mini-ICF-APP and several well-established measures of functioning and symptom severity. 47 (45%) patients were interviewed on two occasions to ascertain test-retest reliability and 50 (48%) were interviewed by two researchers simultaneously to determine the instrument's inter-rater reliability. Occupational and sick-leave status were also recorded to assess construct validity. The Mini-ICF-APP was found to have substantial internal consistency (Cronbach's α 0.869-0.912) and all 13 items correlated highly with the total score. Analysis also showed that the Mini-ICF-APP had good test-retest (ICC 0.832) and inter-rater (ICC 0.886) reliability. No statistically significant association with length of sick leave was found, but the unemployed scored higher on the Mini-ICF-APP than those in employment (mean 18.4, SD 9.1 vs. 9.4, SD 6.4). The Mini-ICF-APP also correlated highly with the other measures of illness severity and functioning considered in the study. The English version of the Mini-ICF-APP is a reliable and valid measure of disorders of capacity as defined by the International Classification of Functioning. Further work is necessary to establish whether the scale could be divided into subscales, which would allow the instrument to measure an individual's specific impairments more sensitively.

  11. A natural language screening measure for motivation to change.

    Science.gov (United States)

    Miller, William R; Johnson, Wendy R

    2008-09-01

    Client motivation for change, a topic of high interest to addiction clinicians, is multidimensional and complex, and many different approaches to measurement have been tried. The current effort drew on psycholinguistic research on natural language that is used by clients to describe their own motivation. Seven addiction treatment sites participated in the development of a simple scale to measure client motivation. Twelve items were drafted to represent six potential dimensions of motivation for change that occur in natural discourse. The maximum self-rating of motivation (10 on a 0-10 scale) was the median score on all items, and 43% of respondents rated 10 on all 12 items - a substantial ceiling effect. From 1035 responses, three factors emerged representing importance, ability, and commitment - constructs that are also reflected in several theoretical models of motivation. A 3-item version of the scale, with one marker item for each of these constructs, accounted for 81% of variance in the full scale. The three items are: 1. It is important for me to . . . 2. I could . . . and 3. I am trying to . . . This offers a quick (1-minute) assessment of clients' self-reported motivation for change.

  12. Adaptation, validity, and reliability of the Preschool Language Scale-Fifth Edition (PLS-5) in the Turkish context: The Turkish Preschool Language Scale-5 (TPLS-5).

    Science.gov (United States)

    Sahli, A Sanem; Belgin, Erol

    2017-07-01

    Speech and language assessment is very important in the early diagnosis of children with hearing and speech disorders. The aim of this study is to determine the validity and reliability of the Preschool Language Scale (5th edition) test in its Turkish translation and adaptation. Our study was conducted on 1320 children aged between 0 and 7 years 11 months; 1044 of these children had normal hearing, language and speech development, while 276 of them had a receptive and/or expressive language disorder. After English-Turkish and Turkish-English translations of the PLS-5 were made by two experts with command of both languages, some of the test items were reorganized because of the grammatical features of Turkish and the cultural structure of the country. The pilot study was conducted with 378 children. The test, reorganized in light of the data obtained in the pilot application, was applied to children chosen randomly from different regions of Turkey using a stratified sampling technique; 15 days later the test was applied again to 120 children. Of the 120 children in the second evaluation, 98 of 103 healthy children had normal language development, and 8 of 9 who previously showed a language development disorder still did (Kappa coefficient: 0.468); the equivalence values were found to be IA: 0.871, IED: 0.896, TDP: 0.887. The TPLS-5 is the first and only language test in our country that can evaluate the receptive and/or expressive language skills of children aged between 0 and 7 years 11 months. The results of the study show that the TPLS-5 is a valid and reliable language test for Turkish children. Copyright © 2017. Published by Elsevier B.V.

  13. "Speaking English Naturally": The Language Ideologies of English as an Official Language at a Korean University

    Science.gov (United States)

    Choi, Jinsook

    2016-01-01

    This study explores language ideologies of English at a Korean university where English has been adopted as an official language. This study draws on ethnographic data in order to understand how speakers respond to and experience the institutional language policy. The findings show that language ideologies in this university represent the…

  14. A Classification of Sentences Used in Natural Language Processing in the Military Services.

    Science.gov (United States)

    Wittrock, Merlin C.

    Concepts in cognitive psychology are applied to the language used in military situations, and a sentence classification system for use in analyzing military language is outlined. The system is designed to be used, in part, in conjunction with a natural language query system that allows a user to access a database. The discussion of military…

  15. Crowdsourcing and curation: perspectives from biology and natural language processing.

    Science.gov (United States)

    Hirschman, Lynette; Fort, Karën; Boué, Stéphanie; Kyrpides, Nikos; Islamaj Doğan, Rezarta; Cohen, Kevin Bretonnel

    2016-01-01

    Crowdsourcing is increasingly utilized for performing tasks in both natural language processing and biocuration. Although there have been many applications of crowdsourcing in these fields, there have been fewer high-level discussions of the methodology and its applicability to biocuration. This paper explores crowdsourcing for biocuration through several case studies that highlight different ways of leveraging 'the crowd'; these raise issues about the kind(s) of expertise needed, the motivations of participants, and questions related to feasibility, cost and quality. The paper is an outgrowth of a panel session held at BioCreative V (Seville, September 9-11, 2015). The session consisted of four short talks, followed by a discussion. In their talks, the panelists explored the role of expertise and the potential to improve crowd performance by training; the challenge of decomposing tasks to make them amenable to crowdsourcing; and the capture of biological data and metadata through community editing.Database URL: http://www.mitre.org/publications/technical-papers/crowdsourcing-and-curation-perspectives. © The Author(s) 2016. Published by Oxford University Press.

  16. Arabic text preprocessing for the natural language processing applications

    International Nuclear Information System (INIS)

    Awajan, A.

    2007-01-01

    A new approach for processing vowelized and unvowelized Arabic texts in order to prepare them for Natural Language Processing (NLP) purposes is described. The developed approach is rule-based and made up of four phases: text tokenization, word light stemming, word morphological analysis and text annotation. The first phase preprocesses the input text in order to isolate the words and represent them in a formal way. The second phase applies a light stemmer in order to extract the stem of each word by eliminating the prefixes and suffixes. The third phase is a rule-based morphological analyzer that determines the root and the morphological pattern for each extracted stem. The last phase produces an annotated text where each word is tagged with its morphological attributes. The preprocessor presented in this paper is capable of dealing with vowelized and unvowelized words, and provides the input words along with the relevant linguistic information needed by different applications. It is designed to be used with different NLP applications such as machine translation, text summarization, text correction, information retrieval and automatic vowelization of Arabic text. (author)
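
    A toy sketch of the first two phases (tokenization and light stemming) is given below; the affix lists and example sentence are a small illustrative subset chosen for the example, not the authors' rule set or analyzer.

    ```python
    # Toy tokenize -> light-stem pipeline for Arabic text (illustrative affix lists only).
    import re

    PREFIXES = ["وال", "بال", "كال", "فال", "ال", "و"]   # definite article and attached particles
    SUFFIXES = ["ها", "ات", "ون", "ين", "ة"]             # common feminine/plural/pronoun endings

    def normalize(text):
        """Strip short-vowel diacritics so vowelized and unvowelized text align."""
        return re.sub(r"[\u064B-\u0652]", "", text)

    def tokenize(text):
        """Keep maximal runs of Arabic letters as tokens."""
        return re.findall(r"[\u0621-\u064A]+", text)

    def light_stem(word):
        """Strip at most one known prefix and one known suffix, keeping a stem of >= 3 letters."""
        for p in PREFIXES:
            if word.startswith(p) and len(word) - len(p) >= 3:
                word = word[len(p):]
                break
        for s in SUFFIXES:
            if word.endswith(s) and len(word) - len(s) >= 3:
                word = word[:-len(s)]
                break
        return word

    sentence = "ذهب الطلاب إلى المكتبات"   # "the students went to the libraries"
    print([(w, light_stem(w)) for w in tokenize(normalize(sentence))])
    ```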

  17. Intelligent Performance Analysis with a Natural Language Interface

    Science.gov (United States)

    Juuso, Esko K.

    2017-09-01

    Performance improvement is taken as the primary goal in asset management. Advanced data analysis is needed to efficiently integrate condition monitoring data into operation and maintenance. Intelligent stress and condition indices have been developed for control and condition monitoring by combining generalized norms with efficient nonlinear scaling. These nonlinear scaling methodologies can also be used to handle performance measures used for management, since management-oriented indicators can be presented on the same scale as intelligent condition and stress indices. Performance indicators are responses of the process, machine or system to the stress contributions analyzed from process and condition monitoring data. Scaled values are used directly in intelligent temporal analysis to calculate fluctuations and trends. All these methodologies can be used in prognostics and fatigue prediction. The meanings of the variables are beneficial in extracting expert knowledge and representing information in natural language. The idea of dividing the problems into the variable-specific meanings and the directions of interactions provides various improvements for performance monitoring and decision making. The integrated temporal analysis and uncertainty processing facilitates the efficient use of domain expertise. Measurements can be monitored with generalized statistical process control (GSPC) based on the same scaling functions.

  18. A common type system for clinical natural language processing

    Directory of Open Access Journals (Sweden)

    Wu Stephen T

    2013-01-01

    Background: One challenge in reusing clinical data stored in electronic medical records is that these data are heterogenous. Clinical Natural Language Processing (NLP) plays an important role in transforming information in clinical text to a standard representation that is comparable and interoperable. Information may be processed and shared when a type system specifies the allowable data structures. Therefore, we aim to define a common type system for clinical NLP that enables interoperability between structured and unstructured data generated in different clinical settings. Results: We describe a common type system for clinical NLP that has an end target of deep semantics based on Clinical Element Models (CEMs), thus interoperating with structured data and accommodating diverse NLP approaches. The type system has been implemented in UIMA (Unstructured Information Management Architecture) and is fully functional in a popular open-source clinical NLP system, cTAKES (clinical Text Analysis and Knowledge Extraction System) versions 2.0 and later. Conclusions: We have created a type system that targets deep semantics, thereby allowing NLP systems to encapsulate knowledge from text and share it alongside heterogenous clinical data sources. Rather than the surface semantics that are typically the end product of NLP algorithms, CEM-based semantics explicitly build in deep clinical semantics as the point of interoperability with more structured data types.

  19. A common type system for clinical natural language processing.

    Science.gov (United States)

    Wu, Stephen T; Kaggal, Vinod C; Dligach, Dmitriy; Masanz, James J; Chen, Pei; Becker, Lee; Chapman, Wendy W; Savova, Guergana K; Liu, Hongfang; Chute, Christopher G

    2013-01-03

    One challenge in reusing clinical data stored in electronic medical records is that these data are heterogenous. Clinical Natural Language Processing (NLP) plays an important role in transforming information in clinical text to a standard representation that is comparable and interoperable. Information may be processed and shared when a type system specifies the allowable data structures. Therefore, we aim to define a common type system for clinical NLP that enables interoperability between structured and unstructured data generated in different clinical settings. We describe a common type system for clinical NLP that has an end target of deep semantics based on Clinical Element Models (CEMs), thus interoperating with structured data and accommodating diverse NLP approaches. The type system has been implemented in UIMA (Unstructured Information Management Architecture) and is fully functional in a popular open-source clinical NLP system, cTAKES (clinical Text Analysis and Knowledge Extraction System) versions 2.0 and later. We have created a type system that targets deep semantics, thereby allowing for NLP systems to encapsulate knowledge from text and share it alongside heterogenous clinical data sources. Rather than surface semantics that are typically the end product of NLP algorithms, CEM-based semantics explicitly build in deep clinical semantics as the point of interoperability with more structured data types.

  20. Comparison of the reliability of parental reporting and the direct test of the Thai Speech and Language Test.

    Science.gov (United States)

    Prathanee, Benjamas; Angsupakorn, Nipa; Pumnum, Tawitree; Seepuaham, Cholada; Jaiyong, Pechcharat

    2012-11-01

    To find the reliability of parental or caregiver report and direct testing with the Thai Speech and Language Test for Children Aged 0-4 Years Old. Five investigators assessed speech and language abilities from video in both contexts: the parental or caregivers' report form and the test form of the Thai Speech and Language Test for Children Aged 0-4 Years Old. Twenty-five normal children and 30 children with delayed development or at risk for delayed speech and language skills were assessed at age intervals of 3, 6, 9, 12, 15, 18, 24, 30, 36 and 48 months. Reliability between parental or caregivers' reporting and testing was at a moderate level (0.41-0.60). Inter-rater reliability among investigators was excellent (0.86-1.00). The parental or caregivers' report form of the Thai Speech and Language Test for Children Aged 0-4 Years Old was an indicator of children's abilities at a moderate level of reliability. Trained professionals could use both forms of this test as reliable tools at an excellent level.

  1. Template-based generation of natural language expressions with Controlled M-Grammar

    NARCIS (Netherlands)

    Appelo, Lisette; Leermakers, M.C.J.; Rous, J.H.G.

    1993-01-01

    A method is described for the generation of related natural-language expressions. The method is based on a formal grammar of the natural language in question, specified in the Controlled M-Grammar (CMG) formalism. In the CMG framework the generation of an utterance is controlled by a derivation

  2. The Ostomy Adjustment Scale: translation into Norwegian language with validation and reliability testing.

    Science.gov (United States)

    Indrebø, Kirsten Lerum; Andersen, John Roger; Natvig, Gerd Karin

    2014-01-01

    The purpose of this study was to adapt the Ostomy Adjustment Scale to a Norwegian version and to assess its construct validity and 2 components of its reliability (internal consistency and test-retest reliability). One hundred fifty-eight of 217 patients (73%) with a colostomy, ileostomy, or urostomy participated in the study. Slightly more than half (56%) were men. Their mean age was 64 years (range, 26-91 years). All respondents had undergone ostomy surgery at least 3 months before participation in the study. The Ostomy Adjustment Scale was translated into Norwegian according to standard procedures for forward and backward translation. The questionnaire was sent to the participants via regular post. The Cronbach alpha and test-retest were computed to assess reliability. Construct validity was evaluated via correlations between each item and score sums; correlations were used to analyze relationships between the Ostomy Adjustment Scale and the 36-item Short Form Health Survey, the Quality of Life Scale, the Hospital Anxiety & Depression Scale, and the General Self-Efficacy Scale. The Cronbach alpha was 0.93, and test-retest reliability r was 0.69. The average correlation quotient item to sum score was 0.49 (range, 0.31-0.73). Results showed moderate negative correlations between the Ostomy Adjustment Scale and the Hospital Anxiety and Depression Scale (-0.37 and -0.40), and moderate positive correlations between the Ostomy Adjustment Scale and the 36-item Short Form Health Survey, the Quality of Life Scale, and the General Self-Efficacy Scale (0.30-0.45) with the exception of the pain domain in the Short Form 36 (0.28). Regression analysis showed linear associations between the Ostomy Adjustment Scale and sociodemographic and clinical variables with the exception of education. The Norwegian language version of the Ostomy Adjustment Scale was found to possess construct validity, along with internal consistency and test-retest reliability. The instrument is

  3. Adult language learning after minimal exposure to an unknown natural language

    NARCIS (Netherlands)

    Gullberg, M.; Robert, L.; Dimroth, C.; Veroude, K.; Indefrey, P.

    2010-01-01

    Despite the literature on the role of input in adult second-language (L2) acquisition and on artificial and statistical language learning, surprisingly little is known about how adults break into a new language in the wild. This article reports on a series of behavioral and neuroimaging studies that

  4. A grammar-based semantic similarity algorithm for natural language sentences.

    Science.gov (United States)

    Lee, Ming Che; Chang, Jia Wei; Hsieh, Tung Cheng

    2014-01-01

    This paper presents a grammar and semantic corpus based similarity algorithm for natural language sentences. Natural language, in opposition to "artificial language", such as computer programming languages, is the language used by the general public for daily communication. Traditional information retrieval approaches, such as vector models, LSA, HAL, or even the ontology-based approaches that extend to include concept similarity comparison instead of cooccurrence terms/words, may not always determine the perfect matching while there is no obvious relation or concept overlap between two natural language sentences. This paper proposes a sentence similarity algorithm that takes advantage of corpus-based ontology and grammatical rules to overcome the addressed problems. Experiments on two famous benchmarks demonstrate that the proposed algorithm has a significant performance improvement in sentences/short-texts with arbitrary syntax and structure.
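
    The abstract does not reproduce the algorithm itself. As a point of reference, the sketch below is a deliberately simple baseline that mixes lexical overlap with word-order agreement, the kind of shallow measure that grammar- and corpus-based approaches are meant to improve on. The weighting and example sentences are illustrative assumptions, not the authors' method.

```python
import math

def sentence_similarity(s1, s2, alpha=0.8):
    """Toy baseline: weighted mix of lexical overlap and word-order agreement."""
    w1, w2 = s1.lower().split(), s2.lower().split()
    vocab = sorted(set(w1) | set(w2))
    overlap = len(set(w1) & set(w2)) / len(set(w1) | set(w2))  # Jaccard overlap

    def order_vector(words):
        # 1-based position of each vocabulary item's first occurrence, 0 if absent.
        return [words.index(v) + 1 if v in words else 0 for v in vocab]

    r1, r2 = order_vector(w1), order_vector(w2)
    diff = math.sqrt(sum((a - b) ** 2 for a, b in zip(r1, r2)))
    norm = math.sqrt(sum((a + b) ** 2 for a, b in zip(r1, r2)))
    order_sim = 1 - diff / norm if norm else 1.0
    return alpha * overlap + (1 - alpha) * order_sim

print(sentence_similarity("a dog chased the cat", "the cat chased a dog"))
```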

  5. Natural language processing in an intelligent writing strategy tutoring system.

    Science.gov (United States)

    McNamara, Danielle S; Crossley, Scott A; Roscoe, Rod

    2013-06-01

    The Writing Pal is an intelligent tutoring system that provides writing strategy training. A large part of its artificial intelligence resides in the natural language processing algorithms to assess essay quality and guide feedback to students. Because writing is often highly nuanced and subjective, the development of these algorithms must consider a broad array of linguistic, rhetorical, and contextual features. This study assesses the potential for computational indices to predict human ratings of essay quality. Past studies have demonstrated that linguistic indices related to lexical diversity, word frequency, and syntactic complexity are significant predictors of human judgments of essay quality but that indices of cohesion are not. The present study extends prior work by including a larger data sample and an expanded set of indices to assess new lexical, syntactic, cohesion, rhetorical, and reading ease indices. Three models were assessed. The model reported by McNamara, Crossley, and McCarthy (Written Communication 27:57-86, 2010) including three indices of lexical diversity, word frequency, and syntactic complexity accounted for only 6% of the variance in the larger data set. A regression model including the full set of indices examined in prior studies of writing predicted 38% of the variance in human scores of essay quality with 91% adjacent accuracy (i.e., within 1 point). A regression model that also included new indices related to rhetoric and cohesion predicted 44% of the variance with 94% adjacent accuracy. The new indices increased accuracy but, more importantly, afford the means to provide more meaningful feedback in the context of a writing tutoring system.
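
    As a sketch of the general approach described above (computational indices regressed onto human ratings), the snippet below computes three crude proxies and fits an ordinary least-squares model. The indices, essays, and ratings are invented placeholders; the Writing Pal itself relies on much richer lexical, syntactic, cohesion, and rhetorical indices.

```python
import numpy as np

def essay_indices(text):
    """Toy proxies for lexical diversity, word frequency, and syntactic complexity."""
    words = text.lower().replace(".", "").replace(",", "").split()
    sentences = [s for s in text.split(".") if s.strip()]
    return [len(set(words)) / len(words),              # type-token ratio
            sum(len(w) for w in words) / len(words),   # mean word length (frequency proxy)
            len(words) / len(sentences)]               # mean sentence length (complexity proxy)

# Invented essays and human ratings (1-6 scale).
essays = [
    "Dogs run. Dogs run fast. Dogs are fast.",
    "The committee deliberated carefully before announcing its final, unanimous decision.",
    "I like school. School is fun. Fun is good. Good is nice.",
    "Although the experiment failed initially, the researchers persisted and eventually succeeded.",
    "Cats sleep. Cats eat. Cats play. Cats purr.",
]
ratings = np.array([2.0, 4.5, 2.5, 5.0, 1.5])

X = np.column_stack([np.ones(len(essays)), [essay_indices(e) for e in essays]])
weights, *_ = np.linalg.lstsq(X, ratings, rcond=None)
print(np.round(X @ weights, 2))  # model's fitted quality scores
```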

  6. Automation of a problem list using natural language processing

    Directory of Open Access Journals (Sweden)

    Haug Peter J

    2005-08-01

    Full Text Available Abstract Background The medical problem list is an important part of the electronic medical record in development in our institution. To serve the functions it is designed for, the problem list has to be as accurate and timely as possible. However, the current problem list is usually incomplete and inaccurate, and is often totally unused. To alleviate this issue, we are building an environment where the problem list can be easily and effectively maintained. Methods For this project, 80 medical problems were selected for their frequency of use in our future clinical field of evaluation (cardiovascular. We have developed an Automated Problem List system composed of two main components: a background and a foreground application. The background application uses Natural Language Processing (NLP to harvest potential problem list entries from the list of 80 targeted problems detected in the multiple free-text electronic documents available in our electronic medical record. These proposed medical problems drive the foreground application designed for management of the problem list. Within this application, the extracted problems are proposed to the physicians for addition to the official problem list. Results The set of 80 targeted medical problems selected for this project covered about 5% of all possible diagnoses coded in ICD-9-CM in our study population (cardiovascular adult inpatients, but about 64% of all instances of these coded diagnoses. The system contains algorithms to detect first document sections, then sentences within these sections, and finally potential problems within the sentences. The initial evaluation of the section and sentence detection algorithms demonstrated a sensitivity and positive predictive value of 100% when detecting sections, and a sensitivity of 89% and a positive predictive value of 94% when detecting sentences. Conclusion The global aim of our project is to automate the process of creating and maintaining a problem
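
    The evaluation metrics quoted above follow the usual definitions; the sketch below shows how they are derived from true-positive, false-positive, and false-negative counts. The counts are assumptions chosen to roughly reproduce the reported 89% sensitivity and 94% positive predictive value for sentence detection, not figures taken from the paper.

```python
def sensitivity_and_ppv(true_positives, false_positives, false_negatives):
    """Sensitivity = TP / (TP + FN); positive predictive value = TP / (TP + FP)."""
    sensitivity = true_positives / (true_positives + false_negatives)
    ppv = true_positives / (true_positives + false_positives)
    return sensitivity, ppv

# Assumed counts for a sentence-detection evaluation.
sens, ppv = sensitivity_and_ppv(true_positives=89, false_positives=6, false_negatives=11)
print(f"sensitivity={sens:.2f}, PPV={ppv:.2f}")
```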

  7. Evaluation of PHI Hunter in Natural Language Processing Research.

    Science.gov (United States)

    Redd, Andrew; Pickard, Steve; Meystre, Stephane; Scehnet, Jeffrey; Bolton, Dan; Heavirland, Julia; Weaver, Allison Lynn; Hope, Carol; Garvin, Jennifer Hornung

    2015-01-01

    We introduce and evaluate a new, easily accessible tool using a common statistical analysis and business analytics software suite, SAS, which can be programmed to remove specific protected health information (PHI) from a text document. Removal of PHI is important because the quantity of text documents used for research with natural language processing (NLP) is increasing. When using existing data for research, an investigator must remove all PHI not needed for the research to comply with human subjects' right to privacy. This process is similar, but not identical, to de-identification of a given set of documents. PHI Hunter removes PHI from free-form text. It is a set of rules to identify and remove patterns in text. PHI Hunter was applied to 473 Department of Veterans Affairs (VA) text documents randomly drawn from a research corpus stored as unstructured text in VA files. PHI Hunter performed well with PHI in the form of identification numbers such as Social Security numbers, phone numbers, and medical record numbers. The most commonly missed PHI items were names and locations. Incorrect removal of information occurred with text that looked like identification numbers. PHI Hunter fills a niche role that is related to but not equal to the role of de-identification tools. It gives research staff a tool to reasonably increase patient privacy. It performs well for highly sensitive PHI categories that are rarely used in research, but still shows possible areas for improvement. More development for patterns of text and linked demographic tables from electronic health records (EHRs) would improve the program so that more precise identifiable information can be removed. PHI Hunter is an accessible tool that can flexibly remove PHI not needed for research. If it can be tailored to the specific data set via linked demographic tables, its performance will improve in each new document set.
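
    PHI Hunter itself is a SAS rule set whose rules are not reproduced in the abstract. The snippet below only illustrates the same general idea, pattern-based removal of identifiers such as Social Security, phone, and medical record numbers; the regular expressions and placeholder labels are assumptions, not the tool's actual rules.

```python
import re

# Illustrative patterns and labels only, not PHI Hunter's actual rules.
PHI_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\(?\b\d{3}\)?[-. ]?\d{3}[-. ]?\d{4}\b"),
    "MRN": re.compile(r"\bMRN[:\s]*\d{6,10}\b", re.IGNORECASE),
}

def scrub(text):
    """Replace each matched identifier with a category placeholder."""
    for label, pattern in PHI_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(scrub("Patient MRN: 00123456, call (555) 123-4567, SSN 123-45-6789."))
```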

  8. Three-dimensional grammar in the brain: Dissociating the neural correlates of natural sign language and manually coded spoken language.

    Science.gov (United States)

    Jednoróg, Katarzyna; Bola, Łukasz; Mostowski, Piotr; Szwed, Marcin; Boguszewski, Paweł M; Marchewka, Artur; Rutkowski, Paweł

    2015-05-01

    In several countries natural sign languages were considered inadequate for education. Instead, new sign-supported systems were created, based on the belief that spoken/written language is grammatically superior. One such system called SJM (system językowo-migowy) preserves the grammatical and lexical structure of spoken Polish and since 1960s has been extensively employed in schools and on TV. Nevertheless, the Deaf community avoids using SJM for everyday communication, its preferred language being PJM (polski język migowy), a natural sign language, structurally and grammatically independent of spoken Polish and featuring classifier constructions (CCs). Here, for the first time, we compare, with fMRI method, the neural bases of natural vs. devised communication systems. Deaf signers were presented with three types of signed sentences (SJM and PJM with/without CCs). Consistent with previous findings, PJM with CCs compared to either SJM or PJM without CCs recruited the parietal lobes. The reverse comparison revealed activation in the anterior temporal lobes, suggesting increased semantic combinatory processes in lexical sign comprehension. Finally, PJM compared with SJM engaged left posterior superior temporal gyrus and anterior temporal lobe, areas crucial for sentence-level speech comprehension. We suggest that activity in these two areas reflects greater processing efficiency for naturally evolved sign language. Copyright © 2015 Elsevier Ltd. All rights reserved.

  9. HomeNL: Homecare Assistance in Natural Language. An Intelligent Conversational Agent for Hypertensive Patients Management.

    OpenAIRE

    Rojas Barahona , Lina Maria; Quaglini , Silvana; Stefanelli , Mario

    2009-01-01

    International audience; The prospective home-care management will probably offer intelligent conversational assistants for supporting patients at home through natural language interfaces. Homecare assistance in natural language, HomeNL, is a proof-of-concept dialogue system for the management of patients with hypertension. It follows up a conversation with a patient in which the patient is able to take the initiative. HomeNL processes natural language, makes an internal representation...

  10. Towards multilingual access to textual databases in natural language

    International Nuclear Information System (INIS)

    Radwan, Khaled

    1994-01-01

    Cross-Lingual Information Retrieval (CLIR), or Multilingual Information Retrieval (MIR), has become a key issue for electronic document management systems in a multinational environment. We propose here a multilingual information retrieval system consisting of a morpho-syntactic analyser, a transfer system from source language to target language, and an information retrieval system. A thorough investigation of the system architecture and the transfer mechanisms is presented in this report, using two different performance evaluation methods. (author) [fr]

  11. Of Substance: The Nature of Language Effects on Entity Construal

    Science.gov (United States)

    Li, Peggy; Dunham, Yarrow; Carey, Susan

    2009-01-01

    Shown an entity (e.g., a plastic whisk) labeled by a novel noun in neutral syntax, speakers of Japanese, a classifier language, are more likely to assume the noun refers to the substance (plastic) than are speakers of English, a count/mass language, who are instead more likely to assume it refers to the object kind [whisk; Imai, M., & Gentner, D.…

  12. Adaptation of the Oswestry Disability Index to Kannada Language and Evaluation of Its Validity and Reliability.

    Science.gov (United States)

    Mohan, Venkatdeep; G S, Prashanth; Meravanigi, Gururaja; N, Rajagopalan; Yerramshetty, Janardhan

    2016-06-01

    A translation, cross-cultural adaptation, and validation study. The aim of this study was to translate, adapt cross-culturally, and validate the Kannada version of the Oswestry Disability Index (ODI). Low back pain is recognized as an important public health problem. Self-administered condition-specific questionnaires are important tools for assessing a patient. For low backache, the ODI is used widely. Preferred language of a region can have an effect on interpretation of questions and thus scoring. A search of literature showed no previously validated Kannada version of the ODI. Cross-cultural adaptation and translation was carried out according to previously set guidelines. Patients were recruited from the orthopedic outpatient department. They filled out a booklet containing the Kannada version of the ODI, Kannada version of the Roland Morris Disability Questionnaire (RMDQ), and a 10-point visual analog scale for pain (VASpain). The Kannada ODI was answered by 91 patients and retested in 35 patients. After removing questionnaires with stray or ambiguous markings causing difficulty in computation of scores, 76 test questionnaires and 32 retest questionnaires were available for statistical analysis. The Kannada version showed an excellent internal consistency (Cronbach's alpha = 0.92). The Kannada version of the ODI showed good correlation with the RMDQ (r = 0.72) and moderate correlation with VASpain (r = 0.58). It also showed an excellent test-retest reliability (ICC = 0.96). Standard error of measurement (SEM) was also low (4.08) and a difference of 11 points is the "Minimum Detectable Change (MDC)." The Kannada version of the ODI that was developed showed consistency and reliability. It can be used for assessment of low back pain and treatment outcomes in Kannada-speaking populations. However, in view of a smaller sample size, it will benefit from verification at multiple centers and with more patients. 3.
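
    The SEM and MDC quoted above follow the standard formulas SEM = SD * sqrt(1 - ICC) and MDC = 1.96 * sqrt(2) * SEM. The sketch below reproduces numbers in the reported range; the baseline standard deviation is an assumed value chosen to match the reported SEM, since it is not given in this record.

```python
import math

def sem_and_mdc(sd, icc, confidence_z=1.96):
    """Standard error of measurement and minimum detectable change."""
    sem = sd * math.sqrt(1 - icc)            # SEM = SD * sqrt(1 - ICC)
    mdc = confidence_z * math.sqrt(2) * sem  # MDC at the 95% confidence level
    return sem, mdc

# Assumed SD of 20.4 with the reported ICC of 0.96.
sem, mdc = sem_and_mdc(sd=20.4, icc=0.96)
print(round(sem, 1), round(mdc, 1))  # ~4.1 and ~11.3
```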

  13. Reliable Path Selection Problem in Uncertain Traffic Network after Natural Disaster

    Directory of Open Access Journals (Sweden)

    Jing Wang

    2013-01-01

    Full Text Available After a natural disaster, especially a large-scale disaster with wide affected areas, vast quantities of relief materials are often needed. At the same time, the traffic network is typically uncertain because of the disaster. In this paper, we assume that each edge in the network is either connected or blocked and that the connection probability of each edge is known. To ensure that supplies reach the affected areas, it is important to select a reliable path. A reliable path selection model is formulated, and two algorithms for solving this model are presented. An adjustable reliable path selection model is then proposed for the case in which an edge of the selected reliable path is broken. The corresponding algorithms are shown to be efficient both theoretically and numerically.
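
    With independent edge failures, the reliability of a path is the product of its edge connection probabilities, so the most reliable path can be found with a shortest-path search on weights -log(p). The sketch below is one possible reading of that formulation; the network, node names, and probabilities are invented, and the paper's own two algorithms and the adjustable model are not reproduced here.

```python
import heapq, math

def most_reliable_path(graph, source, target):
    """Maximize the product of edge connection probabilities by running
    Dijkstra on edge weights -log(p); assumes independent edge failures."""
    dist = {source: 0.0}
    previous = {}
    queue = [(0.0, source)]
    while queue:
        d, u = heapq.heappop(queue)
        if u == target:
            break
        if d > dist.get(u, math.inf):
            continue
        for v, p in graph.get(u, []):
            nd = d - math.log(p)
            if nd < dist.get(v, math.inf):
                dist[v], previous[v] = nd, u
                heapq.heappush(queue, (nd, v))
    path, node = [target], target
    while node != source:
        node = previous[node]
        path.append(node)
    return path[::-1], math.exp(-dist[target])

# Invented post-disaster network: node -> list of (neighbour, connection probability).
network = {"depot": [("A", 0.9), ("B", 0.7)],
           "A": [("shelter", 0.8)],
           "B": [("shelter", 0.95)]}
print(most_reliable_path(network, "depot", "shelter"))  # depot-A-shelter, reliability 0.72
```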

  14. Statistical learning in a natural language by 8-month-old infants.

    Science.gov (United States)

    Pelucchi, Bruna; Hay, Jessica F; Saffran, Jenny R

    2009-01-01

    Numerous studies over the past decade support the claim that infants are equipped with powerful statistical language learning mechanisms. The primary evidence for statistical language learning in word segmentation comes from studies using artificial languages, continuous streams of synthesized syllables that are highly simplified relative to real speech. To what extent can these conclusions be scaled up to natural language learning? In the current experiments, English-learning 8-month-old infants' ability to track transitional probabilities in fluent infant-directed Italian speech was tested (N = 72). The results suggest that infants are sensitive to transitional probability cues in unfamiliar natural language stimuli, and support the claim that statistical learning is sufficiently robust to support aspects of real-world language acquisition.
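
    Transitional probability in these segmentation studies is typically defined as TP(x -> y) = frequency(xy) / frequency(x). A minimal sketch, using an invented syllable stream rather than the Italian stimuli: in longer streams, within-word transitions stay near 1.0 while word-boundary transitions drop, which is the cue infants are thought to exploit.

```python
from collections import Counter

def transitional_probabilities(syllable_stream):
    """TP(x -> y) = frequency(xy) / frequency(x) over a syllable sequence."""
    syllables = syllable_stream.split()
    pair_counts = Counter(zip(syllables, syllables[1:]))
    single_counts = Counter(syllables[:-1])
    return {pair: count / single_counts[pair[0]] for pair, count in pair_counts.items()}

# Invented infant-directed stream built from two "words": fu-ga and me-lu.
stream = "fu ga me lu fu ga fu ga me lu"
for (x, y), tp in sorted(transitional_probabilities(stream).items()):
    print(f"{x}->{y}: {tp:.2f}")
```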

  15. Applications Associated With Morphological Analysis And Generation In Natural Language Processing

    Directory of Open Access Journals (Sweden)

    Neha Yadav

    2017-08-01

    Full Text Available Natural Language Processing is one of the fastest-developing research fields. In most applications related to Natural Language Processing, the findings of Morphological Analysis and Morphological Generation can be considered very important, since morphological analysis is the technique used to recognise a word and its output is used in later processing stages. Keeping this importance in view, this paper describes how Morphological Analysis and Morphological Generation form an important part of various Natural Language Processing applications such as spell checkers and machine translation.

  16. Reliability and validity of the Perceived Stress Scale-10 in Hispanic Americans with English or Spanish language preference.

    Science.gov (United States)

    Baik, Sharon H; Fox, Rina S; Mills, Sarah D; Roesch, Scott C; Sadler, Georgia Robins; Klonoff, Elizabeth A; Malcarne, Vanessa L

    2017-01-01

    This study examined the psychometric properties of the Perceived Stress Scale-10 among 436 community-dwelling Hispanic Americans with English or Spanish language preference. Multigroup confirmatory factor analysis examined the factorial invariance of the Perceived Stress Scale-10 across language groups. Results supported a two-factor model (negative, positive) with equivalent response patterns and item intercepts but different factor covariances across languages. Internal consistency reliability of the Perceived Stress Scale-10 total and subscale scores was good in both language groups. Convergent validity was supported by expected relationships of Perceived Stress Scale-10 scores to measures of anxiety and depression. These results support the use of the Perceived Stress Scale-10 among Hispanic Americans.

  17. A Grammar-Based Semantic Similarity Algorithm for Natural Language Sentences

    Directory of Open Access Journals (Sweden)

    Ming Che Lee

    2014-01-01

    Full Text Available This paper presents a grammar and semantic corpus based similarity algorithm for natural language sentences. Natural language, in opposition to “artificial language”, such as computer programming languages, is the language used by the general public for daily communication. Traditional information retrieval approaches, such as vector models, LSA, HAL, or even the ontology-based approaches that extend to include concept similarity comparison instead of cooccurrence terms/words, may not always determine the perfect matching while there is no obvious relation or concept overlap between two natural language sentences. This paper proposes a sentence similarity algorithm that takes advantage of corpus-based ontology and grammatical rules to overcome the addressed problems. Experiments on two famous benchmarks demonstrate that the proposed algorithm has a significant performance improvement in sentences/short-texts with arbitrary syntax and structure.

  18. A Grammar-Based Semantic Similarity Algorithm for Natural Language Sentences

    Science.gov (United States)

    Chang, Jia Wei; Hsieh, Tung Cheng

    2014-01-01

    This paper presents a grammar and semantic corpus based similarity algorithm for natural language sentences. Natural language, in opposition to “artificial language”, such as computer programming languages, is the language used by the general public for daily communication. Traditional information retrieval approaches, such as vector models, LSA, HAL, or even the ontology-based approaches that extend to include concept similarity comparison instead of cooccurrence terms/words, may not always determine the perfect matching while there is no obvious relation or concept overlap between two natural language sentences. This paper proposes a sentence similarity algorithm that takes advantage of corpus-based ontology and grammatical rules to overcome the addressed problems. Experiments on two famous benchmarks demonstrate that the proposed algorithm has a significant performance improvement in sentences/short-texts with arbitrary syntax and structure. PMID:24982952

  19. Computational Nonlinear Morphology with Emphasis on Semitic Languages. Studies in Natural Language Processing.

    Science.gov (United States)

    Kiraz, George Anton

    This book presents a tractable computational model that can cope with complex morphological operations, especially in Semitic languages, and less complex morphological systems present in Western languages. It outlines a new generalized regular rewrite rule system that uses multiple finite-state automata to cater to root-and-pattern morphology,…
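
    The book's multi-tape finite-state treatment is far richer than can be shown here, but the core idea of root-and-pattern (templatic) morphology can be illustrated with a toy interdigitation function. The root and vocalic patterns below are standard textbook examples; the code is an illustration, not Kiraz's formalism.

```python
def interdigitate(root, pattern):
    """Fill the C slots of a CV template with the consonants of a triliteral root."""
    consonants = iter(root)
    return "".join(next(consonants) if slot == "C" else slot for slot in pattern)

# Toy Semitic-style example: root k-t-b ('write') with two vocalic patterns.
print(interdigitate("ktb", "CaCaC"))  # katab
print(interdigitate("ktb", "CuCiC"))  # kutib
```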

  20. Multilingual natural language generation as part of a medical terminology server.

    Science.gov (United States)

    Wagner, J C; Solomon, W D; Michel, P A; Juge, C; Baud, R H; Rector, A L; Scherrer, J R

    1995-01-01

    Re-usable and sharable, and therefore language-independent concept models are of increasing importance in the medical domain. The GALEN project (Generalized Architecture for Languages Encyclopedias and Nomenclatures in Medicine) aims at developing language-independent concept representation systems as the foundations for the next generation of multilingual coding systems. For use within clinical applications, the content of the model has to be mapped to natural language. A so-called Multilingual Information Module (MM) establishes the link between the language-independent concept model and different natural languages. This text generation software must be versatile enough to cope at the same time with different languages and with different parts of a compositional model. It has to meet, on the one hand, the properties of the language as used in the medical domain and, on the other hand, the specific characteristics of the underlying model and its representation formalism. We propose a semantic-oriented approach to natural language generation that is based on linguistic annotations to a concept model. This approach is realized as an integral part of a Terminology Server, built around the concept model and offering different terminological services for clinical applications.

  1. Sensitivity and reliability of language laterality assessment with a free reversed association task - a fMRI study

    International Nuclear Information System (INIS)

    Fesl, Gunther; Brueckmann, Hartmut; Bruhns, Philipp; Rau, Sabine; Ilmberger, Josef; Wiesmann, Martin; Kegel, Gerd

    2010-01-01

    The aim of the study was to evaluate the sensitivity and reliability of assessing hemispheric language dominance with functional magnetic resonance imaging (fMRI) using a 'free reversed association task.' Thirty-nine healthy subjects (13 dextrals, 13 sinistrals and 13 bimanuals) underwent two repeated fMRI sessions. In the active phases sets of words were presented via headphones, and an associated target item was named. During the baseline phases a standard answer was given after listening to unintelligible stimuli. Data were preprocessed with SPM, and then laterality indices (LI) and reliability coefficients (RC) were calculated. Extensive frontal, temporal and parietal activations were found. Seventy-eight percent of the subjects showed left-hemispheric dominance, 5% showed right-hemispheric dominance, and 17% had bilateral language representations. The incidence of right-hemispheric language dominance was 4.3 times higher in a left-hander with a handedness quotient (HQ) of -90 than in a right-hander with a HQ of +90. The RC was 0.61 for combined ROIs (global network). Strong correlations were found between the two session LIs (r = 0.95 for the global network). 'Free reversed association' is a sensitive and reliable task for the determination of individual language lateralization. This suggests that the task may be used in a clinical setting. (orig.)
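
    Laterality indices in fMRI language studies are conventionally computed as LI = (L - R) / (L + R) over activated voxels (or signal) in homologous left and right regions of interest. A minimal sketch with invented voxel counts; the |LI| > 0.2 cut-off mentioned in the comment is a common convention, not necessarily the threshold used in this study.

```python
def laterality_index(left_voxels, right_voxels):
    """LI = (L - R) / (L + R); positive values indicate left-hemispheric dominance."""
    return (left_voxels - right_voxels) / (left_voxels + right_voxels)

# Invented activation counts for homologous language ROIs.
li = laterality_index(left_voxels=420, right_voxels=130)
print(round(li, 2))  # 0.53 -> left-dominant under a typical |LI| > 0.2 criterion
```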

  2. Sensitivity and reliability of language laterality assessment with a free reversed association task - a fMRI study

    Energy Technology Data Exchange (ETDEWEB)

    Fesl, Gunther; Brueckmann, Hartmut [University of Munich, Department of Neuroradiology, Campus Grosshadern, Marchioninistr. 15, 81377, Munich (Germany); Bruhns, Philipp [University of Munich, Department of Neuroradiology, Campus Grosshadern, Marchioninistr. 15, 81377, Munich (Germany); University of Munich, Department of Psycholinguistics, Munich (Germany); Rau, Sabine; Ilmberger, Josef [University of Munich, Department of Physical Medicine and Rehabilitation, Munich (Germany); Wiesmann, Martin [Helios Hospitals Schwerin, Department of Radiology and Neuroradiology, Schwerin (Germany); Kegel, Gerd [University of Munich, Department of Psycholinguistics, Munich (Germany)

    2010-03-15

    The aim of the study was to evaluate the sensitivity and reliability of assessing hemispheric language dominance with functional magnetic resonance imaging (fMRI) using a 'free reversed association task.' Thirty-nine healthy subjects (13 dextrals, 13 sinistrals and 13 bimanuals) underwent two repeated fMRI sessions. In the active phases sets of words were presented via headphones, and an associated target item was named. During the baseline phases a standard answer was given after listening to unintelligible stimuli. Data were preprocessed with SPM, and then laterality indices (LI) and reliability coefficients (RC) were calculated. Extensive frontal, temporal and parietal activations were found. Seventy-eight percent of the subjects showed left-hemispheric dominance, 5% showed right-hemispheric dominance, and 17% had bilateral language representations. The incidence of right-hemispheric language dominance was 4.3 times higher in a left-hander with a handedness quotient (HQ) of -90 than in a right-hander with a HQ of +90. The RC was 0.61 for combined ROIs (global network). Strong correlations were found between the two session LIs (r = 0.95 for the global network). 'Free reversed association' is a sensitive and reliable task for the determination of individual language lateralization. This suggests that the task may be used in a clinical setting. (orig.)

  3. A natural language interface plug-in for cooperative query answering in biological databases.

    Science.gov (United States)

    Jamil, Hasan M

    2012-06-11

    One of the many unique features of biological databases is that the mere existence of a ground data item is not always a precondition for a query response. It may be argued that from a biologist's standpoint, queries are not always best posed using a structured language. By this we mean that approximate and flexible responses to natural language like queries are well suited for this domain. This is partly due to biologists' tendency to seek simpler interfaces and partly due to the fact that questions in biology involve high level concepts that are open to interpretations computed using sophisticated tools. In such highly interpretive environments, rigidly structured databases do not always perform well. In this paper, our goal is to propose a semantic correspondence plug-in to aid natural language query processing over arbitrary biological database schema with an aim to providing cooperative responses to queries tailored to users' interpretations. Natural language interfaces for databases are generally effective when they are tuned to the underlying database schema and its semantics. Therefore, changes in database schema become impossible to support, or a substantial reorganization cost must be absorbed to reflect any change. We leverage developments in natural language parsing, rule languages and ontologies, and data integration technologies to assemble a prototype query processor that is able to transform a natural language query into a semantically equivalent structured query over the database. We allow knowledge rules and their frequent modifications as part of the underlying database schema. The approach we adopt in our plug-in overcomes some of the serious limitations of many contemporary natural language interfaces, including support for schema modifications and independence from underlying database schema. The plug-in introduced in this paper is generic and facilitates connecting user selected natural language interfaces to arbitrary databases using a

  4. Children's Foreign Language Anxiety Scale: Preliminary Tests of Reliability and Validity

    Science.gov (United States)

    Aydin, Selami; Harputlu, Leyla; Güzel, Serhat; Ustuk, Özgehan; Savran Çelik, Seyda; Genç, Deniz

    2016-01-01

    Foreign language anxiety (FLA), which constitutes a serious problem in the foreign language learning process, has been mainly seen as a research issue regarding adult language learners, while it has been overlooked in children. This is because there is no an appropriate tool to measure FLA among children, whereas there are many studies on the…

  5. The Children's Foreign Language Anxiety Scale: Reliability and Validity

    Science.gov (United States)

    Aydin, Selami; Harputlu, Leyla; Ustuk, Özgehan; Güzel, Serhat; Çelik, Seyda Savran

    2017-01-01

    Foreign language anxiety (FLA) has been mainly associated with adult language learners. Although FLA forms a serious problem in the foreign language learning process for all learners, the effects of FLA on children have been mainly overlooked. The underlying reason is that there is a lack of an appropriate measurement tool for FLA among children.…

  6. Language-Centered Social Studies: A Natural Integration.

    Science.gov (United States)

    Barrera, Rosalinda B.; Aleman, Magdalena

    1983-01-01

    Described is a newspaper project in which elementary students report life as it was in the Middle Ages. Students are involved in a variety of language-centered activities. For example, they gather and evaluate information about medieval times and write, edit, and proofread articles for the newspaper. (RM)

  7. From Monologue to Dialogue: Natural Language Generation in OVIS

    NARCIS (Netherlands)

    Theune, Mariet; Freedman, R.; Callaway, C.

    This paper describes how a language generation system that was originally designed for monologue generation, has been adapted for use in the OVIS spoken dialogue system. To meet the requirement that in a dialogue, the system’s utterances should make up a single, coherent dialogue turn, several

  8. Where humans meet machines innovative solutions for knotty natural-language problems

    CERN Document Server

    Markowitz, Judith

    2013-01-01

    Where Humans Meet Machines: Innovative Solutions for Knotty Natural-Language Problems brings humans and machines closer together by showing how linguistic complexities that confound the speech systems of today can be handled effectively by sophisticated natural-language technology. Some of the most vexing natural-language problems that are addressed in this book entail   recognizing and processing idiomatic expressions, understanding metaphors, matching an anaphor correctly with its antecedent, performing word-sense disambiguation, and handling out-of-vocabulary words and phrases. This fourteen-chapter anthology consists of contributions from industry scientists and from academicians working at major universities in North America and Europe. They include researchers who have played a central role in DARPA-funded programs and developers who craft real-world solutions for corporations. These contributing authors analyze the role of natural language technology in the global marketplace; they explore the need f...

  9. Integrating deep and shallow natural language processing components : representations and hybrid architectures

    OpenAIRE

    Schäfer, Ulrich

    2006-01-01

    We describe basic concepts and software architectures for the integration of shallow and deep (linguistics-based, semantics-oriented) natural language processing (NLP) components. The main goal of this novel, hybrid integration paradigm is improving robustness of deep processing. After an introduction to constraint-based natural language parsing, we give an overview of typical shallow processing tasks. We introduce XML standoff markup as an additional abstraction layer that eases integration ...

  10. Designing Service-Oriented Chatbot Systems Using a Construction Grammar-Driven Natural Language Generation System

    OpenAIRE

    Jenkins, Marie-Claire

    2011-01-01

    Service oriented chatbot systems are used to inform users in a conversational manner about a particular service or product on a website. Our research shows that current systems are time consuming to build and not very accurate or satisfying to users. We find that natural language understanding and natural language generation methods are central to creating an efficient and useful system. In this thesis we investigate current and past methods in this research area and place particular emph...

  11. Reliability analysis of 2400 MWth gas-cooled fast reactor natural circulation decay heat removal system

    International Nuclear Information System (INIS)

    Marques, M.; Bassi, C.; Bentivoglio, F.

    2012-01-01

    In support of a PSA (Probabilistic Safety Assessment) performed at the design level on the 2400 MWth Gas-cooled Fast Reactor, the functional reliability of the decay heat removal (DHR) system working in natural circulation has been estimated in two transient situations corresponding to an 'aggravated' Loss of Flow Accident (LOFA) and a Loss of Coolant Accident (LOCA). The reliability analysis was based on the RMPS methodology. Reliability and global sensitivity analyses use uncertainty propagation by Monte Carlo techniques. The DHR system consists of 1) three dedicated DHR loops: the choice of three loops (3*100% redundancy) is made assuming that one could be lost due to the accident initiating event (a break, for example) and that another must be assumed unavailable (single failure criterion); 2) a metallic guard containment enclosing the primary system (referred to as the close containment), not pressurized in normal operation, with a free volume such that the fast primary helium expansion gives an equilibrium pressure of 1.0 MPa in the first part of the transient (a few hours). Each dedicated DHR loop, designed to work in forced circulation with blowers or in natural circulation, is composed of 1) a primary loop (cross-duct connected to the core vessel), with a driving height of 10 meters between the core and the DHX mid-plane; 2) a secondary circuit filled with pressurized water at 1.0 MPa (driving height of 5 meters for natural circulation DHR); 3) a ternary pool, initially at 50 degrees C, whose volume is sized to handle one day of heat extraction (after this time delay, additional measures are foreseen to fill up the pool). The results obtained on the reliability of the DHR system and on the most important input parameters are very different from one scenario to the other, showing the necessity for the PSA to perform a specific reliability analysis of the passive system for each considered scenario. The analysis shows that the DHR system working in natural circulation is
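
    The RMPS-style functional reliability assessment mentioned above propagates uncertain input parameters through a thermal-hydraulic model by Monte Carlo sampling and counts the fraction of runs in which the passive system misses its mission criterion. The sketch below only illustrates that scheme: the distributions, the surrogate performance criterion, and all numbers are invented stand-ins, not the GFR model or its results.

```python
import random

def failure_probability(n_trials=100_000, seed=1):
    """Monte Carlo estimate of passive-system functional failure probability."""
    random.seed(seed)
    failures = 0
    for _ in range(n_trials):
        # Assumed input uncertainties (illustrative only).
        pool_temperature = random.gauss(50.0, 5.0)      # deg C
        containment_pressure = random.gauss(1.0, 0.05)  # MPa
        decay_power = random.gauss(1.0, 0.4)            # relative units
        # Surrogate criterion standing in for the thermal-hydraulic code:
        # natural circulation succeeds while this margin stays positive.
        margin = 2.0 * containment_pressure - 0.01 * (pool_temperature - 50.0) - decay_power
        if margin < 0:
            failures += 1
    return failures / n_trials

print(failure_probability())
```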

  12. Concreteness and Psychological Distance in Natural Language Use.

    Science.gov (United States)

    Snefjella, Bryor; Kuperman, Victor

    2015-09-01

    Existing evidence shows that more abstract mental representations are formed and more abstract language is used to characterize phenomena that are more distant from the self. Yet the precise form of the functional relationship between distance and linguistic abstractness is unknown. In four studies, we tested whether more abstract language is used in textual references to more geographically distant cities (Study 1), time points further into the past or future (Study 2), references to more socially distant people (Study 3), and references to a specific topic (Study 4). Using millions of linguistic productions from thousands of social-media users, we determined that linguistic concreteness is a curvilinear function of the logarithm of distance, and we discuss psychological underpinnings of the mathematical properties of this relationship. We also demonstrated that gradient curvilinear effects of geographic and temporal distance on concreteness are nearly identical, which suggests uniformity in representation of abstractness along multiple dimensions. © The Author(s) 2015.
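
    The reported functional form, concreteness as a curvilinear function of the logarithm of distance, can be fitted in a few lines; the data points below are invented for illustration and are not the social-media measurements.

```python
import numpy as np

distance_km = np.array([1, 5, 20, 100, 500, 2000, 8000], dtype=float)
concreteness = np.array([3.10, 2.90, 2.75, 2.60, 2.55, 2.50, 2.52])  # invented ratings

log_distance = np.log(distance_km)
a, b, c = np.polyfit(log_distance, concreteness, deg=2)  # quadratic in log(distance)
print(f"concreteness ~ {a:.3f}*log(d)^2 + {b:.3f}*log(d) + {c:.3f}")
```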

  13. Natural Language Processing with Small Feed-Forward Networks

    OpenAIRE

    Botha, Jan A.; Pitler, Emily; Ma, Ji; Bakalov, Anton; Salcianu, Alex; Weiss, David; McDonald, Ryan; Petrov, Slav

    2017-01-01

    We show that small and shallow feed-forward neural networks can achieve near state-of-the-art results on a range of unstructured and structured language processing tasks while being considerably cheaper in memory and computational requirements than deep recurrent models. Motivated by resource-constrained environments like mobile phones, we showcase simple techniques for obtaining such small neural network models, and investigate different tradeoffs when deciding how to allocate a small memory...

  14. From Monologue to Dialogue: Natural Language Generation in OVIS

    OpenAIRE

    Theune, Mariet; Freedman, R.; Callaway, C.

    2003-01-01

    This paper describes how a language generation system that was originally designed for monologue generation, has been adapted for use in the OVIS spoken dialogue system. To meet the requirement that in a dialogue, the system’s utterances should make up a single, coherent dialogue turn, several modifications had to be made to the system. The paper also discusses the influence of dialogue context on information status, and its consequences for the generation of referring expressions and accentu...

  15. Inferring Speaker Affect in Spoken Natural Language Communication

    OpenAIRE

    Pon-Barry, Heather Roberta

    2012-01-01

    The field of spoken language processing is concerned with creating computer programs that can understand human speech and produce human-like speech. Regarding the problem of understanding human speech, there is currently growing interest in moving beyond speech recognition (the task of transcribing the words in an audio stream) and towards machine listening—interpreting the full spectrum of information in an audio stream. One part of machine listening, the problem that this thesis focuses on, ...

  16. Crowdsourcing a normative natural language dataset: a comparison of Amazon Mechanical Turk and in-lab data collection.

    Science.gov (United States)

    Saunders, Daniel R; Bex, Peter J; Woods, Russell L

    2013-05-20

    to quickly and economically collect a large reliable dataset of normative natural language responses.

  17. The oscillopathic nature of language deficits in autism: from genes to language evolution

    Directory of Open Access Journals (Sweden)

    Antonio eBenítez-Burraco

    2016-03-01

    Full Text Available Autism spectrum disorders (ASD) are pervasive neurodevelopmental disorders involving a number of deficits to linguistic cognition. The gap between genetics and the pathophysiology of ASD remains open, in particular regarding its distinctive linguistic profile. The goal of this paper is to attempt to bridge this gap, focusing on how the autistic brain processes language, particularly through the perspective of brain rhythms. Due to the phenomenon of pleiotropy, which may take some decades to overcome, we believe that studies of brain rhythms, which are not faced with problems of this scale, may constitute a more tractable route to interpreting language deficits in ASD and eventually other neurocognitive disorders. Building on recent attempts to link neural oscillations to certain computational primitives of language, we show that interpreting language deficits in ASD as oscillopathic traits is a potentially fruitful way to construct successful endophenotypes of this condition. Additionally, we will show that candidate genes for ASD are overrepresented among the genes that played a role in the evolution of language. These genes include (and are related to) genes involved in brain rhythmicity. We hope that the type of steps taken here will additionally lead to a better understanding of the comorbidity, heterogeneity, and variability of ASD, and may help achieve a better treatment of the affected populations.

  18. Reliability and validity of the Pragmatics Observational Measure (POM): a new observational measure of pragmatic language for children.

    Science.gov (United States)

    Cordier, Reinie; Munro, Natalie; Wilkes-Gillan, Sarah; Speyer, Renée; Pearce, Wendy M

    2014-07-01

    There is a need for a reliable and valid assessment of childhood pragmatic language skills during peer-peer interactions. This study aimed to evaluate the psychometric properties of a newly developed pragmatic assessment, the Pragmatic Observational Measure (POM). The psychometric properties of the POM were investigated from observational data of two studies - study 1 involved 342 children aged 5-11 years (108 children with ADHD; 108 typically developing playmates; 126 children in the control group), and study 2 involved 9 children with ADHD who attended a 7-week play-based intervention. The psychometric properties of the POM were determined based on the COnsensus-based Standards for the selection of health status Measurement INstruments (COSMIN) taxonomy of psychometric properties and definitions for health-related outcomes; the Pragmatic Protocol was used as the reference tool against which the POM was evaluated. The POM demonstrated sound psychometric properties in all the reliability, validity and interpretability criteria against which it was assessed. The findings showed that the POM is a reliable and valid measure of pragmatic language skills of children with ADHD between the age of 5 and 11 years and has clinical utility in identifying children with pragmatic language difficulty. Copyright © 2014 The Authors. Published by Elsevier Ltd.. All rights reserved.

  19. From language to nature: The semiotic metaphor in biology

    DEFF Research Database (Denmark)

    Emmeche, Claus; Hoffmeyer, Jesper Normann

    1991-01-01

    be of considerable value, not only heuristically, but in order to comprehend the irreducible nature of living organisms. In arguing for a semiotic perspective on living nature, it makes a marked difference whether the departure is made from the tradition of F. de Saussure's structural linguistics or from...

  20. Natural circulation in water cooled nuclear power plants: Phenomena, models, and methodology for system reliability assessments

    International Nuclear Information System (INIS)

    2005-11-01

    In recent years it has been recognized that the application of passive safety systems (i.e. those whose operation takes advantage of natural forces such as convection and gravity), can contribute to simplification and potentially to improved economics of new nuclear power plant designs. Further, the IAEA Conference on The Safety of Nuclear Power: Strategy for the Future which was convened in 1991 noted that for new plants 'the use of passive safety features is a desirable method of achieving simplification and increasing the reliability of the performance of essential safety functions, and should be used wherever appropriate'. Considering the weak driving forces of passive systems based on natural circulation, careful design and analysis methods must be employed to assure that the systems perform their intended functions. To support the development of advanced water cooled reactor designs with passive systems, investigations of natural circulation are an ongoing activity in several IAEA Member States. Some new designs also utilize natural circulation as a means to remove core power during normal operation. In response to the motivating factors discussed above, and to foster international collaboration on the enabling technology of passive systems that utilize natural circulation, an IAEA Coordinated Research Project (CRP) on Natural Circulation Phenomena, Modelling and Reliability of Passive Systems that Utilize Natural Circulation was started in early 2004. Building on the shared expertise within the CRP, this publication presents extensive information on natural circulation phenomena, models, predictive tools and experiments that currently support design and analyses of natural circulation systems and highlights areas where additional research is needed. Therefore, this publication serves both to provide a description of the present state of knowledge on natural circulation in water cooled nuclear power plants and to guide the planning and conduct of the CRP in

  1. The Neck Disability Index-Russian Language Version (NDI-RU): A Study of Validity and Reliability.

    Science.gov (United States)

    Bakhtadze, Maxim A; Vernon, Howard; Zakharova, Olga B; Kuzminov, Kirill O; Bolotov, Dmitry A

    2015-07-15

    Cross-cultural adaptation and psychometric testing. To perform a validated Russian translation and then to evaluate the validity and reliability of the Russian language version of the Neck Disability Index (NDI-RU). Neck pain is highly prevalent and can greatly affect daily activity. The Neck Disability Index (NDI) is the most frequently used scale for self-rating of disability due to neck pain. Its translated versions are applied in many countries. However, the Russian language version of the NDI has not been developed yet. Cross-cultural adaptation of the NDI-RU was performed according to established guidelines. Then, the NDI-RU was evaluated for content validity, concurrent criterion validity, internal consistency, test-retest reliability, factor structure, and minimum detectable change. Two hundred thirty-two patients took part in the study in total: 109 in validity (39.5 ± 10 yr), 123 in reliability (38.4 ± 11 yr; 80 in the test-retest phase). A culturally valid translation was achieved. NDI-RU total scores were distributed normally. Floor/ceiling effects were absent. Good values of Cronbach α were obtained for each item (from 0.80 to 0.84) and for the total NDI-RU (0.83). A 2-factor solution was found for the NDI-RU. The average interitem correlation coefficient was 0.53. Intraclass correlation coefficients for test-retest reliability ranged from 0.65 to 0.92 for different items and were 0.91 for the total NDI-RU. A moderate correlation was also found (Spearman rs = 0.62). The Russian language version of the Neck Disability Index proved to be a valid, reliable instrument that can be used both in clinical practice and in scientific investigations. 1.

  2. Semantic similarity from natural language and ontology analysis

    CERN Document Server

    Harispe, Sébastien; Janaqi, Stefan

    2015-01-01

    Artificial Intelligence federates numerous scientific fields in the aim of developing machines able to assist human operators performing complex treatments---most of which demand high cognitive skills (e.g. learning or decision processes). Central to this quest is to give machines the ability to estimate the likeness or similarity between things in the way human beings estimate the similarity between stimuli.In this context, this book focuses on semantic measures: approaches designed for comparing semantic entities such as units of language, e.g. words, sentences, or concepts and instances def

  3. Reliability of the Dutch-language version of the Communication Function Classification System and its association with language comprehension and method of communication.

    Science.gov (United States)

    Vander Zwart, Karlijn E; Geytenbeek, Joke J; de Kleijn, Maaike; Oostrom, Kim J; Gorter, Jan Willem; Hidecker, Mary Jo Cooley; Vermeulen, R Jeroen

    2016-02-01

    The aims of this study were to determine the intra- and interrater reliability of the Dutch-language version of the Communication Function Classification System (CFCS-NL) and to investigate the association between the CFCS level and (1) spoken language comprehension and (2) preferred method of communication in children with cerebral palsy (CP). Participants were 93 children with CP (50 males, 43 females; mean age 7y, SD 2y 6mo, range 2y 9mo-12y 10mo; unilateral spastic [n=22], bilateral spastic [n=51], dyskinetic [n=15], ataxic [n=3], not specified [n=2]; Gross Motor Function Classification System level I [n=16], II [n=14], III [n=7], IV [n=24], V [n=31], unknown [n=1]), recruited from rehabilitation centres throughout the Netherlands. Because some centres only contributed to part of the study, different numbers of participants are presented for different aspects of the study. Parents and speech and language therapists (SLTs) classified the communication level using the CFCS. Kappa was used to determine the intra- and interrater reliability. Spearman's correlation coefficient was used to determine the association between CFCS level and spoken language comprehension, and Fisher's exact test was used to examine the association between the CFCS level and method of communication. Interrater reliability of the CFCS-NL between parents and SLTs was fair (r=0.54), between SLTs good (r=0.78), and the intrarater (SLT) reliability very good (r=0.85). The association between the CFCS and spoken language comprehension was strong for SLTs (r=0.63) and moderate for parents (r=0.51). There was a statistically significant difference between the CFCS level and the preferred method of communication of the child. These findings support the use of the CFCS-NL to classify everyday communication in children with CP. Preferably, professionals should classify the child's CFCS level in collaboration with the parents to acquire the most comprehensive information about the everyday communication of the child in various situations both with familiar and

  4. Database Capture of Natural Language Echocardiographic Reports: A Unified Medical Language System Approach

    OpenAIRE

    Canfield, K.; Bray, B.; Huff, S.; Warner, H.

    1989-01-01

    We describe a prototype system for semi-automatic database capture of free-text echocardiography reports. The system is very simple and uses a Unified Medical Language System compatible architecture. We use this system and a large body of texts to create a patient database and develop a comprehensive hierarchical dictionary for echocardiography.

  5. Telehealth language assessments using consumer grade equipment in rural and urban settings: Feasible, reliable and well tolerated.

    Science.gov (United States)

    Sutherland, Rebecca; Trembath, David; Hodge, Antoinette; Drevensek, Suzi; Lee, Sabrena; Silove, Natalie; Roberts, Jacqueline

    2017-01-01

    Introduction Telehealth can be an effective way to provide speech pathology intervention to children with speech and language impairments. However, the provision of reliable and feasible standardised language assessments via telehealth to establish children's needs for intervention and to monitor progress has not yet been well established. Further, there is limited information about children's reactions to telehealth. This study aimed to examine the reliability and feasibility of conducting standardised language assessment with school-aged children with known or suspected language impairment via a telehealth application using consumer grade computer equipment within a public school setting. Method Twenty-three children (aged 8-12 years) participated. Each child was assessed using a standardised language assessment comprising six subtests. Two subtests were administered by a speech pathologist face-to-face (local clinician) and four subtests were administered via telehealth. All subtests were completed within a single visit to the clinic service, with a break between the face to face and telehealth sessions. The face-to-face clinician completed behaviour observation checklists in the telehealth and face to face conditions and provided feedback on the audio and video quality of the application from the child's point of view. Parent feedback about their child's experience was elicited via survey. Results There was strong inter-rater reliability in the telehealth and face-to-face conditions (correlation coefficients ranged from r = 0.96-1.0 across the subtests) and good agreement on all measures. Similar levels of attention, distractibility and anxiety were observed in the two conditions. Clinicians rated only one session of 23 as having poor audio quality and no sessions were rated as having poor visual quality. Parent and child reactions to the use of telehealth were largely positive and supportive of using telehealth to assess rural children. Discussion The

  6. Using Edit Distance to Analyse Errors in a Natural Language to Logic Translation Corpus

    Science.gov (United States)

    Barker-Plummer, Dave; Dale, Robert; Cox, Richard; Romanczuk, Alex

    2012-01-01

    We have assembled a large corpus of student submissions to an automatic grading system, where the subject matter involves the translation of natural language sentences into propositional logic. Of the 2.3 million translation instances in the corpus, 286,000 (approximately 12%) are categorized as being in error. We want to understand the nature of…
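
    Edit distance here is presumably the standard Levenshtein distance between a student's submission and a reference form; a compact dynamic-programming sketch follows. The example strings are invented, not items from the corpus.

```python
def edit_distance(a, b):
    """Levenshtein distance with unit insert/delete/substitute costs."""
    previous = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        current = [i]
        for j, cb in enumerate(b, start=1):
            current.append(min(previous[j] + 1,                 # deletion
                               current[j - 1] + 1,              # insertion
                               previous[j - 1] + (ca != cb)))   # substitution
        previous = current
    return previous[-1]

# Invented student answer vs. reference translation of a logic exercise.
print(edit_distance("P & (Q | R)", "P & (Q v R)"))  # 1
```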

  7. The cosmic code quantum physics as the language of nature

    CERN Document Server

    Pagels, Heinz R

    2012-01-01

    ""The Cosmic Code can be read by anyone. I heartily recommend it!"" - The New York Times Book Review""A reliable guide for the nonmathematical reader across the highest ridges of physical theory. Pagels is unfailingly lighthearted and confident."" - Scientific American""A sound, clear, vital work that deserves the attention of anyone who takes an interest in the relationship between material reality and the human mind."" - Science 82This is one of the most important books on quantum mechanics ever written for general readers. Heinz Pagels, an eminent physicist and science writer, discusses and

  8. Visual statistical learning is related to natural language ability in adults: An ERP study.

    Science.gov (United States)

    Daltrozzo, Jerome; Emerson, Samantha N; Deocampo, Joanne; Singh, Sonia; Freggens, Marjorie; Branum-Martin, Lee; Conway, Christopher M

    2017-03-01

    Statistical learning (SL) is believed to enable language acquisition by allowing individuals to learn regularities within linguistic input. However, neural evidence supporting a direct relationship between SL and language ability is scarce. We investigated whether there are associations between event-related potential (ERP) correlates of SL and language abilities while controlling for the general level of selective attention. Seventeen adults completed tests of visual SL, receptive vocabulary, grammatical ability, and sentence completion. Response times and ERPs showed that SL is related to receptive vocabulary and grammatical ability. ERPs indicated that the relationship between SL and grammatical ability was independent of attention while the association between SL and receptive vocabulary depended on attention. The implications of these dissociative relationships in terms of underlying mechanisms of SL and language are discussed. These results further elucidate the cognitive nature of the links between SL mechanisms and language abilities. Copyright © 2017 Elsevier Inc. All rights reserved.

  9. Evaluation of uncertainty in the measurement of sense of natural language constructions

    Directory of Open Access Journals (Sweden)

    Bisikalo Oleg V.

    2017-01-01

    Full Text Available The task of evaluating uncertainty in the measurement of sense in natural language constructions (NLCs) was investigated through formalization of the notion of the language image, of artificial cognitive systems (ACSs), and of units of meaning. The method for measuring the sense of natural language constructions incorporates fuzzy relations of meaning, which ensures that information about the links between lemmas of the text is taken into account and permits the evaluation of two types of measurement uncertainty of sense characteristics. Using the developed application programs, experiments were conducted to investigate the proposed method for identifying the informative characteristics of a text. The experiments yielded parameter dependencies that allow the Pareto distribution law to be used to describe relations between lemmas; analysis of these dependencies identifies the exponent of the average number of connections in the language image as the most informative characteristic of a text.

  10. Deciphering the language of nature: cryptography, secrecy, and alterity in Francis Bacon.

    Science.gov (United States)

    Clody, Michael C

    2011-01-01

    The essay argues that Francis Bacon's considerations of parables and cryptography reflect larger interpretative concerns of his natural philosophic project. Bacon describes nature as having a language distinct from those of God and man, and, in so doing, establishes a central problem of his natural philosophy—namely, how can the language of nature be accessed through scientific representation? Ultimately, Bacon's solution relies on a theory of differential and duplicitous signs that conceal within them the hidden voice of nature, which is best recognized in the natural forms of efficient causality. The "alphabet of nature"—those tables of natural occurrences—consequently plays a central role in his program, as it renders nature's language susceptible to a process and decryption that mirrors the model of the bilateral cipher. It is argued that while the writing of Bacon's natural philosophy strives for literality, its investigative process preserves a space for alterity within scientific representation, that is made accessible to those with the interpretative key.

  11. Interactive natural language acquisition in a multi-modal recurrent neural architecture

    Science.gov (United States)

    Heinrich, Stefan; Wermter, Stefan

    2018-01-01

    For the complex human brain that enables us to communicate in natural language, we have gathered good understandings of the principles underlying language acquisition and processing, knowledge about sociocultural conditions, and insights into activity patterns in the brain. However, we are not yet able to fully characterise the behavioural and mechanistic foundations of natural language, or how mechanisms in the brain allow us to acquire and process language. Bridging insights from behavioural psychology and neuroscience, the goal of this paper is to contribute a computational understanding of the characteristics that favour language acquisition. Accordingly, we provide concepts and refinements in cognitive modelling regarding principles and mechanisms in the brain, and propose a neurocognitively plausible model for embodied language acquisition from the real-world interaction of a humanoid robot with its environment. In particular, the architecture consists of a continuous-time recurrent neural network in which different parts have different leakage characteristics, and thus operate on multiple timescales for every modality, and in which the higher-level nodes of all modalities are associated into cell assemblies. The model is capable of learning language production grounded in both temporal dynamic somatosensation and vision, and features hierarchical concept abstraction, concept decomposition, multi-modal integration, and self-organisation of latent representations.
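
    An illustrative numpy sketch of the leaky-integrator (continuous-time) recurrent layer underlying such multiple-timescale architectures is given below; the layer sizes, timescale constants tau, and random inputs are illustrative assumptions and not the parameters of the published model.

      import numpy as np

      class LeakyRNNLayer:
          """Discrete-time approximation of a continuous-time recurrent layer.
          A large tau makes the layer integrate slowly (a long timescale);
          tau = 1 reduces it to a conventional recurrent layer."""
          def __init__(self, n_in, n_hidden, tau, seed=0):
              rng = np.random.default_rng(seed)
              self.tau = float(tau)
              self.W_in = rng.normal(0.0, 0.1, (n_hidden, n_in))
              self.W_rec = rng.normal(0.0, 0.1, (n_hidden, n_hidden))
              self.b = np.zeros(n_hidden)
              self.u = np.zeros(n_hidden)          # internal (membrane) state

          def step(self, x):
              # Leaky integration: the state moves towards the new net input
              # at a rate 1/tau, so each layer operates on its own timescale.
              net = self.W_in @ x + self.W_rec @ np.tanh(self.u) + self.b
              self.u = (1.0 - 1.0 / self.tau) * self.u + (1.0 / self.tau) * net
              return np.tanh(self.u)

      # A fast "perception" layer feeding a slow "context" layer (hypothetical sizes).
      fast = LeakyRNNLayer(n_in=8, n_hidden=16, tau=2.0)
      slow = LeakyRNNLayer(n_in=16, n_hidden=8, tau=16.0)
      rng = np.random.default_rng(42)
      for t in range(50):
          h_fast = fast.step(rng.normal(size=8))
          h_slow = slow.step(h_fast)
      print(h_slow.round(3))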

  12. Dependency distance: A new perspective on syntactic patterns in natural languages

    Science.gov (United States)

    Liu, Haitao; Xu, Chunshan; Liang, Junying

    2017-07-01

    Dependency distance, measured by the linear distance between two syntactically related words in a sentence, is generally held to be an important index of memory burden and an indicator of syntactic difficulty. Since this memory constraint is common to all human beings, there may well be a universal preference for dependency distance minimization (DDM) for the sake of reducing memory burden. This human-driven language universal is supported by big-data analyses of various corpora, which consistently report both shorter overall dependency distances in natural languages than in artificial random languages, and long-tailed distributions featuring a majority of short dependencies and a minority of long ones. Human languages, as complex systems, seem to have evolved to come up with diverse syntactic patterns under the universal pressure for dependency distance minimization. However, there always exist a small number of long-distance dependencies in natural languages, which may reflect other biological or functional constraints. The language system may adapt itself to these sporadic long-distance dependencies. It is these universal constraints that have shaped the rich diversity of syntactic patterns in human languages.
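
    As a concrete illustration of the measure itself, the short sketch below computes the mean dependency distance of a sentence from a list of 1-based head positions; the example parse is a hypothetical toy sentence, not data from the corpora analysed in the paper.

      def mean_dependency_distance(heads):
          """Average linear distance |dependent - head| over all non-root words.
          heads[i] is the 1-based position of the head of word i+1; 0 marks the root."""
          distances = [abs((i + 1) - h) for i, h in enumerate(heads) if h != 0]
          return sum(distances) / len(distances)

      # "The cat sat on the mat": The->cat, cat->sat, sat=root, on->sat, the->mat, mat->on
      heads = [2, 3, 0, 3, 6, 4]
      print(mean_dependency_distance(heads))   # -> 1.2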

  14. Analyzing the Gap between Workflows and their Natural Language Descriptions

    NARCIS (Netherlands)

    Groth, P.T.; Gil, Y

    2009-01-01

    Scientists increasingly use workflows to represent and share their computational experiments. Because of their declarative nature, their focus on pre-existing component composition, and the availability of visual editors, workflows provide a valuable starting point for creating user-friendly environments for end users.

  15. Research in Knowledge Representation for Natural Language Understanding

    Science.gov (United States)

    1983-10-01

    how a Concept specializes its subsumer. |C|ANIMAL, |C|PLANT, |C|PERSON, and |C|UNICORN are natural kinds, and so will need a PrimitiveClass. As...build this proof, we must build a proof of p x (p x n) steps. The size of the proofs grows exponentially with the depth of nesting. This is clearly

  16. Never-Ending Learning for Deep Understanding of Natural Language

    Science.gov (United States)

    2017-10-01

    fundamental to knowledge management problems. In [Wijaya13] we presented a novel approach to this ontology alignment problem that employs a very large natural...to them. This report is the result of contracted fundamental research deemed exempt from public affairs security and policy review in accordance...

  17. Linguistic fundamentals for natural language processing 100 essentials from morphology and syntax

    CERN Document Server

    Bender, Emily M

    2013-01-01

    Many NLP tasks have at their core a subtask of extracting the dependencies (who did what to whom) from natural language sentences. This task can be understood as the inverse of the problem solved in different ways by diverse human languages, namely, how to indicate the relationship between different parts of a sentence. Understanding how languages solve the problem can be extremely useful in both feature design and error analysis in the application of machine learning to NLP. Likewise, understanding cross-linguistic variation can be important for the design of MT systems and other multilingual applications.

  18. Stochastic Model for the Vocabulary Growth in Natural Languages

    Directory of Open Access Journals (Sweden)

    Martin Gerlach

    2013-05-01

    Full Text Available We propose a stochastic model for the number of different words in a given database which incorporates the dependence on the database size and historical changes. The main feature of our model is the existence of two different classes of words: (i) a finite number of core words, which have higher frequency and do not affect the probability of a new word to be used, and (ii) the remaining virtually infinite number of noncore words, which have lower frequency and, once used, reduce the probability of a new word to be used in the future. Our model relies on a careful analysis of the Google Ngram database of books published in the last centuries, and its main consequence is the generalization of Zipf’s and Heaps’ law to two-scaling regimes. We confirm that these generalizations yield the best simple description of the data among generic descriptive models and that the two free parameters depend only on the language but not on the database. From the point of view of our model, the main change on historical time scales is the composition of the specific words included in the finite list of core words, which we observe to decay exponentially in time with a rate of approximately 30 words per year for English.
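
    A toy simulation in the spirit of this two-class picture can make the mechanism concrete; the parameter values and the particular decay function used below are illustrative assumptions, not the model actually fitted to the Google Ngram data.

      import random

      def simulate_vocabulary_growth(n_tokens, n_core=100, p_core=0.5, gamma=0.3, seed=1):
          """Toy two-class model: core words never add new types, while a noncore
          token introduces a brand-new type with a probability that decays with the
          number of noncore types already used, giving Heaps-like sublinear growth."""
          rng = random.Random(seed)
          noncore_types = 0
          vocab = []
          for _ in range(n_tokens):
              if rng.random() >= p_core:                      # a noncore token
                  p_new = 1.0 / (1.0 + noncore_types) ** gamma
                  if rng.random() < p_new:
                      noncore_types += 1
              vocab.append(n_core + noncore_types)
          return vocab

      growth = simulate_vocabulary_growth(100_000)
      for n in (1_000, 10_000, 100_000):
          print(n, growth[n - 1])      # vocabulary size grows sublinearly with text size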

  19. Natural Circulation in Water Cooled Nuclear Power Plants Phenomena, models, and methodology for system reliability assessments

    Energy Technology Data Exchange (ETDEWEB)

    Jose Reyes

    2005-02-14

    In recent years it has been recognized that the application of passive safety systems (i.e., those whose operation takes advantage of natural forces such as convection and gravity), can contribute to simplification and potentially to improved economics of new nuclear power plant designs. In 1991 the IAEA Conference on ''The Safety of Nuclear Power: Strategy for the Future'' noted that for new plants ''the use of passive safety features is a desirable method of achieving simplification and increasing the reliability of the performance of essential safety functions, and should be used wherever appropriate''.

  20. Natural Circulation in Water Cooled Nuclear Power Plants Phenomena, models, and methodology for system reliability assessments

    International Nuclear Information System (INIS)

    Jose Reyes

    2005-01-01

    In recent years it has been recognized that the application of passive safety systems (i.e., those whose operation takes advantage of natural forces such as convection and gravity), can contribute to simplification and potentially to improved economics of new nuclear power plant designs. In 1991 the IAEA Conference on ''The Safety of Nuclear Power: Strategy for the Future'' noted that for new plants ''the use of passive safety features is a desirable method of achieving simplification and increasing the reliability of the performance of essential safety functions, and should be used wherever appropriate''.

  1. Reliability of the Test of Integrated Language and Literacy Skills (TILLS)

    Science.gov (United States)

    Mailend, Marja-Liisa; Plante, Elena; Anderson, Michele A.; Applegate, E. Brooks; Nelson, Nickola W.

    2016-01-01

    Background: As new standardized tests become commercially available, it is critical that clinicians have access to the information about a test's psychometric properties, including aspects of reliability. Aims: The purpose of the three studies reported in this article was to investigate the reliability of a new test, the Test of Integrated…

  2. An algorithm to transform natural language into SQL queries for relational databases

    Directory of Open Access Journals (Sweden)

    Garima Singh

    2016-09-01

    Full Text Available An intelligent interface that enhances efficient interaction between users and databases is a pressing need for database applications. Databases must be intelligent enough to make access faster. However, not every user is familiar with Structured Query Language (SQL) queries, as they may not be aware of the structure of the database and would thus have to learn SQL. Non-expert users therefore need a system that lets them interact with relational databases in their natural language, such as English. For this, the Database Management System (DBMS) must be able to understand Natural Language (NL). In this research, an intelligent interface is developed using a semantic matching technique which translates a natural language query to SQL using a set of production rules and a data dictionary. The data dictionary consists of semantic sets for relations and attributes. A series of steps, including lower-case conversion, tokenization, part-of-speech tagging, and database element and SQL element extraction, is used to convert a Natural Language Query (NLQ) to an SQL query. The transformed query is executed and the results are returned to the user. Such an intelligent interface enhances the efficiency of interaction between the user and the DBMS.
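
    A heavily simplified sketch of such a rule-based translation step is given below; the relation, attributes and synonym sets form a hypothetical data dictionary and are not those used in the paper.

      import re

      # Hypothetical data dictionary: semantic sets for one relation and its attributes.
      DATA_DICTIONARY = {
          "relation": {"student": {"student", "students", "pupils"}},
          "attributes": {
              "name": {"name", "names", "called"},
              "marks": {"marks", "scores", "grades"},
          },
      }

      def nlq_to_sql(query):
          """Lower-case, tokenize, then map tokens to database elements via the dictionary."""
          tokens = set(re.findall(r"[a-z]+", query.lower()))
          relation = next((r for r, syn in DATA_DICTIONARY["relation"].items() if tokens & syn), None)
          if relation is None:
              raise ValueError("no known relation mentioned in the query")
          columns = [a for a, syn in DATA_DICTIONARY["attributes"].items() if tokens & syn]
          return f"SELECT {', '.join(columns) or '*'} FROM {relation};"

      print(nlq_to_sql("Show the names and marks of all students"))
      # -> SELECT name, marks FROM student;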

  3. Selecting the Best Mobile Information Service with Natural Language User Input

    Science.gov (United States)

    Feng, Qiangze; Qi, Hongwei; Fukushima, Toshikazu

    Information services accessed via mobile phones provide information directly relevant to subscribers’ daily lives and are an area of dynamic market growth worldwide. Although many information services are currently offered by mobile operators, many of the existing solutions require a unique gateway for each service, and it is inconvenient for users to have to remember a large number of such gateways. Furthermore, the Short Message Service (SMS) is very popular in China, and Chinese users would prefer to access these services in natural language via SMS. This chapter describes a Natural Language Based Service Selection System (NL3S) for use with a large number of mobile information services. The system can accept user queries in natural language and navigate the user to the required service. Since it is difficult for existing methods to achieve high accuracy and high coverage and to anticipate which other services a user might want to query, the NL3S is developed on the basis of a Multi-service Ontology (MO) and a Multi-service Query Language (MQL). The MO and MQL provide semantic and linguistic knowledge, respectively, to facilitate service selection for a user query and to provide adaptive service recommendations. Experiments show that the NL3S achieves accuracies of 75-95% and satisfaction rates of 85-95% when processing various styles of natural language queries. A trial involving navigation of 30 different mobile services shows that the NL3S can provide a viable commercial solution for mobile operators.

  4. MILROY, Lesley. Observing and Analysing Natural Language: A Critical Account of Sociolinguistic Method. Oxford: Basil Blackwell, 1987. 230pp.

    Directory of Open Access Journals (Sweden)

    Iria Werlang Garcia

    2008-04-01

    Full Text Available Lesley Milroy's Observing and Analysing Natural Language is a recent addition to an ever growing number of publications in the field of Sociolinguistics. It carries the weight of one of the most experienced authors currently working in the field and should offer basic information to both newcomers and established investigators in natural language.

  5. Research in Knowledge Representation for Natural Language Understanding

    Science.gov (United States)

    1981-11-01

    interpretation would not be too bad if one were to believe that a frame "is intended to represent a 'stereotypical situation'" ([24], p. 48). We...natural kind-like concepts - some form of definitional structuring is necessary. The internal structure of non-atomic concepts (e.g., proximate genus...types of beer, bottles of wine, etc.; <x> need not be any sort of 'natural genus.' For example, in D11 the definite pronoun 'them' is not meant to

  6. Automated Trait Extraction using ClearEarth, a Natural Language Processing System for Text Mining in Natural Sciences

    OpenAIRE

    Thessen, Anne; Preciado, Jenette; Jain, Payoj; Martin, James; Palmer, Martha; Bhat, Riyaz

    2018-01-01

    The cTAKES package (using the ClearTK Natural Language Processing toolkit Bethard et al. 2014, http://cleartk.github.io/cleartk/) has been successfully used to automatically read clinical notes in the medical field (Albright et al. 2013, Styler et al. 2014). It is used on a daily basis to automatically process clinical notes and extract relevant information by dozens of medical institutions. ClearEarth is a collaborative project that brings together computational linguistics and domain scientists...

  7. Incidence Rate of Canonical vs. Derived Medical Terminology in Natural Language.

    Science.gov (United States)

    Topac, Vasile; Jurcau, Daniel-Alexandru; Stoicu-Tivadar, Vasile

    2015-01-01

    Medical terminology appears in natural language in multiple forms: canonical, derived or inflected form. This research presents an analysis of the form in which medical terminology appears in the Romanian and English languages. The sources of medical language used for the study are web pages presenting medical information for patients and other lay users. The results show that, in English, medical terminology tends to appear more in canonical form while, in the case of Romanian, it is the opposite. This paper also presents the service that was created to perform this analysis. The tool is available to the general public, and it is designed to be easily extensible, allowing the addition of other languages.

  8. Effect of Language of Interview on the Validity and Reliability of Psychological Well-Being Scales.

    Science.gov (United States)

    Tran, Thanh V.; Williams, Leon F.

    1994-01-01

    Tested hypothesis that use of different languages in telephone survey could adversely affect cross-cultural comparability of standardized research measures. Of 2,299 persons surveyed in 1988 National Survey of Hispanic Elderly People, 86.6% were interviewed in Spanish and 13.4% were interviewed in English. Factor structure associated with positive…

  9. A Natural Language for AdS/CFT Correlators

    Energy Technology Data Exchange (ETDEWEB)

    Fitzpatrick, A.Liam; /Boston U.; Kaplan, Jared; /SLAC; Penedones, Joao; /Perimeter Inst. Theor. Phys.; Raju, Suvrat; /Harish-Chandra Res. Inst.; van Rees, Balt C.; /YITP, Stony Brook

    2012-02-14

    We provide dramatic evidence that 'Mellin space' is the natural home for correlation functions in CFTs with weakly coupled bulk duals. In Mellin space, CFT correlators have poles corresponding to an OPE decomposition into 'left' and 'right' sub-correlators, in direct analogy with the factorization channels of scattering amplitudes. In the regime where these correlators can be computed by tree level Witten diagrams in AdS, we derive an explicit formula for the residues of Mellin amplitudes at the corresponding factorization poles, and we use the conformal Casimir to show that these amplitudes obey algebraic finite difference equations. By analyzing the recursive structure of our factorization formula we obtain simple diagrammatic rules for the construction of Mellin amplitudes corresponding to tree-level Witten diagrams in any bulk scalar theory. We prove the diagrammatic rules using our finite difference equations. Finally, we show that our factorization formula and our diagrammatic rules morph into the flat space S-Matrix of the bulk theory, reproducing the usual Feynman rules, when we take the flat space limit of AdS/CFT. Throughout we emphasize a deep analogy with the properties of flat space scattering amplitudes in momentum space, which suggests that the Mellin amplitude may provide a holographic definition of the flat space S-Matrix.

  10. Identification of methicillin-resistant Staphylococcus aureus within the Nation’s Veterans Affairs Medical Centers using natural language processing

    Directory of Open Access Journals (Sweden)

    Jones Makoto

    2012-07-01

    Full Text Available Abstract Background Accurate information is needed to direct healthcare systems’ efforts to control methicillin-resistant Staphylococcus aureus (MRSA). Assembling complete and correct microbiology data is vital to understanding and addressing the multiple drug-resistant organisms in our hospitals. Methods Herein, we describe a system that securely gathers microbiology data from the Department of Veterans Affairs (VA) network of databases. Using natural language processing methods, we applied an information extraction process to extract organisms and susceptibilities from the free-text data. We then validated the extraction against independently derived electronic data and expert annotation. Results We estimate that the collected microbiology data are 98.5% complete and that methicillin-resistant Staphylococcus aureus was extracted accurately 99.7% of the time. Conclusions Applying natural language processing methods to microbiology records appears to be a promising way to extract accurate and useful nosocomial pathogen surveillance data. Both scientific inquiry and the data’s reliability will be dependent on the surveillance system’s capability to compare from multiple sources and circumvent systematic error. The dataset constructed and methods used for this investigation could contribute to a comprehensive infectious disease surveillance system or other pressing needs.
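
    The kind of information extraction described here can be sketched with simple patterns; the organism list, drug names and sample report below are invented for illustration and are far more limited than the NLP system used in the study.

      import re

      ORGANISM = re.compile(r"staphylococcus aureus|escherichia coli", re.IGNORECASE)
      SUSCEPTIBILITY = re.compile(
          r"\b(oxacillin|methicillin|vancomycin)\s*[:=]?\s*(resistant|susceptible)\b",
          re.IGNORECASE,
      )

      # Fabricated free-text microbiology report for demonstration only.
      report = """CULTURE: Moderate growth of Staphylococcus aureus.
      SUSCEPTIBILITIES: Oxacillin: Resistant   Vancomycin: Susceptible"""

      organisms = [m.group(0) for m in ORGANISM.finditer(report)]
      results = [(drug.lower(), result[0].upper()) for drug, result in SUSCEPTIBILITY.findall(report)]
      print(organisms)   # ['Staphylococcus aureus']
      print(results)     # [('oxacillin', 'R'), ('vancomycin', 'S')]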

  11. Reconceptualizing the Nature of Goals and Outcomes in Language/s Education

    Science.gov (United States)

    Leung, Constant; Scarino, Angela

    2016-01-01

    Transformations associated with the increasing speed, scale, and complexity of mobilities, together with the information technology revolution, have changed the demography of most countries of the world and brought about accompanying social, cultural, and economic shifts (Heugh, 2013). This complex diversity has changed the very nature of…

  12. Research and Development in Natural Language Understanding as Part of the Strategic Computing Program.

    Science.gov (United States)

    1987-04-01

    facilities. BBN is developing a series of increasingly sophisticated natural language understanding systems which will serve as an integrated interface...Haas, A.R. A Syntactic Theory of Belief and Action. Artificial Intelligence. 1986. Forthcoming. [6] Hinrichs, E. Temporale Anaphora im Englischen

  13. AutoTutor and Family: A Review of 17 Years of Natural Language Tutoring

    Science.gov (United States)

    Nye, Benjamin D.; Graesser, Arthur C.; Hu, Xiangen

    2014-01-01

    AutoTutor is a natural language tutoring system that has produced learning gains across multiple domains (e.g., computer literacy, physics, critical thinking). In this paper, we review the development, key research findings, and systems that have evolved from AutoTutor. First, the rationale for developing AutoTutor is outlined and the advantages…

  14. Speech perception and reading: two parallel modes of understanding language and implications for acquiring literacy naturally.

    Science.gov (United States)

    Massaro, Dominic W

    2012-01-01

    I review 2 seminal research reports published in this journal during its second decade more than a century ago. Given psychology's subdisciplines, they would not normally be reviewed together because one involves reading and the other speech perception. The small amount of interaction between these domains might have limited research and theoretical progress. In fact, the 2 early research reports revealed common processes involved in these 2 forms of language processing. Their illustration of the role of Wundt's apperceptive process in reading and speech perception anticipated descriptions of contemporary theories of pattern recognition, such as the fuzzy logical model of perception. Based on the commonalities between reading and listening, one can question why they have been viewed so differently. It is commonly believed that learning to read requires formal instruction and schooling, whereas spoken language is acquired from birth onward through natural interactions with people who talk. Most researchers and educators believe that spoken language is acquired naturally from birth onward and even prenatally. Learning to read, on the other hand, is not possible until the child has acquired spoken language, reaches school age, and receives formal instruction. If an appropriate form of written text is made available early in a child's life, however, the current hypothesis is that reading will also be learned inductively and emerge naturally, with no significant negative consequences. If this proposal is true, it should soon be possible to create an interactive system, Technology Assisted Reading Acquisition, to allow children to acquire literacy naturally.

  15. The application of natural language processing to augmentative and alternative communication.

    Science.gov (United States)

    Higginbotham, D Jeffery; Lesher, Gregory W; Moulton, Bryan J; Roark, Brian

    2011-01-01

    Significant progress has been made in the application of natural language processing (NLP) to augmentative and alternative communication (AAC), particularly in the areas of interface design and word prediction. This article will survey the current state-of-the-science of NLP in AAC and discuss its future applications for the development of next generation of AAC technology.

  16. Preface to Proceedings of the 12th European Workshop on Natural Language Generation (ENLG 2009)

    NARCIS (Netherlands)

    Krahmer, E.; Krahmer, E.; Theune, Mariet

    We are pleased to present the Proceedings of the 12th European Workshop on Natural Language Generation (ENLG 2009). ENLG 2009 was held in Athens, Greece, as a workshop at the 12th Conference of the European Chapter of the Association for Computational Linguistics (EACL 2009). Following our call, we

  17. On the Thematic Nature of the Subjunctive in the Romance Languages.

    Science.gov (United States)

    Gerzymisch-Arbogast, Heidrun

    1993-01-01

    A theoretical discussion is offered on whether the subjunctive in the Romance languages is by nature thematic, as suggested in previous studies. English and Spanish samples are used to test the hypothesis; one conclusion is that the subjunctive seems to offer speaker-related information and may express the intensity of the speaker's involvement.…

  18. Training Parents to Use the Natural Language Paradigm to Increase Their Autistic Children's Speech.

    Science.gov (United States)

    Laski, Karen E.; And Others

    1988-01-01

    Parents of four nonverbal and four echolalic autistic children, aged five-nine, were trained to increase their children's speech by using the Natural Language Paradigm. Following training, parents increased the frequency with which they required their children to speak, and children increased the frequency of their verbalizations in three…

  19. Modelling the phonotactic structure of natural language words with simple recurrent networks

    NARCIS (Netherlands)

    Stoianov, [No Value; Nerbonne, J; Bouma, H; Coppen, PA; vanHalteren, H; Teunissen, L

    1998-01-01

    Simple Recurrent Networks (SRN) are Neural Network (connectionist) models able to process natural language. Phonotactics concerns the order of symbols in words. We continued an earlier unsuccessful trial to model the phonotactics of Dutch words with SRNs. In order to overcome the previously reported

  20. The International English Language Testing System (IELTS): Its Nature and Development.

    Science.gov (United States)

    Ingram, D. E.

    The nature and development of the recently released International English Language Testing System (IELTS) instrument are described. The test is the result of a joint Australian-British project to develop a new test for use with foreign students planning to study in English-speaking countries. It is expected that the modular instrument will become…

  1. A Qualitative Analysis Framework Using Natural Language Processing and Graph Theory

    Science.gov (United States)

    Tierney, Patrick J.

    2012-01-01

    This paper introduces a method of extending natural language-based processing of qualitative data analysis with the use of a very quantitative tool--graph theory. It is not an attempt to convert qualitative research to a positivist approach with a mathematical black box, nor is it a "graphical solution". Rather, it is a method to help qualitative…
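
    One plain-Python way to combine text coding with a graph-theoretic measure, broadly in the spirit described above rather than the author's actual method, is to build a weighted co-occurrence graph of codes and inspect weighted node degree; the coded segments below are hypothetical.

      from collections import Counter
      from itertools import combinations

      # Hypothetical coded interview segments; each inner list holds the codes
      # assigned to one segment of a transcript.
      coded_segments = [
          ["trust", "communication", "workload"],
          ["communication", "workload"],
          ["trust", "leadership"],
          ["communication", "leadership", "trust"],
      ]

      # Weighted co-occurrence graph: an edge for every pair of codes that
      # appear in the same segment, weighted by how often they co-occur.
      edges = Counter()
      for segment in coded_segments:
          for a, b in combinations(sorted(set(segment)), 2):
              edges[(a, b)] += 1

      # Weighted degree per code: a simple graph-theoretic indicator of centrality.
      degree = Counter()
      for (a, b), w in edges.items():
          degree[a] += w
          degree[b] += w

      print(edges.most_common(3))
      print(degree.most_common())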

  2. Combining Machine Learning and Natural Language Processing to Assess Literary Text Comprehension

    Science.gov (United States)

    Balyan, Renu; McCarthy, Kathryn S.; McNamara, Danielle S.

    2017-01-01

    This study examined how machine learning and natural language processing (NLP) techniques can be leveraged to assess the interpretive behavior that is required for successful literary text comprehension. We compared the accuracy of seven different machine learning classification algorithms in predicting human ratings of student essays about…
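
    A minimal example of the general approach (NLP features feeding a machine learning classifier) is sketched below with scikit-learn; the tiny invented essay snippets and labels stand in for the human-rated essays used in the study, and TF-IDF with logistic regression is only one of many possible classifier choices.

      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.linear_model import LogisticRegression
      from sklearn.pipeline import make_pipeline

      # Toy training data: essay snippets labelled as interpretive (1) or literal (0).
      essays = [
          "The storm mirrors the narrator's inner turmoil and foreshadows loss.",
          "The author uses the river as a symbol of time passing.",
          "The story is about a man who walks to town and buys bread.",
          "First the character wakes up, then he eats, then he leaves.",
      ]
      labels = [1, 1, 0, 0]

      model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
      model.fit(essays, labels)

      print(model.predict(["The storm is a symbol of the narrator's loss."]))   # expected to lean towards 1
      print(model.predict(["First he walks to town and buys bread."]))          # expected to lean towards 0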

  3. Drawing Dynamic Geometry Figures Online with Natural Language for Junior High School Geometry

    Science.gov (United States)

    Wong, Wing-Kwong; Yin, Sheng-Kai; Yang, Chang-Zhe

    2012-01-01

    This paper presents a tool for drawing dynamic geometric figures by understanding the texts of geometry problems. With the tool, teachers and students can construct dynamic geometric figures on a web page by inputting a geometry problem in natural language. First we need to build the knowledge base for understanding geometry problems. With the…

  4. Construct Validity in TOEFL iBT Speaking Tasks: Insights from Natural Language Processing

    Science.gov (United States)

    Kyle, Kristopher; Crossley, Scott A.; McNamara, Danielle S.

    2016-01-01

    This study explores the construct validity of speaking tasks included in the TOEFL iBT (e.g., integrated and independent speaking tasks). Specifically, advanced natural language processing (NLP) tools, MANOVA difference statistics, and discriminant function analyses (DFA) are used to assess the degree to which and in what ways responses to these…

  5. Natural-gas-powered thermoelectricity as a reliability factor in the Brazilian electric sector

    International Nuclear Information System (INIS)

    Fernandes, E.; Oliveira, J.C.S. de; de Oliveira, P.R.; Alonso, P.S.R.

    2008-01-01

    The introduction of natural-gas-powered thermoelectricity into the Brazilian generation sector can be considered a very complex energy, economic, regulatory and institutional revision. Brazil is a country with very specific characteristics in electricity generation, as approximately 80% of the generating capacity is based on hydroelectricity, which creates a strong dependency on rainfall and on the management of water reservoirs. A low rate of investment in the Brazilian Electricity Industry in the period 1995-2000, associated with periods of low rainfall, led to a dramatic lowering of the water stocks in the reservoirs. Given this scenario and the growing supply of natural gas, both from within Brazil and imported, natural gas thermal electric plants became a good option for diversifying the electrical supply system. In spite of the Brazilian Government's efforts to install such plants, the country was faced with severe electricity rationing in 2001. The objective of this work is to show the need to continue with the implementation of natural gas thermal electricity projects, in a manner that allows flexibility and guarantees greater working reliability for the entire Brazilian electricity sector. Taking into account the world trend towards renewable energy, the prospects for the use of biofuels in the Brazilian Energy Matrix and in electrical energy generation are also analyzed. The issue of electric power efficiency in Brazil is also addressed, including its challenges, the strategic proposals arising from Government Programs, and the results achieved so far. The technological constraints on bringing the thermal electric plants on stream are also analyzed. The article concludes with a positive perspective on the use of natural gas as the third pillar of the Brazilian Energy Matrix for the years to come.

  6. Spot market natural gas strategies and reliability dealing with changing industry conditions

    International Nuclear Information System (INIS)

    McClure, D.C.

    1992-01-01

    Many in the energy industry thought the natural gas buying game had finally settled down to a predictable pattern in the 90's. After a tumultuous decade in the 80's they were ready to turn their attention to new challenges such as electricity wheeling and cogeneration. They were wrong. There is plenty of change left in the natural gas industry for the rest of the century. This change will dramatically increase the number of options available to Energy Buyers, giving them new flexibility to design programs that meet goals for cost reduction, supply reliability, and administrative effort. However, with new options also comes the responsibility for choices. Energy Managers of well-designed gas programs will make these choices after careful consideration of the pros and cons of each option. In short, the effective Buyer will develop a natural gas strategy. This report begins by reviewing the basic types of change occurring in the industry, and then discusses some of the varied strategy options available to the Energy Buyer.

  7. Highly Reliable Organizations in the Onshore Natural Gas Sector: An Assessment of Current Practices, Regulatory Frameworks, and Select Case Studies

    Energy Technology Data Exchange (ETDEWEB)

    Logan, Jeffrey S. [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Paranhos, Elizabeth [Energy Innovation Partners, Seattle, WA (United States); Kozak, Tracy G. [Energy Innovation Partners, Seattle, WA (United States); Boyd, William [Univ. of Colorado, Boulder, CO (United States)

    2017-07-31

    This study focuses on onshore natural gas operations and examines the extent to which oil and gas firms have embraced certain organizational characteristics that lead to 'high reliability' - understood here as strong safety and reliability records over extended periods of operation. The key questions that motivated this study include whether onshore oil and gas firms engaged in exploration and production (E&P) and midstream (i.e., natural gas transmission and storage) are implementing practices characteristic of high reliability organizations (HROs) and the extent to which any such practices are being driven by industry innovations and standards and/or regulatory requirements.

  8. The Short International Physical Activity Questionnaire: cross-cultural adaptation, validation and reliability of the Hausa language version in Nigeria.

    Science.gov (United States)

    Oyeyemi, Adewale L; Oyeyemi, Adetoyeje Y; Adegoke, Babatunde O; Oyetoke, Fatima O; Aliyu, Habeeb N; Aliyu, Salamatu U; Rufai, Adamu A

    2011-11-22

    Accurate assessment of physical activity is important in determining the risk for chronic diseases such as cardiovascular disease, stroke, type 2 diabetes, cancer and obesity. The absence of culturally relevant measures in indigenous languages could pose challenges to epidemiological studies on physical activity in developing countries. The purpose of this study was to translate and cross-culturally adapt the Short International Physical Activity Questionnaire (IPAQ-SF) to the Hausa language, and to evaluate the validity and reliability of the Hausa version of IPAQ-SF in Nigeria. The English IPAQ-SF was translated into the Hausa language, synthesized, back translated, and subsequently subjected to expert committee review and pre-testing. The final product (Hausa IPAQ-SF) was tested in a cross-sectional study for concurrent (correlation with the English version) and construct validity, and test-retest reliability in a sample of 102 apparently healthy adults. The Hausa IPAQ-SF has good concurrent validity with Spearman correlation coefficients (ρ) ranging from 0.78 for vigorous activity (min·week⁻¹) to 0.92 for total physical activity (Metabolic Equivalent of Task [MET]-min·week⁻¹), but poor construct validity, with cardiorespiratory fitness (ρ = 0.21, p = 0.01) and body mass index (ρ = 0.22, p = 0.04) significantly correlated with only moderate activity and sitting time (min·week⁻¹), respectively. Reliability was good for vigorous (ICC = 0.73, 95% CI = 0.55-0.84) and total physical activity (ICC = 0.61, 95% CI = 0.47-0.72), but fair for moderate activity (ICC = 0.33, 95% CI = 0.12-0.51), and few meaningful differences were found in the gender and socioeconomic status specific analyses. The Hausa IPAQ-SF has acceptable concurrent validity and test-retest reliability for vigorous-intensity activity, walking, sitting and total physical activity, but demonstrated only fair construct validity for moderate and sitting activities. The Hausa IPAQ-SF can be used for

  9. Designing a reliable leak bio-detection system for natural gas pipelines

    International Nuclear Information System (INIS)

    Batzias, F.A.; Siontorou, C.G.; Spanidis, P.-M.P.

    2011-01-01

    Monitoring of natural gas (NG) pipelines is an important task for economical/safety operation, loss prevention and environmental protection. Timely and reliable leak detection of gas pipeline, therefore, plays a key role in the overall integrity management for the pipeline system. Owing to the various limitations of the currently available techniques and the surveillance area that needs to be covered, the research on new detector systems is still thriving. Biosensors are considered worldwide as a niche technology in the environmental market, since they afford the desired detector capabilities at low cost, provided they have been properly designed/developed and rationally placed/networked/maintained by the aid of operational research techniques. This paper addresses NG leakage surveillance through a robust cooperative/synergistic scheme between biosensors and conventional detector systems; the network is validated in situ and optimized in order to provide reliable information at the required granularity level. The proposed scheme is substantiated through a knowledge based approach and relies on Fuzzy Multicriteria Analysis (FMCA), for selecting the best biosensor design that suits both the target analyte and the operational micro-environment. This approach is illustrated in the design of leak surveying over a pipeline network in Greece.

  10. Designing a reliable leak bio-detection system for natural gas pipelines

    Energy Technology Data Exchange (ETDEWEB)

    Batzias, F.A., E-mail: fbatzi@unipi.gr [Univ. Piraeus, Dept. Industrial Management and Technology, Karaoli and Dimitriou 80, 18534 Piraeus (Greece); Siontorou, C.G., E-mail: csiontor@unipi.gr [Univ. Piraeus, Dept. Industrial Management and Technology, Karaoli and Dimitriou 80, 18534 Piraeus (Greece); Spanidis, P.-M.P., E-mail: pspani@asprofos.gr [Asprofos Engineering S.A, El. Venizelos 284, 17675 Kallithea (Greece)

    2011-02-15

    Monitoring of natural gas (NG) pipelines is an important task for economical/safety operation, loss prevention and environmental protection. Timely and reliable leak detection of gas pipeline, therefore, plays a key role in the overall integrity management for the pipeline system. Owing to the various limitations of the currently available techniques and the surveillance area that needs to be covered, the research on new detector systems is still thriving. Biosensors are considered worldwide as a niche technology in the environmental market, since they afford the desired detector capabilities at low cost, provided they have been properly designed/developed and rationally placed/networked/maintained by the aid of operational research techniques. This paper addresses NG leakage surveillance through a robust cooperative/synergistic scheme between biosensors and conventional detector systems; the network is validated in situ and optimized in order to provide reliable information at the required granularity level. The proposed scheme is substantiated through a knowledge based approach and relies on Fuzzy Multicriteria Analysis (FMCA), for selecting the best biosensor design that suits both the target analyte and the operational micro-environment. This approach is illustrated in the design of leak surveying over a pipeline network in Greece.

  11. Designing a reliable leak bio-detection system for natural gas pipelines.

    Science.gov (United States)

    Batzias, F A; Siontorou, C G; Spanidis, P-M P

    2011-02-15

    Monitoring of natural gas (NG) pipelines is an important task for economical/safety operation, loss prevention and environmental protection. Timely and reliable leak detection of gas pipeline, therefore, plays a key role in the overall integrity management for the pipeline system. Owing to the various limitations of the currently available techniques and the surveillance area that needs to be covered, the research on new detector systems is still thriving. Biosensors are considered worldwide as a niche technology in the environmental market, since they afford the desired detector capabilities at low cost, provided they have been properly designed/developed and rationally placed/networked/maintained by the aid of operational research techniques. This paper addresses NG leakage surveillance through a robust cooperative/synergistic scheme between biosensors and conventional detector systems; the network is validated in situ and optimized in order to provide reliable information at the required granularity level. The proposed scheme is substantiated through a knowledge based approach and relies on Fuzzy Multicriteria Analysis (FMCA), for selecting the best biosensor design that suits both the target analyte and the operational micro-environment. This approach is illustrated in the design of leak surveying over a pipeline network in Greece. Copyright © 2010 Elsevier B.V. All rights reserved.

  12. Unidimensionality and reliability under Mokken scaling of the Dutch language version of the SF-36

    NARCIS (Netherlands)

    Heijden, P.G.M. van der; Buuren, S. van; Fekkes, M.; Radder, J.; Verrips, E.

    2003-01-01

    The sub-scales of the SF-36 in the Dutch National Study are investigated with respect to unidimensionality and reliability. It is argued that these properties deserve separate treatment. For unidimensionality we use a non-parametric model from item response theory, called the Mokken scaling model,

  13. Adaptation of Organizational Justice in Sport Scale into Turkish Language: Validity and Reliability Study

    Science.gov (United States)

    Sayin, Ayfer; Sahin, Mustafa Yasar

    2017-01-01

    The present study aimed to provide a Turkish adaptation of the Organizational Justice in Sport Scale and perform reliability and validity studies. Answers provided by 260 participants who work as football, male basketball and female basketball coaches in the National Collegiate Athletic Association (NCAA) were analysed using the original scale that…

  14. Toward a Common Language for Measuring Patient Mobility in the Hospital: Reliability and Construct Validity of Interprofessional Mobility Measures.

    Science.gov (United States)

    Hoyer, Erik H; Young, Daniel L; Klein, Lisa M; Kreif, Julie; Shumock, Kara; Hiser, Stephanie; Friedman, Michael; Lavezza, Annette; Jette, Alan; Chan, Kitty S; Needham, Dale M

    2018-02-01

    The lack of common language among interprofessional inpatient clinical teams is an important barrier to achieving inpatient mobilization. In The Johns Hopkins Hospital, the Activity Measure for Post-Acute Care (AM-PAC) Inpatient Mobility Short Form (IMSF), also called "6-Clicks," and the Johns Hopkins Highest Level of Mobility (JH-HLM) are part of routine clinical practice. The measurement characteristics of these tools when used by both nurses and physical therapists for interprofessional communication or assessment are unknown. The purposes of this study were to evaluate the reliability and minimal detectable change of AM-PAC IMSF and JH-HLM when completed by nurses and physical therapists and to evaluate the construct validity of both measures when used by nurses. A prospective evaluation of a convenience sample was used. The test-retest reliability and the interrater reliability of AM-PAC IMSF and JH-HLM for inpatients in the neuroscience department (n = 118) of an academic medical center were evaluated. Each participant was independently scored twice by a team of 2 nurses and 1 physical therapist; a total of 4 physical therapists and 8 nurses participated in reliability testing. In a separate inpatient study protocol (n = 69), construct validity was evaluated via an assessment of convergent validity with other measures of function (grip strength, Katz Activities of Daily Living Scale, 2-minute walk test, 5-times sit-to-stand test) used by 5 nurses. The test-retest reliability values (intraclass correlation coefficients) for physical therapists and nurses were 0.91 and 0.97, respectively, for AM-PAC IMSF and 0.94 and 0.95, respectively, for JH-HLM. The interrater reliability values (intraclass correlation coefficients) between physical therapists and nurses were 0.96 for AM-PAC IMSF and 0.99 for JH-HLM. Construct validity (Spearman correlations) ranged from 0.25 between JH-HLM and right-hand grip strength to 0.80 between AM-PAC IMSF and the Katz Activities of
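
    For readers unfamiliar with the statistic, a two-way random-effects ICC(2,1) of the Shrout and Fleiss type can be computed from a subjects-by-raters score matrix as sketched below; the small score matrix is fabricated for illustration and does not reproduce the study data.

      import numpy as np

      def icc_2_1(scores):
          """ICC(2,1): two-way random effects, absolute agreement, single measurement
          (Shrout & Fleiss), for an n_subjects x n_raters matrix of scores."""
          x = np.asarray(scores, dtype=float)
          n, k = x.shape
          grand = x.mean()
          row_means = x.mean(axis=1)
          col_means = x.mean(axis=0)
          msr = k * ((row_means - grand) ** 2).sum() / (n - 1)      # between subjects
          msc = n * ((col_means - grand) ** 2).sum() / (k - 1)      # between raters
          sse = ((x - row_means[:, None] - col_means[None, :] + grand) ** 2).sum()
          mse = sse / ((n - 1) * (k - 1))                           # residual
          return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

      # Hypothetical mobility scores: 6 patients rated by a nurse and a physical therapist.
      scores = [[4, 4], [6, 5], [8, 8], [3, 3], [7, 6], [5, 5]]
      print(round(icc_2_1(scores), 2))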

  15. Language related differences of the sustained response evoked by natural speech sounds.

    Directory of Open Access Journals (Sweden)

    Christina Siu-Dschu Fan

    Full Text Available In tonal languages, such as Mandarin Chinese, the pitch contour of vowels discriminates lexical meaning, which is not the case in non-tonal languages such as German. Recent data provide evidence that pitch processing is influenced by language experience. However, there are still many open questions concerning the representation of such phonological and language-related differences at the level of the auditory cortex (AC). Using magnetoencephalography (MEG), we recorded transient and sustained auditory evoked fields (AEF) in native Chinese and German speakers to investigate language related phonological and semantic aspects in the processing of acoustic stimuli. AEF were elicited by spoken meaningful and meaningless syllables, by vowels, and by a French horn tone. Speech sounds were recorded from a native speaker and showed frequency-modulations according to the pitch-contours of Mandarin. The sustained field (SF) evoked by natural speech signals was significantly larger for Chinese than for German listeners. In contrast, the SF elicited by a horn tone was not significantly different between groups. Furthermore, the SF of Chinese subjects was larger when evoked by meaningful syllables compared to meaningless ones, but there was no significant difference regarding whether vowels were part of the Chinese phonological system or not. Moreover, the N100m gave subtle but clear evidence that for Chinese listeners other factors than purely physical properties play a role in processing meaningful signals. These findings show that the N100 and the SF generated in Heschl's gyrus are influenced by language experience, which suggests that AC activity related to specific pitch contours of vowels is influenced in a top-down fashion by higher, language related areas. Such interactions are in line with anatomical findings and neuroimaging data, as well as with the dual-stream model of language of Hickok and Poeppel that highlights the close and reciprocal interaction

  16. Language related differences of the sustained response evoked by natural speech sounds.

    Science.gov (United States)

    Fan, Christina Siu-Dschu; Zhu, Xingyu; Dosch, Hans Günter; von Stutterheim, Christiane; Rupp, André

    2017-01-01

    In tonal languages, such as Mandarin Chinese, the pitch contour of vowels discriminates lexical meaning, which is not the case in non-tonal languages such as German. Recent data provide evidence that pitch processing is influenced by language experience. However, there are still many open questions concerning the representation of such phonological and language-related differences at the level of the auditory cortex (AC). Using magnetoencephalography (MEG), we recorded transient and sustained auditory evoked fields (AEF) in native Chinese and German speakers to investigate language related phonological and semantic aspects in the processing of acoustic stimuli. AEF were elicited by spoken meaningful and meaningless syllables, by vowels, and by a French horn tone. Speech sounds were recorded from a native speaker and showed frequency-modulations according to the pitch-contours of Mandarin. The sustained field (SF) evoked by natural speech signals was significantly larger for Chinese than for German listeners. In contrast, the SF elicited by a horn tone was not significantly different between groups. Furthermore, the SF of Chinese subjects was larger when evoked by meaningful syllables compared to meaningless ones, but there was no significant difference regarding whether vowels were part of the Chinese phonological system or not. Moreover, the N100m gave subtle but clear evidence that for Chinese listeners other factors than purely physical properties play a role in processing meaningful signals. These findings show that the N100 and the SF generated in Heschl's gyrus are influenced by language experience, which suggests that AC activity related to specific pitch contours of vowels is influenced in a top-down fashion by higher, language related areas. Such interactions are in line with anatomical findings and neuroimaging data, as well as with the dual-stream model of language of Hickok and Poeppel that highlights the close and reciprocal interaction between

  17. Mathematics and the Laws of Nature Developing the Language of Science (Revised Edition)

    CERN Document Server

    Tabak, John

    2011-01-01

    Mathematics and the Laws of Nature, Revised Edition describes the evolution of the idea that nature can be described in the language of mathematics. Colorful chapters explore the earliest attempts to apply deductive methods to the study of the natural world. This revised resource goes on to examine the development of classical conservation laws, including the conservation of momentum, the conservation of mass, and the conservation of energy. Chapters have been updated and revised to reflect recent information, including the mathematical pioneers who introduced new ideas about what it meant to

  18. Language

    DEFF Research Database (Denmark)

    Sanden, Guro Refsum

    2016-01-01

    Purpose: – The purpose of this paper is to analyse the consequences of globalisation in the area of corporate communication, and investigate how language may be managed as a strategic resource. Design/methodology/approach: – A review of previous studies on the effects of globalisation on corporate communication and the implications of language management initiatives in international business. Findings: – Efficient language management can turn language into a strategic resource. Language needs analyses, i.e. linguistic auditing/language check-ups, can be used to determine the language situation of a company. Language policies and/or strategies can be used to regulate a company’s internal modes of communication. Language management tools can be deployed to address existing and expected language needs. Continuous feedback from the front line ensures strategic learning and reduces the risk of suboptimal...

  19. The Nature of the Language Faculty and Its Implications for Evolution of Language (Reply to Fitch, Hauser, and Chomsky)

    Science.gov (United States)

    Jackendoff, Ray; Pinker, Steven

    2005-01-01

    In a continuation of the conversation with Fitch, Chomsky, and Hauser on the evolution of language, we examine their defense of the claim that the uniquely human, language-specific part of the language faculty (the ''narrow language faculty'') consists only of recursion, and that this part cannot be considered an adaptation to communication. We…

  20. Reliability and validity of the CogState battery Chinese language version in schizophrenia.

    Directory of Open Access Journals (Sweden)

    Na Zhong

    Full Text Available BACKGROUND: Cognitive impairment in patients with schizophrenia is a core symptom of this disease. The computerized CogState Battery (CSB) has been used to detect seven of the most common cognitive domains in schizophrenia. The aim of this study was to examine the reliability and validity of the Chinese version of the CSB (CSB-C) in Chinese patients with schizophrenia. METHODOLOGY/PRINCIPAL FINDINGS: Sixty Chinese patients with schizophrenia and 58 age, sex, and education matched healthy controls were enrolled. All subjects completed the CSB-C and the Repeatable Battery for the Assessment of Neuropsychological Status (RBANS). To examine the test-retest reliability of CSB-C, we tested 33 healthy controls twice, at a one month interval. The Cronbach α value of CSB-C in patients was 0.81. The test-retest correlation coefficients of the Two Back Task, Groton Maze Learning Task, Social Emotional Cognition Task, and Continuous Paired Association Learning Task were between 0.39 and 0.62 (p<0.01) in healthy controls. The composite scores and all subscores for the CSB-C in patients were significantly (p<0.01) lower than those of healthy controls. Furthermore, composite scores for patients on the RBANS were also significantly lower than those of healthy controls. Interestingly, there was a positive correlation (r = 0.544, p<0.001) between the composite scores on CSB-C and RBANS for patients. Additionally, in the attention and memory cognitive domains, corresponding subsets from the two batteries correlated significantly (p<0.05). Moreover, factor analysis showed a two-factor model, consisting of speed, memory and reasoning. CONCLUSIONS/SIGNIFICANCE: The CSB-C shows good reliability and validity in measuring the broad cognitive domains of schizophrenia in affected Chinese patients. Therefore, the CSB-C can be used as a cognitive battery, to assess the therapeutic effects of potential cognitive-enhancing agents in this cohort.
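
    Cronbach's α of the kind reported here can be computed directly from a subjects-by-items score matrix, as in the short sketch below; the score matrix is invented for illustration.

      import numpy as np

      def cronbach_alpha(item_scores):
          """Cronbach's alpha for an n_subjects x n_items score matrix."""
          x = np.asarray(item_scores, dtype=float)
          n_items = x.shape[1]
          item_variances = x.var(axis=0, ddof=1)
          total_variance = x.sum(axis=1).var(ddof=1)
          return (n_items / (n_items - 1)) * (1 - item_variances.sum() / total_variance)

      # Hypothetical scores of 5 participants on 4 subtests of a cognitive battery.
      scores = [
          [10, 12, 11, 9],
          [14, 15, 13, 14],
          [8, 9, 9, 8],
          [12, 11, 12, 13],
          [15, 14, 16, 15],
      ]
      print(round(cronbach_alpha(scores), 2))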

  1. A Cognitive Neural Architecture Able to Learn and Communicate through Natural Language.

    Directory of Open Access Journals (Sweden)

    Bruno Golosio

    Full Text Available Communicative interactions involve a kind of procedural knowledge that is used by the human brain for processing verbal and nonverbal inputs and for language production. Although considerable work has been done on modeling human language abilities, it has been difficult to bring them together into a comprehensive tabula rasa system compatible with current knowledge of how verbal information is processed in the brain. This work presents a cognitive system, entirely based on a large-scale neural architecture, which was developed to shed light on the procedural knowledge involved in language elaboration. The main component of this system is the central executive, which is a supervising system that coordinates the other components of the working memory. In our model, the central executive is a neural network that takes as input the neural activation states of the short-term memory and yields as output mental actions, which control the flow of information among the working memory components through neural gating mechanisms. The proposed system is capable of learning to communicate through natural language starting from tabula rasa, without any a priori knowledge of the structure of phrases, the meaning of words, or the role of the different classes of words, simply by interacting with a human through a text-based interface, using an open-ended incremental learning process. It is able to learn nouns, verbs, adjectives, pronouns and other word classes, and to use them in expressive language. The model was validated on a corpus of 1587 input sentences, based on the literature on early language assessment, at the level of about a 4-year-old child, and produced 521 output sentences, expressing a broad range of language processing functionalities.

  2. Coupling ontology driven semantic representation with multilingual natural language generation for tuning international terminologies.

    Science.gov (United States)

    Rassinoux, Anne-Marie; Baud, Robert H; Rodrigues, Jean-Marie; Lovis, Christian; Geissbühler, Antoine

    2007-01-01

    The importance of clinical communication between providers, consumers and others, as well as the requisite for computer interoperability, strengthens the need for sharing common accepted terminologies. Under the directives of the World Health Organization (WHO), an approach is currently being conducted in Australia to adopt a standardized terminology for medical procedures that is intended to become an international reference. In order to achieve such a standard, a collaborative approach is adopted, in line with the successful experiment conducted for the development of the new French coding system CCAM. Different coding centres are involved in setting up a semantic representation of each term using a formal ontological structure expressed through a logic-based representation language. From this language-independent representation, multilingual natural language generation (NLG) is performed to produce noun phrases in various languages that are further compared for consistency with the original terms. Outcomes are presented for the assessment of the International Classification of Health Interventions (ICHI) and its translation into Portuguese. The initial results clearly emphasize the feasibility and cost-effectiveness of the proposed method for handling both a different classification and an additional language. NLG tools, based on ontology driven semantic representation, facilitate the discovery of ambiguous and inconsistent terms, and, as such, should be promoted for establishing coherent international terminologies.
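
    The principle of generating language-specific phrases from a shared, language-independent representation can be sketched with simple templates; the semantic frame, lexicons and templates below are hypothetical and far simpler than the ontology-driven generation used for CCAM and ICHI.

      # A language-independent representation of one procedure term, expressed as a
      # small semantic frame (hypothetical example).
      frame = {"action": "excision", "anatomy": "appendix", "approach": "laparoscopic"}

      # Per-language lexicons and a noun-phrase template (illustrative only).
      LEXICON = {
          "en": {"excision": "excision", "appendix": "appendix", "laparoscopic": "laparoscopic",
                 "template": "{approach} {action} of the {anatomy}"},
          "pt": {"excision": "excisão", "appendix": "apêndice", "laparoscopic": "laparoscópica",
                 "template": "{action} {approach} do {anatomy}"},
      }

      def generate(frame, lang):
          """Realize the frame as a noun phrase in the requested language."""
          lex = LEXICON[lang]
          slots = {k: lex.get(v, v) for k, v in frame.items()}
          return lex["template"].format(**slots).capitalize()

      print(generate(frame, "en"))   # Laparoscopic excision of the appendix
      print(generate(frame, "pt"))   # Excisão laparoscópica do apêndice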

  3. reliability reliability

    African Journals Online (AJOL)

    eobe

    Corresponding author, Tel: +234-703... RELIABILITY ... V given by the code of practice. However, checks must ... an optimization procedure over the failure domain F corresponding ... of Concrete Members based on Utility Theory, Technical ...

  4. Harnessing Biomedical Natural Language Processing Tools to Identify Medicinal Plant Knowledge from Historical Texts.

    Science.gov (United States)

    Sharma, Vivekanand; Law, Wayne; Balick, Michael J; Sarkar, Indra Neil

    2017-01-01

    The growing amount of data describing historical medicinal uses of plants from digitization efforts provides the opportunity to develop systematic approaches for identifying potential plant-based therapies. However, cataloguing plant use information from natural language text is a challenging task for ethnobotanists. To date, there has been only limited adoption of informatics approaches for supporting the identification of ethnobotanical information associated with medicinal uses. This study explored the feasibility of using biomedical terminologies and natural language processing approaches for extracting relevant plant-associated therapeutic use information from the historical biodiversity literature collection available from the Biodiversity Heritage Library. The results from this preliminary study suggest that informatics methods have potential utility for identifying medicinal plant knowledge from digitized resources, and they highlight opportunities for improvement.

  5. Using Open Geographic Data to Generate Natural Language Descriptions for Hydrological Sensor Networks.

    Science.gov (United States)

    Molina, Martin; Sanchez-Soriano, Javier; Corcho, Oscar

    2015-07-03

    Providing descriptions of isolated sensors and sensor networks in natural language, understandable by the general public, is useful to help users find relevant sensors and analyze sensor data. In this paper, we discuss the feasibility of using geographic knowledge from public databases available on the Web (such as OpenStreetMap, Geonames, or DBpedia) to automatically construct such descriptions. We present a general method that uses such information to generate sensor descriptions in natural language. The results of the evaluation of our method in a hydrologic national sensor network showed that this approach is feasible and capable of generating adequate sensor descriptions with a lower development effort compared to other approaches. In the paper we also analyze certain problems that we found in public databases (e.g., heterogeneity, non-standard use of labels, or rigid search methods) and their impact in the generation of sensor descriptions.
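    To make the generation step concrete, here is a minimal template-based sketch that turns already-retrieved geographic context (for example, the nearest populated place from GeoNames and the river from OpenStreetMap) into a one-sentence description. The field names, values and template are illustrative assumptions, not the authors' method.

```python
# Hypothetical, pre-retrieved geographic context for one hydrological sensor.
sensor = {
    "id": "AR-042",
    "kind": "water-level gauge",
    "river": "Ebro",            # e.g. from OpenStreetMap waterway relations
    "place": "Zaragoza",        # e.g. nearest populated place from GeoNames
    "distance_km": 3.2,
    "region": "Aragon",
}

TEMPLATE = (
    "Sensor {id} is a {kind} on the river {river}, "
    "about {distance_km:.1f} km from {place} ({region})."
)

def describe(s: dict) -> str:
    """Fill a natural-language template from structured geographic data."""
    return TEMPLATE.format(**s)

print(describe(sensor))
```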

  6. An ontology model for nursing narratives with natural language generation technology.

    Science.gov (United States)

    Min, Yul Ha; Park, Hyeoun-Ae; Jeon, Eunjoo; Lee, Joo Yun; Jo, Soo Jung

    2013-01-01

    The purpose of this study was to develop an ontology model to generate nursing narratives that are as natural as human language from the entity-attribute-value triplets of a detailed clinical model using natural language generation technology. The model was based on the types of information and the documentation time of the information along the nursing process. The types of information are data characterizing the patient status, inferences made by the nurse from the patient data, and nursing actions selected by the nurse to change the patient status. This information was linked to the nursing process based on the time of documentation. We describe a case study illustrating the application of this model in an acute-care setting. The proposed model provides a strategy for designing an electronic nursing record system.

  7. BT-Nurse: computer generation of natural language shift summaries from complex heterogeneous medical data.

    Science.gov (United States)

    Hunter, James; Freer, Yvonne; Gatt, Albert; Reiter, Ehud; Sripada, Somayajulu; Sykes, Cindy; Westwater, Dave

    2011-01-01

    The BT-Nurse system uses data-to-text technology to automatically generate a natural language nursing shift summary in a neonatal intensive care unit (NICU). The summary is solely based on data held in an electronic patient record system; no additional data entry is required. BT-Nurse was tested for two months in the Royal Infirmary of Edinburgh NICU. Nurses were asked to rate the understandability, accuracy, and helpfulness of the computer-generated summaries; they were also asked for free-text comments about the summaries. The nurses found the majority of the summaries to be understandable, accurate, and helpful (p < 0.001); they also identified a number of problems with the computer-generated summaries. In conclusion, natural language NICU shift summaries can be automatically generated from an electronic patient record, but our proof-of-concept software needs considerable additional development work before it can be deployed.

  8. Using Open Geographic Data to Generate Natural Language Descriptions for Hydrological Sensor Networks

    Directory of Open Access Journals (Sweden)

    Martin Molina

    2015-07-01

    Full Text Available Providing descriptions of isolated sensors and sensor networks in natural language, understandable by the general public, is useful to help users find relevant sensors and analyze sensor data. In this paper, we discuss the feasibility of using geographic knowledge from public databases available on the Web (such as OpenStreetMap, Geonames, or DBpedia) to automatically construct such descriptions. We present a general method that uses such information to generate sensor descriptions in natural language. The results of the evaluation of our method in a hydrologic national sensor network showed that this approach is feasible and capable of generating adequate sensor descriptions with a lower development effort compared to other approaches. In the paper we also analyze certain problems that we found in public databases (e.g., heterogeneity, non-standard use of labels, or rigid search methods) and their impact in the generation of sensor descriptions.

  9. Natural language processing-based COTS software and related technologies survey.

    Energy Technology Data Exchange (ETDEWEB)

    Stickland, Michael G.; Conrad, Gregory N.; Eaton, Shelley M.

    2003-09-01

    Natural language processing-based knowledge management software, traditionally developed for security organizations, is now becoming commercially available. An informal survey was conducted to discover and examine current NLP and related technologies and potential applications for information retrieval, information extraction, summarization, categorization, terminology management, link analysis, and visualization for possible implementation at Sandia National Laboratories. This report documents our current understanding of the technologies, lists software vendors and their products, and identifies potential applications of these technologies.

  10. Medical subdomain classification of clinical notes using a machine learning-based natural language processing approach

    OpenAIRE

    Weng, Wei-Hung; Wagholikar, Kavishwar B.; McCray, Alexa T.; Szolovits, Peter; Chueh, Henry C.

    2017-01-01

    Background The medical subdomain of a clinical note, such as cardiology or neurology, is useful content-derived metadata for developing machine learning downstream applications. To classify the medical subdomain of a note accurately, we have constructed a machine learning-based natural language processing (NLP) pipeline and developed medical subdomain classifiers based on the content of the note. Methods We constructed the pipeline using the clinical ...

  11. Human Computer Collaboration at the Edge: Enhancing Collective Situation Understanding with Controlled Natural Language

    Science.gov (United States)

    2016-09-06

    conversational agent with information exchange disabled until the end of the experiment run. The meaning of the indicator in the top-right of the agent... Human Computer Collaboration at the Edge: Enhancing Collective Situation Understanding with Controlled Natural Language. Alun Preece, William... (email: PreeceAD@cardiff.ac.uk); Emerging Technology Services, IBM United Kingdom Ltd, Hursley Park, Winchester, UK; US Army Research Laboratory, Human

  12. Laboratory process control using natural language commands from a personal computer

    Science.gov (United States)

    Will, Herbert A.; Mackin, Michael A.

    1989-01-01

    PC software is described which provides flexible natural language process control capability with an IBM PC or compatible machine. Hardware requirements include the PC, and suitable hardware interfaces to all controlled devices. Software required includes the Microsoft Disk Operating System (MS-DOS) operating system, a PC-based FORTRAN-77 compiler, and user-written device drivers. Instructions for use of the software are given as well as a description of an application of the system.

  13. A reliability analysis of a natural-gas pressure-regulating installation

    Energy Technology Data Exchange (ETDEWEB)

    Gerbec, Marko, E-mail: marko.gerbec@ijs.s [Jozef Stefan Institute, Jamova 39, 1000 Ljubljana (Slovenia)

    2010-11-15

    A case study involving analyses of the operability, reliability and availability was made for a selected, typical, high-pressure, natural-gas, pressure-regulating installation (PRI). The study was commissioned by the national operator of the natural-gas, transmission-pipeline network for the purpose of validating the existing operability and maintenance practices and policies. The study involved a failure-risk analysis (HAZOP) of the selected typical installation, retrieval and analysis of the available corrective maintenance data for the PRI's equipment at the network level in order to obtain the failure rates followed by an elaboration of the quantitative fault trees. Thus, both operator-specific and generic literature data on equipment failure rates were used. The results obtained show that two failure scenarios need to be considered: the first is related to the PRI's failure to provide gas to the consumer(s) due to a low-pressure state and the second is related to a failure of the gas pre-heating at the high-pressure reduction stage, leading to a low temperature (a non-critical, but unfavorable, PRI state). Related to the first scenario, the most important cause of failure was found to be a transient pressure disturbance back from the consumer side. The network's average PRI failure frequency was assessed to be about once per 32 years, and the average unavailability to be about 4 minutes per year (the confidence intervals were also assessed). Based on the results obtained, some improvements to the monitoring of the PRI are proposed.

  14. A reliability analysis of a natural-gas pressure-regulating installation

    International Nuclear Information System (INIS)

    Gerbec, Marko

    2010-01-01

    A case study involving analyses of the operability, reliability and availability was made for a selected, typical, high-pressure, natural-gas, pressure-regulating installation (PRI). The study was commissioned by the national operator of the natural-gas, transmission-pipeline network for the purpose of validating the existing operability and maintenance practices and policies. The study involved a failure-risk analysis (HAZOP) of the selected typical installation, retrieval and analysis of the available corrective maintenance data for the PRI's equipment at the network level in order to obtain the failure rates followed by an elaboration of the quantitative fault trees. Thus, both operator-specific and generic literature data on equipment failure rates were used. The results obtained show that two failure scenarios need to be considered: the first is related to the PRI's failure to provide gas to the consumer(s) due to a low-pressure state and the second is related to a failure of the gas pre-heating at the high-pressure reduction stage, leading to a low temperature (a non-critical, but unfavorable, PRI state). Related to the first scenario, the most important cause of failure was found to be a transient pressure disturbance back from the consumer side. The network's average PRI failure frequency was assessed to be about once per 32 years, and the average unavailability to be about 4 minutes per year (the confidence intervals were also assessed). Based on the results obtained, some improvements to the monitoring of the PRI are proposed.

  15. Quantization, Frobenius and Bi algebras from the Categorical Framework of Quantum Mechanics to Natural Language Semantics

    Science.gov (United States)

    Sadrzadeh, Mehrnoosh

    2017-07-01

    Compact Closed categories and Frobenius and Bi algebras have been applied to model and reason about Quantum protocols. The same constructions have also been applied to reason about natural language semantics under the name: ``categorical distributional compositional'' semantics, or in short, the ``DisCoCat'' model. This model combines the statistical vector models of word meaning with the compositional models of grammatical structure. It has been applied to natural language tasks such as disambiguation, paraphrasing and entailment of phrases and sentences. The passage from the grammatical structure to vectors is provided by a functor, similar to the Quantization functor of Quantum Field Theory. The original DisCoCat model only used compact closed categories. Later, Frobenius algebras were added to it to model long-distance dependencies such as relative pronouns. Recently, bialgebras have been added to the pack to reason about quantifiers. This paper reviews these constructions and their application to natural language semantics. We go over the theory and present some of the core experimental results.
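    As a small numerical illustration of the compositional idea (and not of the full categorical machinery), the sketch below treats nouns as vectors, a transitive verb as a matrix over the noun space, and composes a sentence vector by contraction using the common "copy-subject" instantiation; the 2-dimensional vectors are made up.

```python
import numpy as np

# Toy 2-d noun space; values are invented, not from a trained model.
dogs = np.array([0.9, 0.1])
cats = np.array([0.2, 0.8])
meat = np.array([0.7, 0.3])

# A transitive verb as a matrix over the noun space (an order-2 tensor).
chase = np.array([[0.8, 0.1],
                  [0.2, 0.9]])

def sentence_vector(subj, verb, obj):
    """'Copy-subject' composition: contract the verb with the object,
    then copy the subject dimension with an element-wise product."""
    return subj * (verb @ obj)

s1 = sentence_vector(dogs, chase, cats)
s2 = sentence_vector(cats, chase, meat)

# Cosine similarity between the two sentence meanings.
cos = s1 @ s2 / (np.linalg.norm(s1) * np.linalg.norm(s2))
print(s1.round(3), s2.round(3), round(float(cos), 3))
```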

  16. Quantization, Frobenius and Bi Algebras from the Categorical Framework of Quantum Mechanics to Natural Language Semantics

    Directory of Open Access Journals (Sweden)

    Mehrnoosh Sadrzadeh

    2017-07-01

    Full Text Available Compact Closed categories and Frobenius and Bi algebras have been applied to model and reason about Quantum protocols. The same constructions have also been applied to reason about natural language semantics under the name: “categorical distributional compositional” semantics, or in short, the “DisCoCat” model. This model combines the statistical vector models of word meaning with the compositional models of grammatical structure. It has been applied to natural language tasks such as disambiguation, paraphrasing and entailment of phrases and sentences. The passage from the grammatical structure to vectors is provided by a functor, similar to the Quantization functor of Quantum Field Theory. The original DisCoCat model only used compact closed categories. Later, Frobenius algebras were added to it to model long-distance dependencies such as relative pronouns. Recently, bialgebras have been added to the pack to reason about quantifiers. This paper reviews these constructions and their application to natural language semantics. We go over the theory and present some of the core experimental results.

  17. The Modified Low Back Pain Disability Questionnaire: Reliability, Validity, and Responsiveness of a Dutch Language Version.

    Science.gov (United States)

    Denteneer, Lenie; Van Daele, Ulrike; Truijen, Steven; De Hertogh, Willem; Meirte, Jill; Deckers, Kristiaan; Stassijns, Gaetane

    2018-03-01

    Cross-sectional study. The goal of this study is to translate the English version of the Modified Low Back Pain Disability Questionnaire (MDQ) into a Dutch version and investigate its clinimetric properties for patients with nonspecific chronic low back pain (CLBP). Fritz et al (2001) developed a modified version of the Oswestry Disability Questionnaire (ODI) to assess functional status and named it the MDQ. In this version, a question regarding employment and homemaking ability was substituted for the question related to sex life. Good clinimetric properties for the MDQ were identified, but until now it was not clear whether the clinimetric properties of the MDQ would change if it was translated into a Dutch version. Translation of the MDQ into Dutch was done in 4 steps. Test-retest reliability was investigated using the intraclass correlation coefficient (ICC) model. Validity was calculated using Pearson correlations and a 2-way analysis of variance for repeated measures. Finally, responsiveness was calculated with the area under the curve (AUC), minimal detectable change (MDC), and the standardized response mean (SRM). A total of 80 completed questionnaires were collected in 3 different hospitals, and a total of 43 patients finished a 9-week intervention period, completing the retest. Test-retest reliability was excellent, with an ICC of 0.89 (95% confidence interval [CI], 0.74-0.95). Confirming the convergent validity, the MDQ satisfied all predefined hypotheses (r = -0.65 to 0.69; P = 0.01-0.00), and good results for construct validity were found (P = 0.02). The MDQ had an AUC of 0.64 (95% CI, 0.47-0.81), an MDC of 8.80 points, and an SRM of 0.65. The Dutch version of the MDQ shows good clinimetric properties and is usable in the assessment of the functional status of Dutch-speaking patients with nonspecific CLBP. Level of evidence: 3.
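    For readers unfamiliar with the reliability statistic reported above, the sketch below computes an intraclass correlation from a subjects-by-sessions score matrix. The abstract does not state which ICC model was used, so the two-way random-effects, absolute-agreement, single-measures form ICC(2,1) and the toy data are assumptions.

```python
import numpy as np

def icc_2_1(scores: np.ndarray) -> float:
    """ICC(2,1): two-way random effects, absolute agreement, single measure.

    `scores` has shape (n_subjects, k_sessions), e.g. test and retest columns.
    """
    n, k = scores.shape
    grand = scores.mean()
    row_means = scores.mean(axis=1)      # per-subject means
    col_means = scores.mean(axis=0)      # per-session means

    ss_rows = k * ((row_means - grand) ** 2).sum()
    ss_cols = n * ((col_means - grand) ** 2).sum()
    ss_error = ((scores - grand) ** 2).sum() - ss_rows - ss_cols

    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_error = ss_error / ((n - 1) * (k - 1))

    return (ms_rows - ms_error) / (
        ms_rows + (k - 1) * ms_error + k * (ms_cols - ms_error) / n
    )

# Toy test-retest scores for 5 patients (not the study's data).
test_retest = np.array([[40, 42], [22, 25], [35, 33], [50, 48], [28, 30]], dtype=float)
print(round(icc_2_1(test_retest), 2))
```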

  18. Research on the reliability of measurement of natural radioactive nuclide concentration of U-238

    Energy Technology Data Exchange (ETDEWEB)

    Cha, Seok Ki; Kim, Gee Hyun [Dept. of Nuclear engineering, Univ. of SeJong, Seoul (Korea, Republic of); Joo, Sun Dong; Lee, Hoon [KoFONS, Seongnam (Korea, Republic of)

    2016-12-15

    Naturally occurring radioactive materials (NORM) can be found all around us, and people are exposed to them no matter what they do or where they live. In this study, two indirect measurement methods for the NORM nuclide U-238 were reviewed: one using HPGe spectrometry on the assumption that radioactive equilibrium is maintained, and the other based on disequilibrium in the radioactive equilibrium relationships between parent and daughter nuclides in the U-238 decay chain. For this review, a complicated pre-processing sequence (breaking -> fusion -> chromatography -> electro-deposition) was used, and a comparative study was then carried out against a direct measurement method using an alpha spectrometer. Through these experiments, we could infer which daughter nuclides have maintained radioactive equilibrium with U-238, and we found that the daughter nuclide suitable for the indirect gamma measurement method is Th-234. Based on Pearson correlation statistics, we could establish the reliability of the results analyzed using Th-234.

  19. Language Revitalization.

    Science.gov (United States)

    Hinton, Leanne

    2003-01-01

    Surveys developments in language revitalization and language death. Focusing on indigenous languages, discusses the role and nature of appropriate linguistic documentation, possibilities for bilingual education, and methods of promoting oral fluency and intergenerational transmission in affected languages. (Author/VWL)

  20. Exploring culture, language and the perception of the nature of science

    Science.gov (United States)

    Sutherland, Dawn

    2002-01-01

    One dimension of early Canadian education is the attempt of the government to use the education system as an assimilative tool to integrate the First Nations and Métis people into Euro-Canadian society. Despite these attempts, many First Nations and Métis people retained their culture and their indigenous language. Few science educators have examined First Nations and Western scientific worldviews and the impact they may have on science learning. This study explored the views some First Nations (Cree) and Euro-Canadian Grade-7-level students in Manitoba had about the nature of science. Both qualitative (open-ended questions and interviews) and quantitative (a Likert-scale questionnaire) instruments were used to explore student views. A central hypothesis to this research programme is the possibility that the different world-views of two student populations, Cree and Euro-Canadian, are likely to influence their perceptions of science. This preliminary study explored a range of methodologies to probe the perceptions of the nature of science in these two student populations. It was found that the two cultural groups differed significantly between some of the tenets in a Nature of Scientific Knowledge Scale (NSKS). Cree students significantly differed from Euro-Canadian students on the developmental, testable and unified tenets of the nature of scientific knowledge scale. No significant differences were found in NSKS scores between language groups (Cree students who speak English in the home and those who speak English and Cree or Cree only). The differences found between language groups were primarily in the open-ended questions where preformulated responses were absent. Interviews about critical incidents provided more detailed accounts of the Cree students' perception of the nature of science. The implications of the findings of this study are discussed in relation to the challenges related to research methodology, further areas for investigation, science

  1. Zipf’s word frequency law in natural language: A critical review and future directions

    Science.gov (United States)

    2014-01-01

    The frequency distribution of words has been a key object of study in statistical linguistics for the past 70 years. This distribution approximately follows a simple mathematical form known as Zipf's law. This article first shows that human language has a highly complex, reliable structure in the frequency distribution over and above this classic law, although prior data visualization methods have obscured this fact. A number of empirical phenomena related to word frequencies are then reviewed. These facts are chosen to be informative about the mechanisms giving rise to Zipf's law and are then used to evaluate many of the theoretical explanations of Zipf's law in language. No prior account straightforwardly explains all the basic facts or is supported with independent evaluation of its underlying assumptions. To make progress at understanding why language obeys Zipf's law, studies must seek evidence beyond the law itself, testing assumptions and evaluating novel predictions with new, independent data. PMID:24664880
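    Zipf's law states that the frequency of the r-th most frequent word falls off roughly as a power law, f(r) ∝ r^(−α) with α close to 1. The sketch below estimates α for any plain-text corpus by a least-squares fit in log-log space; the corpus file name is a placeholder.

```python
import re
from collections import Counter
import numpy as np

# "corpus.txt" is a placeholder: any reasonably large plain-text file.
text = open("corpus.txt", encoding="utf-8").read().lower()
counts = Counter(re.findall(r"[a-z']+", text))

freqs = np.array(sorted(counts.values(), reverse=True), dtype=float)
ranks = np.arange(1, len(freqs) + 1, dtype=float)

# Fit log f = intercept + slope * log r by ordinary least squares.
slope, intercept = np.polyfit(np.log(ranks), np.log(freqs), 1)
print(f"estimated Zipf exponent: {-slope:.2f}")   # close to 1 for natural language
```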

  2. An intelligent tutoring system that generates a natural language dialogue using dynamic multi-level planning.

    Science.gov (United States)

    Woo, Chong Woo; Evens, Martha W; Freedman, Reva; Glass, Michael; Shim, Leem Seop; Zhang, Yuemei; Zhou, Yujian; Michael, Joel

    2006-09-01

    The objective of this research was to build an intelligent tutoring system capable of carrying on a natural language dialogue with a student who is solving a problem in physiology. Previous experiments have shown that students need practice in qualitative causal reasoning to internalize new knowledge and to apply it effectively and that they learn by putting their ideas into words. Analysis of a corpus of 75 hour-long tutoring sessions carried on in keyboard-to-keyboard style by two professors of physiology at Rush Medical College tutoring first-year medical students provided the rules used in tutoring strategies and tactics, parsing, and text generation. The system presents the student with a perturbation to the blood pressure, asks for qualitative predictions of the changes produced in seven important cardiovascular variables, and then launches a dialogue to correct any errors and to probe for possible misconceptions. The natural language understanding component uses a cascade of finite-state machines. The generation is based on lexical functional grammar. Results of experiments with pretests and posttests have shown that using the system for an hour produces significant learning gains and also that even this brief use improves the student's ability to solve problems more than by reading textual material on the topic. Student surveys tell us that students like the system and feel that they learn from it. The system is now in regular use in the first-year physiology course at Rush Medical College. We conclude that the CIRCSIM-Tutor system demonstrates that intelligent tutoring systems can implement effective natural language dialogue with current language technology.

  3. Reliability and Validity of the Turkish Language Version of the Test of Performance Strategies

    Directory of Open Access Journals (Sweden)

    Miçooğulları Bülent Okan

    2017-03-01

    Full Text Available The aim of the present study was to examine the psychometric properties of the Test of Performance Strategies (TOPS; Thomas et al., 1999) in the Turkish population. The TOPS was designed to assess eight psychological skills and strategies used by athletes in competition (activation, automaticity, emotional control, goal-setting, imagery, relaxation, self-talk, and negative thinking) and the same strategies used in training, except that negative thinking is replaced by attentional control. The sample of the study included athletes who were training and competing in a wide variety of sports across a broad range of performance standards. The final sample consisted of 433 males (mean ± s: age 22.47 ± 5.30 years) and 187 females (mean ± s: age 20.97 ± 4.78 years), 620 athletes in total (mean ± s: age 21.25 ± 4.87 years) who voluntarily participated; the TOPS was administered to all participants. Afterward, Confirmatory Factor Analysis (CFA) was conducted with Analysis of Moment Structures (AMOS 18). The comparative fit index (CFI), non-normed fit index (NNFI) and root mean square error of approximation (RMSEA) were used to verify whether the model fit the data. Goodness-of-fit statistics were CFI = .91, NNFI = .92 and RMSEA = .056. These values showed that the tested model is coherent at a satisfactory level. Moreover, results of the confirmatory factor analyses led to the removal of a total of four items (two from competition and two from practice) within the subscale of automaticity. The 28 items within the remaining seven subscales have been validated. In conclusion, the Turkish version of the TOPS is a valid and reliable instrument to assess the psychological skills and strategies used by athletes in competition and practice.

  4. Testing an AAC system that transforms pictograms into natural language with persons with cerebral palsy.

    Science.gov (United States)

    Pahisa-Solé, Joan; Herrera-Joancomartí, Jordi

    2017-10-18

    In this article, we describe a compansion system that transforms the telegraphic language that comes from the use of pictogram-based augmentative and alternative communication (AAC) into natural language. The system was tested with four participants with severe cerebral palsy and varying degrees of linguistic competence and intellectual disabilities. Participants had used pictogram-based AAC at least for the past 30 years each and presented a stable linguistic profile. During tests, which consisted of a total of 40 sessions, participants were able to learn new linguistic skills, such as the use of basic verb tenses, while using the compansion system, which proved a source of motivation. The system can be adapted to the linguistic competence of each person and required no learning curve during tests when none of its special features, like gender, number, verb tense, or sentence type modifiers, were used. Furthermore, qualitative and quantitative results showed a mean communication rate increase of 41.59%, compared to the same communication device without the compansion system, and an overall improvement in the communication experience when the output is in natural language. Tests were conducted in Catalan and Spanish.
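    A toy, English-language stand-in (not the Catalan/Spanish system described) for what a compansion step does: expand a telegraphic pictogram sequence into a grammatical sentence by adding function words and inflecting the verb for the selected tense. The lexicon and rules are invented.

```python
# Minimal invented lexicon: pictogram id -> (word, part of speech).
LEXICON = {
    "I": ("I", "PRON"),
    "WANT": ("want", "VERB"),
    "WATER": ("water", "NOUN"),
    "YESTERDAY": ("yesterday", "ADV"),
}
PAST = {"want": "wanted"}   # toy past-tense table

def compansion(pictograms, tense="present"):
    """Expand a telegraphic pictogram sequence into a full sentence."""
    words = [LEXICON[p][0] for p in pictograms]
    tags = [LEXICON[p][1] for p in pictograms]
    if tense == "past":
        words = [PAST.get(w, w + "ed") if t == "VERB" else w
                 for w, t in zip(words, tags)]
    out = []
    for w, t in zip(words, tags):
        if t == "NOUN":            # crude heuristic: bare object noun gets a determiner
            out.append("some")
        out.append(w)
    sentence = " ".join(out)
    return sentence[0].upper() + sentence[1:] + "."

print(compansion(["I", "WANT", "WATER"]))                       # I want some water.
print(compansion(["YESTERDAY", "I", "WANT", "WATER"], "past"))  # Yesterday I wanted some water.
```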

  5. Resolution of ambiguities in cartoons as an illustration of the role of pragmatics in natural language understanding by computers

    Energy Technology Data Exchange (ETDEWEB)

    Mazlack, L.J.; Paz, N.M.

    1983-01-01

    Newspaper cartoons can graphically display the result of ambiguity in human speech; the result can be unexpected and funny. Likewise, computer analysis of natural language statements also needs to successfully resolve ambiguous situations. Computer techniques already developed use restricted world knowledge in resolving ambiguous language use. This paper illustrates how these techniques can be used in resolving ambiguous situations arising in cartoons. 8 references.

  6. ONTOLOGY BASED MEANINGFUL SEARCH USING SEMANTIC WEB AND NATURAL LANGUAGE PROCESSING TECHNIQUES

    Directory of Open Access Journals (Sweden)

    K. Palaniammal

    2013-10-01

    Full Text Available The semantic web extends the current World Wide Web by adding facilities for the machine-understood description of meaning. The ontology-based search model is used to enhance the efficiency and accuracy of information retrieval. Ontology is the core technology for the semantic web and the mechanism for representing formal and shared domain descriptions. In this paper, we propose ontology-based meaningful search using semantic web and Natural Language Processing (NLP) techniques in the educational domain. First we build the educational ontology, then we present the semantic search system. The search model consists of three parts: embedded spell-checking, finding synonyms using the WordNet API, and querying the ontology using the SPARQL language. The results are sensitive both to spelling correction and to synonymous context. This approach provides more accurate results and the complete details for the selected field in a single page.
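    A sketch of the two middle stages described above: expanding a user keyword with WordNet synonyms via NLTK and turning the expanded terms into a SPARQL query over an educational ontology with rdflib. The ontology file, namespace and property names are hypothetical.

```python
from nltk.corpus import wordnet as wn   # requires: nltk.download("wordnet")
from rdflib import Graph

def synonyms(term: str) -> set[str]:
    """Collect WordNet lemma names for all senses of the term."""
    return {l.name().replace("_", " ") for s in wn.synsets(term) for l in s.lemmas()}

def build_query(terms: set[str]) -> str:
    # Hypothetical schema: courses carry an edu:topic literal.
    values = " ".join(f'"{t}"' for t in sorted(terms))
    return f"""
    PREFIX edu: <http://example.org/edu#>
    SELECT ?course WHERE {{
        ?course edu:topic ?topic .
        VALUES ?topic {{ {values} }}
    }}"""

g = Graph().parse("education.owl", format="xml")   # hypothetical ontology file
terms = {"exam"} | synonyms("exam")
for row in g.query(build_query(terms)):
    print(row.course)
```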

  7. Reliability and validity of neurobehavioral function on the Psychology Experimental Building Language test battery in young adults

    Directory of Open Access Journals (Sweden)

    Brian J. Piper

    2015-12-01

    Full Text Available Background. The Psychology Experiment Building Language (PEBL) software consists of over one-hundred computerized tests based on classic and novel cognitive neuropsychology and behavioral neurology measures. Although the PEBL tests are becoming more widely utilized, there is currently very limited information about the psychometric properties of these measures. Methods. Study I examined inter-relationships among nine PEBL tests including indices of motor-function (Pursuit Rotor and Dexterity), attention (Test of Attentional Vigilance and Time-Wall), working memory (Digit Span Forward), and executive-function (PEBL Trail Making Test, Berg/Wisconsin Card Sorting Test, Iowa Gambling Test, and Mental Rotation) in a normative sample (N = 189, ages 18–22). Study II evaluated test–retest reliability with a two-week inter-test interval between administrations in a separate sample (N = 79, ages 18–22). Results. Moderate intra-test, but low inter-test, correlations were observed and ceiling/floor effects were uncommon. Sex differences were identified on the Pursuit Rotor (Cohen’s d = 0.89) and Mental Rotation (d = 0.31) tests. The correlation between the test and retest was high for tests of motor learning (Pursuit Rotor time on target r = .86) and attention (Test of Attentional Vigilance response time r = .79), intermediate for memory (digit span r = .63) but lower for the executive function indices (Wisconsin/Berg Card Sorting Test perseverative errors = .45, Tower of London moves = .15). Significant practice effects were identified on several indices of executive function. Conclusions. These results are broadly supportive of the reliability and validity of individual PEBL tests in this sample. These findings indicate that the freely downloadable, open-source PEBL battery (http://pebl.sourceforge.net) is a versatile research tool to study individual differences in neurocognitive performance.

  8. Reliability and validity of neurobehavioral function on the Psychology Experimental Building Language test battery in young adults.

    Science.gov (United States)

    Piper, Brian J; Mueller, Shane T; Geerken, Alexander R; Dixon, Kyle L; Kroliczak, Gregory; Olsen, Reid H J; Miller, Jeremy K

    2015-01-01

    Background. The Psychology Experiment Building Language (PEBL) software consists of over one-hundred computerized tests based on classic and novel cognitive neuropsychology and behavioral neurology measures. Although the PEBL tests are becoming more widely utilized, there is currently very limited information about the psychometric properties of these measures. Methods. Study I examined inter-relationships among nine PEBL tests including indices of motor-function (Pursuit Rotor and Dexterity), attention (Test of Attentional Vigilance and Time-Wall), working memory (Digit Span Forward), and executive-function (PEBL Trail Making Test, Berg/Wisconsin Card Sorting Test, Iowa Gambling Test, and Mental Rotation) in a normative sample (N = 189, ages 18-22). Study II evaluated test-retest reliability with a two-week inter-test interval between administrations in a separate sample (N = 79, ages 18-22). Results. Moderate intra-test, but low inter-test, correlations were observed and ceiling/floor effects were uncommon. Sex differences were identified on the Pursuit Rotor (Cohen's d = 0.89) and Mental Rotation (d = 0.31) tests. The correlation between the test and retest was high for tests of motor learning (Pursuit Rotor time on target r = .86) and attention (Test of Attentional Vigilance response time r = .79), intermediate for memory (digit span r = .63) but lower for the executive function indices (Wisconsin/Berg Card Sorting Test perseverative errors = .45, Tower of London moves = .15). Significant practice effects were identified on several indices of executive function. Conclusions. These results are broadly supportive of the reliability and validity of individual PEBL tests in this sample. These findings indicate that the freely downloadable, open-source PEBL battery (http://pebl.sourceforge.net) is a versatile research tool to study individual differences in neurocognitive performance.

  9. Systemic functional grammar in natural language generation linguistic description and computational representation

    CERN Document Server

    Teich, Elke

    1999-01-01

    This volume deals with the computational application of systemic functional grammar (SFG) for natural language generation. In particular, it describes the implementation of a fragment of the grammar of German in the computational framework of KOMET-PENMAN for multilingual generation. The text also presents a specification of explicit well-formedness constraints on syntagmatic structure which are defined in the form of typed feature structures. It thus achieves a model of systemic functional grammar that unites both the strengths of systemics, such as stratification, functional diversification

  10. Visualizing Patient Journals by Combining Vital Signs Monitoring and Natural Language Processing

    DEFF Research Database (Denmark)

    Vilic, Adnan; Petersen, John Asger; Hoppe, Karsten

    2016-01-01

    This paper presents a data-driven approach to graphically presenting text-based patient journals while still maintaining all textual information. The system first creates a timeline representation of a patient's physiological condition during an admission, which is assessed by electronically monitoring vital signs and then combining these into Early Warning Scores (EWS). Hereafter, techniques from Natural Language Processing (NLP) are applied to the existing patient journal to extract all entries. Finally, the two methods are combined into an interactive timeline featuring the ability to see drastic changes in the patient's health, thereby enabling staff to see where in the journal critical events have taken place.
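    A simplified sketch of the vital-signs half of the pipeline: mapping individual measurements to sub-scores and summing them into an early warning score for each point on the timeline. The threshold bands below are loosely NEWS-like but invented for illustration, not the scoring table used in the paper.

```python
# Invented, NEWS-like threshold bands: (lower, upper, points).
BANDS = {
    "heart_rate": [(0, 40, 3), (40, 50, 1), (50, 90, 0), (90, 110, 1),
                   (110, 130, 2), (130, 999, 3)],
    "resp_rate":  [(0, 8, 3), (8, 11, 1), (11, 20, 0), (20, 24, 2), (24, 99, 3)],
    "sys_bp":     [(0, 90, 3), (90, 100, 2), (100, 110, 1), (110, 219, 0),
                   (219, 400, 3)],
}

def subscore(name: str, value: float) -> int:
    for lo, hi, pts in BANDS[name]:
        if lo <= value < hi:
            return pts
    return 0

def early_warning_score(vitals: dict) -> int:
    """Aggregate EWS for one time point; higher means more deteriorated."""
    return sum(subscore(k, v) for k, v in vitals.items())

timeline = [
    {"heart_rate": 78, "resp_rate": 14, "sys_bp": 122},
    {"heart_rate": 118, "resp_rate": 22, "sys_bp": 96},
]
print([early_warning_score(t) for t in timeline])   # [0, 6]
```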

  11. Natural Language Processing Approach for Searching the Quran: Quick and Intuitive

    Directory of Open Access Journals (Sweden)

    Zainal Abidah

    2017-01-01

    Full Text Available The Quran is the scripture that acts as the main reference for people whose religion is Islam. It covers information from politics to science, with a vast amount of information that requires effort to uncover the knowledge behind it. Today, the emergence of smartphones has led to the development of a wide range of applications for enhancing knowledge-seeking activities. This project proposes a mobile application that takes a natural language approach to searching topics in the Quran based on keyword searching. The benefit of the application is two-fold: it is intuitive and it saves time.

  12. On the Possibility of ESP Data Use in Natural Language Processing

    OpenAIRE

    Knopp, Tomáš

    2011-01-01

    The aim of this bachelor thesis is to explore the image-label database produced by the ESP game from the natural language processing (NLP) point of view. The ESP game is an online game in which human players do useful work: they label images. The output of the ESP game is thus a database of images and their labels. What interests us is whether the data collected in the process of labeling images will be of any use in NLP tasks. Specifically, we are interested in the tasks of automatic corefere...

  13. Knowledge acquisition from natural language for expert systems based on classification problem-solving methods

    Science.gov (United States)

    Gomez, Fernando

    1989-01-01

    It is shown how certain kinds of domain independent expert systems based on classification problem-solving methods can be constructed directly from natural language descriptions by a human expert. The expert knowledge is not translated into production rules. Rather, it is mapped into conceptual structures which are integrated into long-term memory (LTM). The resulting system is one in which problem-solving, retrieval and memory organization are integrated processes. In other words, the same algorithm and knowledge representation structures are shared by these processes. As a result of this, the system can answer questions, solve problems or reorganize LTM.

  14. Detecting inpatient falls by using natural language processing of electronic medical records

    Directory of Open Access Journals (Sweden)

    Toyabe Shin-ichi

    2012-12-01

    Full Text Available Abstract Background Incident reporting is the most common method for detecting adverse events in a hospital. However, under-reporting or non-reporting and delay in submission of reports are problems that prevent early detection of serious adverse events. The aim of this study was to determine whether it is possible to promptly detect serious injuries after inpatient falls by using a natural language processing method and to determine which data source is the most suitable for this purpose. Methods We tried to detect adverse events from narrative text data of electronic medical records by using a natural language processing method. We made syntactic category decision rules to detect inpatient falls from text data in electronic medical records. We compared how often the true fall events were recorded in various sources of data including progress notes, discharge summaries, image order entries and incident reports. We applied the rules to these data sources and compared F-measures to detect falls between these data sources with reference to the results of a manual chart review. The lag time between event occurrence and data submission and the degree of injury were compared. Results We made 170 syntactic rules to detect inpatient falls by using a natural language processing method. Information on true fall events was most frequently recorded in progress notes (100%), incident reports (65.0%) and image order entries (12.5%). However, the F-measure to detect falls using the rules was poor when using progress notes (0.12) and discharge summaries (0.24) compared with that when using incident reports (1.00) and image order entries (0.91). Since the results suggested that incident reports and image order entries were possible data sources for prompt detection of serious falls, we focused on a comparison of falls found by incident reports and image order entries. Injury caused by falls found by image order entries was significantly more severe than falls detected by
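    A toy, English-language stand-in for the kind of lexical/syntactic rules described (the study's 170 rules operate on Japanese clinical text): match fall-related expressions in free-text notes while suppressing negated mentions.

```python
import re

FALL_PATTERNS = [
    r"\bfell (?:out of|from|off) (?:the )?bed\b",
    r"\bfound (?:the )?patient on the floor\b",
    r"\bslipped and fell\b",
]
NEGATION = re.compile(r"\b(no|denies|without)\b[^.]{0,30}\bfall", re.I)

def mentions_fall(note: str) -> bool:
    """Rule-based detection of an in-hospital fall in free-text notes."""
    if NEGATION.search(note):
        return False
    return any(re.search(p, note, re.I) for p in FALL_PATTERNS)

print(mentions_fall("Pt slipped and fell while walking to the bathroom."))  # True
print(mentions_fall("Patient denies any fall; ambulating independently."))  # False
```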

  15. Semi-supervised learning and domain adaptation in natural language processing

    CERN Document Server

    Søgaard, Anders

    2013-01-01

    This book introduces basic supervised learning algorithms applicable to natural language processing (NLP) and shows how the performance of these algorithms can often be improved by exploiting the marginal distribution of large amounts of unlabeled data. One reason for that is data sparsity, i.e., the limited amounts of data we have available in NLP. However, in most real-world NLP applications our labeled data is also heavily biased. This book introduces extensions of supervised learning algorithms to cope with data sparsity and different kinds of sampling bias.This book is intended to be both

  16. Reliability and effectiveness of early warning systems for natural hazards: Concept and application to debris flow warning

    International Nuclear Information System (INIS)

    Sättele, Martina; Bründl, Michael; Straub, Daniel

    2015-01-01

    Early Warning Systems (EWS) are increasingly applied to mitigate the risks posed by natural hazards. To compare the effect of EWS with alternative risk reduction measures and to optimize their design and operation, their reliability and effectiveness must be quantified. In the present contribution, a framework approach to the evaluation of threshold-based EWS for natural hazards is presented. The system reliability is classically represented by the Probability of Detection (POD) and Probability of False Alarms (PFA). We demonstrate how the EWS effectiveness, which is a measure of risk reduction, can be formulated as a function of POD and PFA. To model the EWS and compute the reliability, we develop a framework based on Bayesian Networks, which is further extended to a decision graph, facilitating the optimization of the warning system. In a case study, the framework is applied to the assessment of an existing debris flow EWS. The application demonstrates the potential of the framework for identifying the important factors influencing the effectiveness of the EWS and determining optimal warning strategies and system configurations. - Highlights: • Warning systems are increasingly applied measures to reduce natural hazard risks. • Bayesian Networks (BN) are powerful tools to quantify a warning system's reliability. • The effectiveness is defined to assess the optimality of warning systems. • By extending BNs to decision graphs, the optimal warning strategy is identified. • Sensor positioning significantly influences the effectiveness of warning systems
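    The abstract does not give the functional form relating effectiveness to POD and PFA, so the sketch below uses one plausible simplified parameterization of the idea: effectiveness (relative risk reduction) grows with the probability of detection and is eroded by false alarms through reduced compliance. All parameter values and the cry-wolf term are assumptions, not the paper's model.

```python
def effectiveness(pod: float, pfa: float,
                  max_mitigation: float = 0.9,
                  cry_wolf: float = 2.0) -> float:
    """Relative risk reduction achieved by a threshold-based warning system.

    pod: probability that a real event is detected in time.
    pfa: probability of a false alarm per warning period.
    max_mitigation: fraction of consequences avoided when people respond.
    cry_wolf: how strongly repeated false alarms erode compliance (assumed).
    """
    compliance = (1.0 - pfa) ** cry_wolf       # assumed cry-wolf penalty
    return pod * compliance * max_mitigation

for pod, pfa in [(0.95, 0.05), (0.95, 0.40), (0.60, 0.05)]:
    print(pod, pfa, round(effectiveness(pod, pfa), 2))
```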

  17. Questionnaire for Assessing Preschoolers’ Organizational Abilities in Their Natural Environments: Development and Establishment of Validity and Reliability

    Directory of Open Access Journals (Sweden)

    Gila Tubul-Lavy

    2017-01-01

    Full Text Available Despite the consensus in the literature regarding the importance of organizational abilities in performing daily tasks, currently there is no assessment that focuses exclusively on such abilities among young children. The study aims to develop a Questionnaire for Assessing Preschoolers’ Organizational Abilities (QAPOA), Parents’ and Teachers’ versions, and to examine their reliability and validity. QAPOA was distributed to preschool teachers and parents of 215 typically developing 4–5.6-year-old children. The teachers’ and parents’ versions demonstrated good internal consistency. Factor analysis performed to examine the tool’s content validity yielded two factors: motor-based and language-based OA. Furthermore, both versions of the questionnaire demonstrated significant differences between OA among boys and girls. Concurrent validity was demonstrated between the QAPOA total scores and the equivalent subscale of the BRIEF-P. Given these findings, different cut-off scores were established for identifying boys and girls with either motor-based and/or language-based OA. The results indicate that both the teachers’ and parents’ versions of the QAPOA are reliable and valid measures of children’s organizational abilities. The questionnaires can assess and identify risk for organizational disabilities as early as preschool age. Thus, it can contribute to the planning of appropriate intervention programs and the prevention of difficulties in the future.

  18. Constructed Action, the Clause and the Nature of Syntax in Finnish Sign Language

    Directory of Open Access Journals (Sweden)

    Jantunen Tommi

    2017-01-01

    Full Text Available This paper investigates the interplay of constructed action and the clause in Finnish Sign Language (FinSL. Constructed action is a form of gestural enactment in which the signers use their hands, face and other parts of the body to represent the actions, thoughts or feelings of someone they are referring to in the discourse. With the help of frequencies calculated from corpus data, this article shows firstly that when FinSL signers are narrating a story, there are differences in how they use constructed action. Then the paper argues that there are differences also in the prototypical structure, linkage type and non-manual activity of clauses, depending on the presence or non-presence of constructed action. Finally, taking the view that gesturality is an integral part of language, the paper discusses the nature of syntax in sign languages and proposes a conceptualization in which syntax is seen as a set of norms distributed on a continuum between a categorial-conventional end and a gradient-unconventional end.

  19. Children with Specific Language Impairment and Their Families: A Future View of Nature Plus Nurture and New Technologies for Comprehensive Language Intervention Strategies.

    Science.gov (United States)

    Rice, Mabel L

    2016-11-01

    Future perspectives on children with language impairments are framed from what is known about children with specific language impairment (SLI). A summary of the current state of services is followed by discussion of how these children can be overlooked and misunderstood and consideration of why it is so hard for some children to acquire language when it is effortless for most children. Genetic influences are highlighted, with the suggestion that nature plus nurture should be considered in present as well as future intervention approaches. A nurture perspective highlights the family context of the likelihood of SLI for some of the children. Future models of the causal pathways may provide more specific information to guide gene-treatment decisions, in ways parallel to current personalized medicine approaches. Future treatment options can build on the potential of electronic technologies and social media to provide personalized treatment methods available at a time and place convenient for the person to use as often as desired. The speech-language pathologist could oversee a wide range of treatment options and monitor evidence provided electronically to evaluate progress and plan future treatment steps. Most importantly, future methods can provide lifelong language acquisition activities that maintain the privacy and dignity of persons with language impairment, and in so doing will in turn enhance the effectiveness of speech-language pathologists.

  20. Language of the Earth: Exploring Natural Hazards through a Literary Anthology

    Science.gov (United States)

    Malamud, B. D.; Rhodes, F. H. T.

    2009-04-01

    This paper explores natural hazards teaching and communications through the use of a literary anthology of writings about the earth aimed at non-experts. Teaching natural hazards in high-school and university introductory Earth Science and Geography courses revolves mostly around lectures, examinations, and laboratory demonstrations/activities. Often the result of such a course is that a student 'memorizes' the answers and is penalized when they miss a given fact [e.g., "You lost one point because you were off by 50 km/hr on the wind speed of an F5 tornado."]. Although facts and general methodologies are certainly important when teaching natural hazards, a student's assimilation of, and enthusiasm for, this knowledge is strongly enhanced when it is supplemented by writings about the Earth. In this paper, we discuss a literary anthology which we developed [Language of the Earth, Rhodes, Stone, Malamud, Wiley-Blackwell, 2008] which includes many descriptions of natural hazards. Using first- and second-hand accounts of landslides, earthquakes, tsunamis, floods and volcanic eruptions, through the writings of McPhee, Gaskill, Voltaire, Austin, Cloos, and many others, hazards become 'alive', and more than 'just' a compilation of facts and processes. Using short excerpts such as these, or from other similar anthologies, of remarkably written accounts and discussions about natural hazards results in 'dry' facts becoming more than just facts. These often highly personal viewpoints of our catastrophic world provide a useful supplement to a student's understanding of the turbulent world in which we live.

  1. Comparative study on the customization of natural language interfaces to databases.

    Science.gov (United States)

    Pazos R, Rodolfo A; Aguirre L, Marco A; González B, Juan J; Martínez F, José A; Pérez O, Joaquín; Verástegui O, Andrés A

    2016-01-01

    In the last decades the popularity of natural language interfaces to databases (NLIDBs) has increased, because in many cases information obtained from them is used for making important business decisions. Unfortunately, the complexity of their customization by database administrators makes them difficult to use. In order for an NLIDB to obtain a high percentage of correctly translated queries, it must be correctly customized for the database to be queried. In most cases the performance reported in the NLIDB literature is the highest possible, i.e., the performance obtained when the interfaces were customized by the implementers. However, for end users what matters more is the performance that the interface can yield when the NLIDB is customized by someone other than the implementers. Unfortunately, very few articles report NLIDB performance when the NLIDBs are not customized by the implementers. This article presents a semantically-enriched data dictionary (which permits solving many of the problems that occur when translating from natural language to SQL) and an experiment in which two groups of undergraduate students customized our NLIDB and English Language Frontend (ELF), considered one of the best available commercial NLIDBs. The experimental results show that, when customized by the first group, our NLIDB obtained 44.69% of correctly answered queries and ELF 11.83% for the ATIS database, and when customized by the second group, our NLIDB attained 77.05% and ELF 13.48%. The performance attained by our NLIDB when customized by ourselves was 90%.
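    A toy sketch of how a semantically-enriched data dictionary can drive natural-language-to-SQL translation: domain phrases are mapped to tables, columns and literal values, and the matched entries are assembled into a query. The schema, phrases and single query pattern are invented and far simpler than a real NLIDB.

```python
# Invented data dictionary: natural-language phrases -> schema elements.
DICTIONARY = {
    "flights": ("flight", None),                    # table only
    "departure city": ("flight", "from_city"),      # table, column
    "arrival city": ("flight", "to_city"),
    "boston": ("flight", ("from_city", "Boston")),  # column + literal value
}

def translate(question: str) -> str:
    """Assemble a SELECT statement from dictionary entries found in the question."""
    question = question.lower()
    table, columns, filters = None, [], []
    for phrase, (tbl, info) in DICTIONARY.items():
        if phrase in question:
            table = table or tbl
            if isinstance(info, tuple):
                filters.append(f"{info[0]} = '{info[1]}'")
            elif info:
                columns.append(info)
    cols = ", ".join(columns) or "*"
    where = f" WHERE {' AND '.join(filters)}" if filters else ""
    return f"SELECT {cols} FROM {table}{where};"

print(translate("List the arrival city of flights from Boston"))
# SELECT to_city FROM flight WHERE from_city = 'Boston';
```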

  2. Language and Interactional Discourse: Deconstructing the Talk-Generating Machinery in Natural Conversation

    Directory of Open Access Journals (Sweden)

    Amaechi Uneke Enyi

    2015-08-01

    Full Text Available The study, entitled “Language and Interactional Discourse: Deconstructing the Talk-Generating Machinery in Natural Conversation,” is an analysis of spontaneous and informal conversation. The study, carried out in the theoretical and methodological tradition of Ethnomethodology, was aimed at explicating how ordinary talk is organized and produced, how people coordinate their talk-in-interaction, how meanings are determined, and the role of talk in the wider social processes. The study followed the basic assumption of conversation analysis, which is that talk is not just a product of two ‘speaker-hearers’ who attempt to exchange information or convey messages to each other. Rather, participants in conversation are seen to be mutually orienting to, and collaborating in order to achieve, orderly and meaningful communication. The analytic objective is therefore to make clear the procedures on which speakers rely to produce utterances and by which they make sense of other speakers’ talk. The datum used for this study was a recorded informal conversation between two (and later three) middle-class civil servants who are friends. The recording was done in such a way that the participants were not aware that they were being recorded. The recording was later transcribed in a way that we believe is faithful to the spontaneity and informality of the talk. Our findings showed that conversation has its own features and is an ordered and structured, day-by-day social event. Specifically, utterances are designed and informed by organized procedures, methods and resources which are tied to the contexts in which they are produced, and which participants are privy to by virtue of their membership of a culture or a natural language community. Keywords: Language, Discourse and Conversation

  3. Teaching the tacit knowledge of programming to novices with natural language tutoring

    Science.gov (United States)

    Lane, H. Chad; Vanlehn, Kurt

    2005-09-01

    For beginning programmers, inadequate problem solving and planning skills are among the most salient of their weaknesses. In this paper, we test the efficacy of natural language tutoring to teach and scaffold acquisition of these skills. We describe ProPL (Pro-PELL), a dialogue-based intelligent tutoring system that elicits goal decompositions and program plans from students in natural language. The system uses a variety of tutoring tactics that leverage students' intuitive understandings of the problem, how it might be solved, and the underlying concepts of programming. We report the results of a small-scale evaluation comparing students who used ProPL with a control group who read the same content. Our primary findings are that students who received tutoring from ProPL seem to have developed an improved ability to solve the composition problem and displayed behaviors that suggest they were able to think at greater levels of abstraction than students in the read-only group.

  4. Natural language processing systems for capturing and standardizing unstructured clinical information: A systematic review.

    Science.gov (United States)

    Kreimeyer, Kory; Foster, Matthew; Pandey, Abhishek; Arya, Nina; Halford, Gwendolyn; Jones, Sandra F; Forshee, Richard; Walderhaug, Mark; Botsis, Taxiarchis

    2017-09-01

    We followed a systematic approach based on the Preferred Reporting Items for Systematic Reviews and Meta-Analyses to identify existing clinical natural language processing (NLP) systems that generate structured information from unstructured free text. Seven literature databases were searched with a query combining the concepts of natural language processing and structured data capture. Two reviewers screened all records for relevance during two screening phases, and information about clinical NLP systems was collected from the final set of papers. A total of 7149 records (after removing duplicates) were retrieved and screened, and 86 were determined to fit the review criteria. These papers contained information about 71 different clinical NLP systems, which were then analyzed. The NLP systems address a wide variety of important clinical and research tasks. Certain tasks are well addressed by the existing systems, while others remain as open challenges that only a small number of systems attempt, such as extraction of temporal information or normalization of concepts to standard terminologies. This review has identified many NLP systems capable of processing clinical free text and generating structured output, and the information collected and evaluated here will be important for prioritizing development of new approaches for clinical NLP. Copyright © 2017 Elsevier Inc. All rights reserved.

  5. Text to Speech Based on Natural Language in an English Tenses Learning Application

    Directory of Open Access Journals (Sweden)

    Amak Yunus

    2014-09-01

    Full Text Available Language is a systematic way of communicating using sounds or symbols that carry meaning, spoken through the mouth. Language is also written following applicable rules. One of the languages most widely used around the world is English. However, there are several obstacles when we learn from a teacher or instructor: the time a teacher can give is limited to school or tutoring hours only, and once students come home from school or tutoring, they have to study English on their own. From this problem arose the idea for a study on building an application that can give students the knowledge to learn English independently, covering the transformation of positive sentences into negative and interrogative sentences. In addition, the application can also teach how to pronounce sentences in English. In essence, the contribution of this research is that stakeholders from junior high school up to senior high/vocational school level can use a text-to-speech application based on natural language processing to learn English tenses. The application can read English sentences aloud and can construct interrogative and negative sentences from their positive forms in several English tenses. Keywords: Natural language processing, Text to speech
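    A toy sketch of the sentence-transformation component for one tense (simple present): deriving the negative and interrogative forms from a positive sentence. The word lists and morphology rules are simplified assumptions, far from full English grammar coverage.

```python
THIRD_SINGULAR = {"he", "she", "it"}

def base_form(verb: str) -> str:
    # Extremely simplified: strip the 3rd-person -s/-es ending.
    if verb.endswith(("ches", "shes", "oes", "xes", "sses")):
        return verb[:-2]
    return verb[:-1] if verb.endswith("s") else verb

def transform(sentence: str):
    """Return (negative, interrogative) forms of a simple-present sentence."""
    subj, verb, *rest = sentence.rstrip(".").split()
    do = "does" if subj.lower() in THIRD_SINGULAR else "do"
    base = base_form(verb) if do == "does" else verb
    tail = " ".join(rest)
    negative = f"{subj} {do} not {base} {tail}.".strip()
    question = f"{do.capitalize()} {subj.lower()} {base} {tail}?".strip()
    return negative, question

print(transform("She plays tennis."))
# ('She does not play tennis.', 'Does she play tennis?')
```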

  6. PERSISTENCE AND ACADEMIC ACHIEVEMENT IN FOREIGN LANGUAGE IN NATURAL SCIENCES STUDENTS

    Directory of Open Access Journals (Sweden)

    Alexandr I Krupnov

    2017-12-01

    Full Text Available The article discusses the results of an empirical study of the association between persistence variables and academic achievement in foreign languages. The sample includes students of the Faculty of Physics, Mathematics and Natural Science at the RUDN University (n = 115), divided into 5 subsamples, two of which are featured in the present study (the most and the least successful students). Persistence as a personality trait is studied within A.I. Krupnov’s system-functional approach, and A.I. Krupnov’s paper-and-pencil test was used to measure persistence variables. Academic achievement was measured on four parameters: Phonetics, Grammar, Speaking, and Political Vocabulary, based on the grades students received during the academic year. The analysis revealed that persistence displays different associations with the academic achievement variables in the more and less successful subsamples; the general prominence of this trait matters more for unsuccessful students. Phonetics is the achievement variable most strongly associated with persistence, which is unsurprising given that it is a skill acquired through sustained practice, the essence of persistence. Grammar is not associated with persistence and probably relates to other factors. Unsuccessful students may have difficulty separating the various aspects of language acquisition from one another, which teachers should take into consideration.

  7. Automatic generation of natural language nursing shift summaries in neonatal intensive care: BT-Nurse.

    Science.gov (United States)

    Hunter, James; Freer, Yvonne; Gatt, Albert; Reiter, Ehud; Sripada, Somayajulu; Sykes, Cindy

    2012-11-01

    Our objective was to determine whether and how a computer system could automatically generate helpful natural language nursing shift summaries solely from an electronic patient record system, in a neonatal intensive care unit (NICU). A system was developed which automatically generates partial NICU shift summaries (for the respiratory and cardiovascular systems), using data-to-text technology. It was evaluated for 2 months in the NICU at the Royal Infirmary of Edinburgh, under supervision. In an on-ward evaluation, a substantial majority of the summaries were found by outgoing and incoming nurses to be understandable (90%), and a majority were found to be accurate (70%) and helpful (59%). The evaluation also served to identify some outstanding issues, especially with regard to extra content the nurses wanted to see in the computer-generated summaries. It is technically possible to automatically generate limited natural language NICU shift summaries from an electronic patient record. However, it proved difficult to handle electronic data that were intended primarily for display to the medical staff, and considerable engineering effort would be required to create a deployable system from our proof-of-concept software. Copyright © 2012 Elsevier B.V. All rights reserved.
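
    A minimal data-to-text sketch in the spirit of the system described above, showing how a vital-sign time series can be abstracted into a trend and rendered as a natural language sentence. This is not the BT-Nurse implementation; the signal names, thresholds, and phrasing are illustrative assumptions.

```python
# Minimal data-to-text sketch: summarize a vital-sign time series as a sentence.
# Not the BT-Nurse system; signals, thresholds, and wording are illustrative.
from statistics import mean

def describe_trend(values, rise=2.0, fall=-2.0):
    """Classify the overall trend by comparing the means of the two halves."""
    mid = len(values) // 2
    delta = mean(values[mid:]) - mean(values[:mid])
    if delta > rise:
        return "increased"
    if delta < fall:
        return "decreased"
    return "remained stable"

def shift_summary(signal_name, unit, values):
    trend = describe_trend(values)
    return (f"{signal_name} {trend} over the shift, "
            f"from {values[0]} to {values[-1]} {unit} "
            f"(mean {mean(values):.1f} {unit}).")

if __name__ == "__main__":
    heart_rate = [152, 155, 158, 161, 166, 170]   # hypothetical NICU readings
    spo2 = [95, 96, 95, 94, 95, 95]
    print(shift_summary("Heart rate", "beats/min", heart_rate))
    print(shift_summary("SpO2", "%", spo2))
```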

  8. Knowledge-based machine indexing from natural language text: Knowledge base design, development, and maintenance

    Science.gov (United States)

    Genuardi, Michael T.

    1993-01-01

    One strategy for machine-aided indexing (MAI) is to provide a concept-level analysis of the textual elements of documents or document abstracts. In such systems, natural-language phrases are analyzed in order to identify and classify concepts related to a particular subject domain. The overall performance of these MAI systems is largely dependent on the quality and comprehensiveness of their knowledge bases. These knowledge bases function to (1) define the relations between a controlled indexing vocabulary and natural language expressions; (2) provide a simple mechanism for disambiguation and the determination of relevancy; and (3) allow the extension of concept-hierarchical structure to all elements of the knowledge file. After a brief description of the NASA Machine-Aided Indexing system, concerns related to the development and maintenance of MAI knowledge bases are discussed. Particular emphasis is given to statistically-based text analysis tools designed to aid the knowledge base developer. One such tool, the Knowledge Base Building (KBB) program, presents the domain expert with a well-filtered list of synonyms and conceptually-related phrases for each thesaurus concept. Another tool, the Knowledge Base Maintenance (KBM) program, functions to identify areas of the knowledge base affected by changes in the conceptual domain (for example, the addition of a new thesaurus term). An alternate use of the KBM as an aid in thesaurus construction is also discussed.
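
    The statistically-based aids mentioned above can be approximated with simple co-occurrence scoring. The sketch below suggests candidate phrases related to a thesaurus concept by pointwise mutual information over a toy corpus; the corpus, tokenizer, and scoring are illustrative assumptions, not the NASA KBB program.

```python
# Toy related-term suggestion via document-level PMI; not the NASA KBB tool.
import math
from collections import Counter
from itertools import combinations

docs = [
    "cryogenic propellant storage for spacecraft propulsion",
    "liquid hydrogen cryogenic tank insulation",
    "spacecraft propulsion using liquid hydrogen propellant",
    "thermal insulation of cryogenic storage tanks",
]

concept = "cryogenic"
tokenized = [set(d.split()) for d in docs]
term_counts = Counter(t for doc in tokenized for t in doc)
pair_counts = Counter(frozenset(p) for doc in tokenized
                      for p in combinations(sorted(doc), 2))
n_docs = len(docs)

def pmi(a, b):
    joint = pair_counts[frozenset((a, b))] / n_docs
    if joint == 0:
        return float("-inf")
    return math.log(joint / ((term_counts[a] / n_docs) * (term_counts[b] / n_docs)))

ranked = sorted(((pmi(concept, t), t) for t in term_counts if t != concept),
                reverse=True)
print(f"Terms most associated with '{concept}':")
for score, term in ranked[:5]:
    print(f"  {term}: PMI = {score:.2f}")
```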

  9. A semantic-based approach for querying linked data using natural language

    KAUST Repository

    Paredes-Valverde, Mario Andrés

    2016-01-11

    The semantic Web aims to provide Web information with a well-defined meaning and make it understandable not only by humans but also by computers, thus allowing the automation, integration and reuse of high-quality information across different applications. However, current information retrieval mechanisms for semantic knowledge bases are intended to be used only by expert users. In this work, we propose a natural language interface that allows non-expert users to access this kind of information by formulating queries in natural language. The present approach uses a domain-independent ontology model to represent the question's structure and context. This model also allows determination of the answer type expected by the user, based on a proposed question classification. To prove the effectiveness of our approach, we have conducted an evaluation in the music domain using LinkedBrainz, an effort to provide the MusicBrainz information as structured data on the Web by means of Semantic Web technologies. Our proposal obtained encouraging results based on the F-measure metric, ranging from 0.74 to 0.82 for a corpus of questions generated by a group of real-world end users. © The Author(s) 2015.

  10. A semantic-based approach for querying linked data using natural language

    KAUST Repository

    Paredes-Valverde, Mario Andrés; Valencia-García, Rafael; Rodriguez-Garcia, Miguel Angel; Colomo-Palacios, Ricardo; Alor-Hernández, Giner

    2016-01-01

    The semantic Web aims to provide Web information with a well-defined meaning and make it understandable not only by humans but also by computers, thus allowing the automation, integration and reuse of high-quality information across different applications. However, current information retrieval mechanisms for semantic knowledge bases are intended to be used only by expert users. In this work, we propose a natural language interface that allows non-expert users to access this kind of information by formulating queries in natural language. The present approach uses a domain-independent ontology model to represent the question's structure and context. This model also allows determination of the answer type expected by the user, based on a proposed question classification. To prove the effectiveness of our approach, we have conducted an evaluation in the music domain using LinkedBrainz, an effort to provide the MusicBrainz information as structured data on the Web by means of Semantic Web technologies. Our proposal obtained encouraging results based on the F-measure metric, ranging from 0.74 to 0.82 for a corpus of questions generated by a group of real-world end users. © The Author(s) 2015.
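
    A minimal sketch of how a template-based question-to-SPARQL mapping might look is shown below. It is not the authors' system; the question patterns, the Music Ontology and Dublin Core prefixes, and the specific properties are assumptions made for illustration, and the generated query is only printed, not executed against an endpoint.

```python
# Template-based natural language to SPARQL sketch (illustrative, not the paper's system).
import re

PREFIXES = """PREFIX mo:   <http://purl.org/ontology/mo/>
PREFIX foaf: <http://xmlns.com/foaf/0.1/>
PREFIX dc:   <http://purl.org/dc/elements/1.1/>
"""

# Each template pairs a question pattern with a SPARQL query skeleton.
TEMPLATES = [
    (re.compile(r"which albums did (?P<artist>.+?) release\??$", re.I),
     """SELECT ?title WHERE {{
  ?artist a mo:MusicArtist ; foaf:name "{artist}" .
  ?record a mo:Record ; foaf:maker ?artist ; dc:title ?title .
}}"""),
    (re.compile(r"who made the album (?P<record>.+?)\??$", re.I),
     """SELECT ?name WHERE {{
  ?record a mo:Record ; dc:title "{record}" ; foaf:maker ?artist .
  ?artist foaf:name ?name .
}}"""),
]

def question_to_sparql(question: str) -> str:
    for pattern, skeleton in TEMPLATES:
        match = pattern.match(question.strip())
        if match:
            slots = {k: v.strip() for k, v in match.groupdict().items()}
            return PREFIXES + skeleton.format(**slots)
    raise ValueError("No template matches this question")

print(question_to_sparql("Which albums did Radiohead release?"))
```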

  11. Reliability Assessment of 2400 MWth Gas-Cooled Fast Reactor Natural Circulation Decay Heat Removal in Pressurized Situations

    Directory of Open Access Journals (Sweden)

    C. Bassi

    2008-01-01

    Full Text Available As the 2400 MWth gas-cooled fast reactor concept makes use of passive safety features in combination with active safety systems, the assessment of natural circulation decay heat removal (NCDHR) reliability and performance within the ongoing probabilistic safety assessment supporting the reactor design, named “probabilistic engineering assessment” (PEA), constitutes a challenge. Within the 5th Framework Program for Research and Development (FPRD) of the European Community, a methodology has been developed to evaluate the reliability of passive systems characterized by a moving fluid and whose operation is based on physical principles such as natural circulation. This reliability method for passive systems (RMPS) is based on the propagation of uncertainties through thermal-hydraulic (T-H) calculations. The aim of this exercise is ultimately to determine the performance reliability of the DHR system operating in a “passive” mode, taking into account the uncertainties of the parameters retained for the thermal-hydraulic calculations performed with the CATHARE 2 code. According to the preliminary PEA results, which exhibit the weight of pressurized scenarios (i.e., with an intact primary circuit boundary) in the core damage frequency (CDF), the RMPS exercise focuses first on NCDHR performance under these T-H conditions.
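
    The core of the RMPS idea, propagating input uncertainties through a thermal-hydraulic calculation and estimating the probability that a performance criterion is violated, can be illustrated with a simple Monte Carlo sketch. The surrogate model, parameter distributions, and temperature limit below are purely illustrative assumptions; an actual study would run a system code such as CATHARE 2 rather than this algebraic stand-in.

```python
# Monte Carlo propagation of parameter uncertainties through a toy surrogate of a
# thermal-hydraulic calculation; distributions, model, and limit are illustrative.
import numpy as np

rng = np.random.default_rng(42)
n_samples = 100_000

# Hypothetical uncertain inputs (units and ranges are assumptions).
decay_power = rng.normal(loc=1.0, scale=0.05, size=n_samples)          # normalized decay power
loop_resistance = rng.lognormal(mean=0.0, sigma=0.10, size=n_samples)  # friction multiplier
heat_sink_temp = rng.uniform(low=35.0, high=55.0, size=n_samples)      # degC

def surrogate_peak_temp(power, resistance, sink):
    """Toy stand-in for a T-H code: peak temperature rises with decay power,
    with flow resistance (weaker natural circulation), and with a warmer sink."""
    natural_circulation_flow = 1.0 / np.sqrt(resistance)
    return 600.0 + 450.0 * power / natural_circulation_flow + 2.0 * (sink - 45.0)

peak_temp = surrogate_peak_temp(decay_power, loop_resistance, heat_sink_temp)
limit = 1150.0  # degC, illustrative acceptance criterion
failure_prob = np.mean(peak_temp > limit)

print(f"Estimated probability of exceeding {limit:.0f} degC: {failure_prob:.2e}")
print(f"95th percentile peak temperature: {np.percentile(peak_temp, 95):.0f} degC")
```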

  12. Selected Topics on Systems Modeling and Natural Language Processing: Editorial Introduction to the Issue 7 of CSIMQ

    Directory of Open Access Journals (Sweden)

    Witold Andrzejewski

    2016-07-01

    Full Text Available The seventh issue of Complex Systems Informatics and Modeling Quarterly presents five papers devoted to two distinct research topics: systems modeling and natural language processing (NLP). Both of these subjects are very important in computer science. Through modeling we can simplify the studied problem by concentrating on only one aspect at a time; moreover, a properly constructed model allows the modeler to work at higher levels of abstraction without having to concentrate on details. Since the size and complexity of information systems grow rapidly, creating good models of such systems is crucial. The analysis of natural language is slowly becoming a widely used tool in commerce and day-to-day life. Opinion mining allows recommender systems to provide accurate recommendations based on user-generated reviews, and speech recognition and NLP are the basis for such widely used personal assistants as Apple’s Siri, Microsoft’s Cortana, and Google Now. While a lot of work has already been done on natural language processing, the research usually concerns widely used languages such as English. Consequently, natural language processing in languages other than English is a very relevant subject and is addressed in this issue.

  13. Gesture language use in natural UI: pen-based sketching in conceptual design

    Science.gov (United States)

    Ma, Cuixia; Dai, Guozhong

    2003-04-01

    Natural user interfaces are among the important next-generation interaction styles. Computers are no longer tools for only a few specialists in particular areas, but for most people, and ubiquitous computing makes the world more comfortable. In the design domain, current systems, which require detailed information, cannot conveniently support conceptual design in the early phase. Pen and paper are natural and simple tools in our daily life, especially in design work, and gestures are a useful and natural mode of pen-based interaction. In a natural UI, gestures can be introduced and used in much the same way as existing interaction resources. However, gestures are usually defined beforehand, without taking the users' intentions into account, and are recognized as representing something only within particular applications, without being transferable to others. We provide a gesture description language (GDL) to make useful gestures conveniently available to applications; it can be used as an independent control resource, like menus or icons, in applications. We therefore present the idea from two perspectives: the application-dependent point of view and the application-independent point of view.

  14. Status of the IAEA coordinated research project on natural circulation phenomena, modelling, and reliability of passive systems that utilize natural circulation

    International Nuclear Information System (INIS)

    Reyes, J.N. Jr.; Cleveland, J.; Aksan, N.

    2004-01-01

    The International Atomic Energy Agency (IAEA) has established a Coordinated Research Project (CRP) titled "Natural Circulation Phenomena, Modelling and Reliability of Passive Safety Systems that Utilize Natural Circulation". This work has been organized within the framework of the IAEA Department of Nuclear Energy's Technical Working Groups for Advanced Technologies for Light Water Reactors and Heavy Water Reactors (the TWG-LWR and the TWG-HWR). This CRP is part of IAEA's effort to foster international collaborations that strive to improve the economic performance of future water-cooled nuclear power plants while meeting stringent safety requirements. Thus far, IAEA has established 12 research agreements with organizations from industrialized Member States and 3 research contracts with organizations from developing Member States. The objective of the CRP is to enhance our understanding of natural circulation phenomena in water-cooled reactors and passive safety systems. The CRP participants are particularly interested in establishing a natural circulation and passive safety system thermal hydraulic database that can be used to benchmark computer codes for advanced reactor systems design and safety analysis. An important aspect of this CRP relates to developing methodologies to assess the reliability of passive safety systems in advanced reactor designs. This paper describes the motivation and objectives of the CRP, the research plan, and the role of each of the participating organizations. (author)

  15. Prediction of Emergency Department Hospital Admission Based on Natural Language Processing and Neural Networks.

    Science.gov (United States)

    Zhang, Xingyu; Kim, Joyce; Patzer, Rachel E; Pitts, Stephen R; Patzer, Aaron; Schrager, Justin D

    2017-10-26

    To describe and compare logistic regression and neural network modeling strategies to predict hospital admission or transfer following initial presentation to Emergency Department (ED) triage, with and without the addition of natural language processing elements. Using data from the National Hospital Ambulatory Medical Care Survey (NHAMCS), a cross-sectional probability sample of United States EDs from the 2012 and 2013 survey years, we developed several predictive models with the outcome being admission to the hospital or transfer vs. discharge home. We included patient characteristics immediately available after the patient has presented to the ED and undergone a triage process, and used this information to construct logistic regression (LR) and multilayer neural network (MLNN) models which included natural language processing (NLP) and principal component analysis of the patient's reason for visit. Ten-fold cross validation was used to test the predictive capacity of each model, and the area under the receiver operating characteristic curve (AUC) was then calculated for each model. Of the 47,200 ED visits from 642 hospitals, 6,335 (13.42%) resulted in hospital admission (or transfer). A total of 48 principal components were extracted by NLP from the reason for visit fields, which explained 75% of the overall variance for hospitalization. In the model including only structured variables, the AUC was 0.824 (95% CI 0.818-0.830) for logistic regression and 0.823 (95% CI 0.817-0.829) for MLNN. Models including only free-text information generated an AUC of 0.742 (95% CI 0.731-0.753) for logistic regression and 0.753 (95% CI 0.742-0.764) for MLNN. When both structured and free-text variables were included, the AUC reached 0.846 (95% CI 0.839-0.853) for logistic regression and 0.844 (95% CI 0.836-0.852) for MLNN. The predictive accuracy of hospital admission or transfer for patients who presented to ED triage overall was good, and was improved with the inclusion of free-text data from the patient's reason for visit.
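
    A hedged sketch of this kind of modeling setup is shown below: structured triage variables are combined with a low-dimensional representation of the free-text reason for visit, and logistic regression is compared with a small neural network by cross-validated AUC. The data are synthetic, and TruncatedSVD stands in for the principal component step on the sparse text features; none of this reproduces the study's actual NHAMCS pipeline.

```python
# Synthetic illustration of combining structured triage data with free-text features
# and comparing logistic regression vs. a neural network by cross-validated AUC.
import numpy as np
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import Pipeline

# Tiny synthetic stand-in for triage records.
data = pd.DataFrame({
    "age": [34, 71, 25, 60, 45, 80, 19, 55, 66, 40, 73, 29],
    "triage_acuity": [3, 2, 4, 2, 3, 1, 5, 3, 2, 4, 2, 4],
    "reason_for_visit": [
        "chest pain and shortness of breath", "fall with hip pain",
        "sore throat", "abdominal pain and vomiting", "headache",
        "altered mental status", "ankle sprain", "fever and cough",
        "syncope", "back pain", "shortness of breath", "laceration to hand",
    ],
})
admitted = np.array([1, 1, 0, 1, 0, 1, 0, 0, 1, 0, 1, 0])

text_branch = Pipeline([
    ("tfidf", TfidfVectorizer()),
    ("svd", TruncatedSVD(n_components=5, random_state=0)),  # stands in for PCA on text
])
features = ColumnTransformer([
    ("structured", "passthrough", ["age", "triage_acuity"]),
    ("text", text_branch, "reason_for_visit"),
])

models = [
    ("logistic regression", LogisticRegression(max_iter=1000)),
    ("neural network", MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)),
]
for name, clf in models:
    pipeline = Pipeline([("features", features), ("clf", clf)])
    auc = cross_val_score(pipeline, data, admitted, cv=3, scoring="roc_auc")
    print(f"{name}: mean AUC = {auc.mean():.2f}")
```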

  16. How many kinds of reasoning? Inference, probability, and natural language semantics.

    Science.gov (United States)

    Lassiter, Daniel; Goodman, Noah D

    2015-03-01

    The "new paradigm" unifying deductive and inductive reasoning in a Bayesian framework (Oaksford & Chater, 2007; Over, 2009) has been claimed to be falsified by results which show sharp differences between reasoning about necessity vs. plausibility (Heit & Rotello, 2010; Rips, 2001; Rotello & Heit, 2009). We provide a probabilistic model of reasoning with modal expressions such as "necessary" and "plausible" informed by recent work in formal semantics of natural language, and show that it predicts the possibility of non-linear response patterns which have been claimed to be problematic. Our model also makes a strong monotonicity prediction, while two-dimensional theories predict the possibility of reversals in argument strength depending on the modal word chosen. Predictions were tested using a novel experimental paradigm that replicates the previously-reported response patterns with a minimal manipulation, changing only one word of the stimulus between conditions. We found a spectrum of reasoning "modes" corresponding to different modal words, and strong support for our model's monotonicity prediction. This indicates that probabilistic approaches to reasoning can account in a clear and parsimonious way for data previously argued to falsify them, as well as new, more fine-grained, data. It also illustrates the importance of careful attention to the semantics of language employed in reasoning experiments. Copyright © 2014 Elsevier B.V. All rights reserved.

  17. CLAMP - a toolkit for efficiently building customized clinical natural language processing pipelines.

    Science.gov (United States)

    Soysal, Ergin; Wang, Jingqi; Jiang, Min; Wu, Yonghui; Pakhomov, Serguei; Liu, Hongfang; Xu, Hua

    2017-11-24

    Existing general clinical natural language processing (NLP) systems such as MetaMap and Clinical Text Analysis and Knowledge Extraction System have been successfully applied to information extraction from clinical text. However, end users often have to customize existing systems for their individual tasks, which can require substantial NLP skills. Here we present CLAMP (Clinical Language Annotation, Modeling, and Processing), a newly developed clinical NLP toolkit that provides not only state-of-the-art NLP components, but also a user-friendly graphic user interface that can help users quickly build customized NLP pipelines for their individual applications. Our evaluation shows that the CLAMP default pipeline achieved good performance on named entity recognition and concept encoding. We also demonstrate the efficiency of the CLAMP graphic user interface in building customized, high-performance NLP pipelines with 2 use cases, extracting smoking status and lab test values. CLAMP is publicly available for research use, and we believe it is a unique asset for the clinical NLP community. © The Author 2017. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
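
    The two components highlighted in the evaluation, named entity recognition and concept encoding, can be illustrated with a toy dictionary lookup. This is not CLAMP's API or lexicon; the phrases, semantic types, and concept identifiers below are invented for the example.

```python
# Toy clinical NER + concept encoding via dictionary lookup (not CLAMP).
import re

LEXICON = {  # surface form -> (semantic type, illustrative concept identifier)
    "diabetes mellitus": ("PROBLEM", "C-0001"),
    "diabetes": ("PROBLEM", "C-0001"),
    "metformin": ("DRUG", "C-0002"),
    "hemoglobin a1c": ("LABTEST", "C-0003"),
}

def annotate(text):
    """Return (start, end, surface form, semantic type, concept code) tuples."""
    annotations = []
    lowered = text.lower()
    # Match longer lexicon entries first so "diabetes mellitus" wins over "diabetes".
    for phrase in sorted(LEXICON, key=len, reverse=True):
        for m in re.finditer(r"\b" + re.escape(phrase) + r"\b", lowered):
            if any(m.start() < e and s < m.end() for s, e, *_ in annotations):
                continue  # already covered by a longer phrase
            sem_type, code = LEXICON[phrase]
            annotations.append((m.start(), m.end(), text[m.start():m.end()], sem_type, code))
    return sorted(annotations)

note = "Patient with Diabetes Mellitus, started on metformin; check hemoglobin A1c in 3 months."
for start, end, surface, sem_type, code in annotate(note):
    print(f"[{start}:{end}] {surface!r} -> {sem_type} ({code})")
```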

  18. Gender differences in natural language factors of subjective intoxication in college students: an experimental vignette study.

    Science.gov (United States)

    Levitt, Ash; Schlauch, Robert C; Bartholow, Bruce D; Sher, Kenneth J

    2013-12-01

    Examining the natural language college students use to describe various levels of intoxication can provide important insight into subjective perceptions of college alcohol use. Previous research (Levitt et al., Alcohol Clin Exp Res 2009; 33: 448) has shown that intoxication terms reflect moderate and heavy levels of intoxication and that self-use of these terms differs by gender among college students. However, it is still unknown whether these terms similarly apply to other individuals and, if so, whether similar gender differences exist. To address these issues, the current study examined the application of intoxication terms to characters in experimentally manipulated vignettes of naturalistic drinking situations within a sample of university undergraduates (n = 145). Findings supported and extended previous research by showing that other-directed applications of intoxication terms are similar to self-directed applications and depend on the gender of both the target and the user. Specifically, moderate intoxication terms were applied to and from women more than men, even when the character was heavily intoxicated, whereas heavy intoxication terms were applied to and from men more than women. The findings suggest that gender differences in the application of intoxication terms are other-directed as well as self-directed and that intoxication language can inform gender-specific prevention and intervention efforts targeting problematic alcohol use among college students. Copyright © 2013 by the Research Society on Alcoholism.

  19. Acceptability, reliability, and validity of the Stroke and Aphasia Quality of Life Scale-39 (SAQOL-39) across languages: a systematic review.

    Science.gov (United States)

    Ahmadi, Akram; Tohidast, Seyed Abolfazl; Mansuri, Banafshe; Kamali, Mohammad; Krishnan, Gopee

    2017-09-01

    This systematic review aimed to explore the acceptability, reliability, and validity of the Stroke and Aphasia Quality of Life-39 (SAQOL-39) scale across languages. We employed a systematic search of online databases, including MEDLINE (PubMed), ScienceDirect, Web of Science, PsycINFO, Scopus, ProQuest, Google Scholar, and the Cochrane Library, for studies published between 2003 and 2016. We used PRISMA guidelines for conducting and reporting this review. Subsequently, screening of titles and abstracts, data extraction, and appraisal of the quality of relevant studies were carried out. The initial search returned 8185 studies. Subsequent screening and study selection processes narrowed them to 20, needing detailed review. A forward-backward translation scheme was the preferred method for translation of the SAQOL-39 from English to other languages. Mainly, socio-cultural and linguistic adaptations were performed in the translated versions. Most versions of the SAQOL-39 showed high test-retest reliability and internal consistency. However, several psychometric properties, including validity and responsiveness, were seldom reported in these versions. The SAQOL-39 scale showed high acceptability and reliability across the languages reviewed in this study. Future translations may additionally focus on reporting the validity and responsiveness of the instrument.

  20. Conceptual dissonance: evaluating the efficacy of natural language processing techniques for validating translational knowledge constructs.

    Science.gov (United States)

    Payne, Philip R O; Kwok, Alan; Dhaval, Rakesh; Borlawsky, Tara B

    2009-03-01

    The conduct of large-scale translational studies presents significant challenges related to the storage, management and analysis of integrative data sets. Ideally, the application of methodologies such as conceptual knowledge discovery in databases (CKDD) provides a means for moving beyond intuitive hypothesis discovery and testing in such data sets, and towards the high-throughput generation and evaluation of knowledge-anchored relationships between complex bio-molecular and phenotypic variables. However, the induction of such high-throughput hypotheses is non-trivial, and requires correspondingly high-throughput validation methodologies. In this manuscript, we describe an evaluation of the efficacy of a natural language processing-based approach to validating such hypotheses. As part of this evaluation, we will examine a phenomenon that we have labeled as "Conceptual Dissonance" in which conceptual knowledge derived from two or more sources of comparable scope and granularity cannot be readily integrated or compared using conventional methods and automated tools.

  1. From Imitation to Prediction, Data Compression vs Recurrent Neural Networks for Natural Language Processing

    Directory of Open Access Journals (Sweden)

    Juan Andres Laura

    2018-03-01

    Full Text Available In recent studies, recurrent neural networks have been used for generative processes, and their surprising performance can be explained by their ability to make good predictions. Data compression is likewise based on prediction. The question, then, is whether a data compressor could perform as well as recurrent neural networks on the natural language processing tasks of sentiment analysis and automatic text generation and, if so, whether a compression algorithm is in this sense even more intelligent than a neural network at such tasks. In the course of this work, a fundamental difference between a data compression algorithm and recurrent neural networks was discovered.
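
    A common way to make this comparison concrete is compression-based classification: a new text is assigned to the class whose training corpus compresses it most cheaply, which approximates a cross-entropy criterion. The sketch below uses gzip and a toy sentiment corpus; the paper's actual data and compressor choice may differ.

```python
# Compression-based sentiment classification with gzip (toy corpora, illustrative only).
import gzip

train = {
    "positive": " ".join([
        "a wonderful film with brilliant performances",
        "i loved every minute, truly great and moving",
        "excellent plot and beautiful cinematography",
    ]),
    "negative": " ".join([
        "a boring film with terrible acting",
        "i hated it, a complete waste of time",
        "awful plot and clumsy dialogue",
    ]),
}

def compressed_size(text: str) -> int:
    return len(gzip.compress(text.encode("utf-8")))

def classify(review: str) -> str:
    # Extra bytes needed to encode the review given each class corpus.
    costs = {label: compressed_size(corpus + " " + review) - compressed_size(corpus)
             for label, corpus in train.items()}
    return min(costs, key=costs.get)

for review in ["what a great and moving film", "a complete waste of time, terrible"]:
    print(f"{review!r} -> {classify(review)}")
```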

  2. On application of image analysis and natural language processing for music search

    Science.gov (United States)

    Gwardys, Grzegorz

    2013-10-01

    In this paper, I investigate the problem of finding the most similar music tracks using techniques popular in natural language processing, such as TF-IDF and LDA. Each music track is treated as a document and transformed into a spectrogram, which makes it possible to use well-known image techniques to obtain words from the images. I used the SURF operator to detect characteristic points and a novel approach for their description. Standard k-means was used for clustering; clustering here amounts to dictionary building, so spectrograms can then be transformed into text documents on which TF-IDF and LDA are performed. Finally, a query can be made in the resulting vector space. The research was done on 16 music tracks for training and 336 for testing, split into four categories: Hip-hop, Jazz, Metal, and Pop. Although the technique is completely unsupervised, the results are satisfactory and encourage further research.
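
    The bag-of-audio-words pipeline described above can be sketched compactly: spectrogram patches stand in for SURF keypoint descriptors, a k-means codebook turns patches into "words", and TF-IDF vectors are compared by cosine similarity. The synthetic tones, patch size, and codebook size are illustrative assumptions.

```python
# Bag-of-audio-words sketch: spectrogram patches -> k-means codebook -> TF-IDF -> similarity.
import numpy as np
from scipy.signal import spectrogram
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfTransformer
from sklearn.metrics.pairwise import cosine_similarity

rng = np.random.default_rng(0)
sr = 8000  # sample rate in Hz

def make_track(freqs, seconds=2.0):
    t = np.arange(int(sr * seconds)) / sr
    return sum(np.sin(2 * np.pi * f * t) for f in freqs) + 0.1 * rng.standard_normal(t.size)

tracks = [make_track([220, 440]), make_track([233, 466]),
          make_track([1200, 2400]), make_track([1100, 2300])]

def patch_descriptors(signal, patch=8):
    _, _, spec = spectrogram(signal, fs=sr, nperseg=256)
    spec = np.log1p(spec)
    # Slice the spectrogram into fixed-width time patches and flatten each one
    # (a simple stand-in for SURF keypoint descriptors).
    n = spec.shape[1] // patch
    return np.array([spec[:, i * patch:(i + 1) * patch].ravel() for i in range(n)])

descriptors = [patch_descriptors(tr) for tr in tracks]
codebook = KMeans(n_clusters=6, n_init=10, random_state=0).fit(np.vstack(descriptors))

# Bag-of-words histogram per track, then TF-IDF weighting and cosine similarity.
counts = np.array([np.bincount(codebook.predict(d), minlength=6) for d in descriptors])
tfidf = TfidfTransformer().fit_transform(counts)
print("Track-to-track similarity (rounded):")
print(np.round(cosine_similarity(tfidf), 2))
```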

  3. Natural Language Processing in Serious Games: A state of the art.

    Directory of Open Access Journals (Sweden)

    Davide Picca

    2015-09-01

    Full Text Available In the last decades, Natural Language Processing (NLP) has achieved a high level of success. Interactions between NLP and Serious Games have begun, and some Serious Games already include NLP techniques. The objectives of this paper are twofold: on the one hand, providing a simple framework to enable analysis of potential uses of NLP in Serious Games and, on the other hand, applying the NLP framework to existing Serious Games and giving an overview of the use of NLP in pedagogical Serious Games. In this paper we present 11 Serious Games exploiting NLP techniques. We present them systematically, according to the following structure: first, we highlight possible uses of NLP techniques in Serious Games; second, we describe the type of NLP implemented in each specific Serious Game; and third, we provide a link to possible purposes of use for the different actors interacting in the Serious Game.

  4. Harmonization and development of resources and tools for Italian natural language processing within the PARLI project

    CERN Document Server

    Bosco, Cristina; Delmonte, Rodolfo; Moschitti, Alessandro; Simi, Maria

    2015-01-01

    The papers collected in this volume are selected as a sample of the progress in Natural Language Processing (NLP) achieved within the Italian NLP community and especially attested by the PARLI project. PARLI (Portale per l’Accesso alle Risorse in Lingua Italiana) is a project partially funded by the Ministero Italiano per l’Università e la Ricerca (PRIN 2008) from 2008 to 2012 for monitoring and fostering the harmonious growth and coordination of the activities of Italian NLP. It was proposed by various teams of researchers working in Italian universities and research institutions. In keeping with the spirit of the PARLI project, most of the resources and tools created within the project and described here are freely distributed and did not end their life at the close of the project itself, in the hope that they will be a key factor in the future development of computational linguistics.

  5. Workshop on using natural language processing applications for enhancing clinical decision making: an executive summary.

    Science.gov (United States)

    Pai, Vinay M; Rodgers, Mary; Conroy, Richard; Luo, James; Zhou, Ruixia; Seto, Belinda

    2014-02-01

    In April 2012, the National Institutes of Health organized a two-day workshop entitled 'Natural Language Processing: State of the Art, Future Directions and Applications for Enhancing Clinical Decision-Making' (NLP-CDS). This report is a summary of the discussions during the second day of the workshop. Collectively, the workshop presenters and participants emphasized the need for unstructured clinical notes to be included in the decision making workflow and the need for individualized longitudinal data tracking. The workshop also discussed the need to: (1) combine evidence-based literature and patient records with machine-learning and prediction models; (2) provide trusted and reproducible clinical advice; (3) prioritize evidence and test results; and (4) engage healthcare professionals, caregivers, and patients. The overall consensus of the NLP-CDS workshop was that there are promising opportunities for NLP and CDS to deliver cognitive support for healthcare professionals, caregivers, and patients.

  6. Accurate Identification of Fatty Liver Disease in Data Warehouse Utilizing Natural Language Processing.

    Science.gov (United States)

    Redman, Joseph S; Natarajan, Yamini; Hou, Jason K; Wang, Jingqi; Hanif, Muzammil; Feng, Hua; Kramer, Jennifer R; Desiderio, Roxanne; Xu, Hua; El-Serag, Hashem B; Kanwal, Fasiha

    2017-10-01

    Natural language processing is a powerful technique of machine learning capable of maximizing data extraction from complex electronic medical records. We utilized this technique to develop algorithms capable of "reading" full-text radiology reports to accurately identify the presence of fatty liver disease. Abdominal ultrasound, computerized tomography, and magnetic resonance imaging reports were retrieved from the Veterans Affairs Corporate Data Warehouse from a random national sample of 652 patients. Radiographic fatty liver disease was determined by manual review by two physicians and verified with an expert radiologist. A split validation method was utilized for algorithm development. For all three imaging modalities, the algorithms could identify fatty liver disease with >90% recall and precision, with F-measures >90%. These algorithms could be used to rapidly screen patient records to establish a large cohort to facilitate epidemiological and clinical studies and examine the clinical course and outcomes of patients with radiographic hepatic steatosis.

  7. Optimizing annotation resources for natural language de-identification via a game theoretic framework.

    Science.gov (United States)

    Li, Muqun; Carrell, David; Aberdeen, John; Hirschman, Lynette; Kirby, Jacqueline; Li, Bo; Vorobeychik, Yevgeniy; Malin, Bradley A

    2016-06-01

    Electronic medical records (EMRs) are increasingly repurposed for activities beyond clinical care, such as to support translational research and public policy analysis. To mitigate privacy risks, healthcare organizations (HCOs) aim to remove potentially identifying patient information. A substantial quantity of EMR data is in natural language form and there are concerns that automated tools for detecting identifiers are imperfect and leak information that can be exploited by ill-intentioned data recipients. Thus, HCOs have been encouraged to invest as much effort as possible to find and detect potential identifiers, but such a strategy assumes the recipients are sufficiently incentivized and capable of exploiting leaked identifiers. In practice, such an assumption may not hold true and HCOs may overinvest in de-identification technology. The goal of this study is to design a natural language de-identification framework, rooted in game theory, which enables an HCO to optimize their investments given the expected capabilities of an adversarial recipient. We introduce a Stackelberg game to balance risk and utility in natural language de-identification. This game represents a cost-benefit model that enables an HCO with a fixed budget to minimize their investment in the de-identification process. We evaluate this model by assessing the overall payoff to the HCO and the adversary using 2100 clinical notes from Vanderbilt University Medical Center. We simulate several policy alternatives using a range of parameters, including the cost of training a de-identification model and the loss in data utility due to the removal of terms that are not identifiers. In addition, we compare policy options where, when an attacker is fined for misuse, a monetary penalty is paid to the publishing HCO as opposed to a third party (e.g., a federal regulator). Our results show that when an HCO is forced to exhaust a limited budget (set to $2000 in the study), the precision and recall of the
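
    The cost-benefit logic of the Stackelberg formulation can be illustrated with a toy enumeration: the HCO (leader) picks an investment level, the adversary (follower) attacks only when the expected gain exceeds the attack cost, and the HCO keeps the investment with the best payoff. Every number and functional form below is an invented assumption, not a value from the study.

```python
# Toy Stackelberg-style cost-benefit enumeration for de-identification investment.
investment_levels = [0, 500, 1000, 1500, 2000]   # dollars spent on de-identification

def residual_leak_rate(investment):
    """Fraction of identifiers assumed to survive de-identification."""
    return 0.30 / (1.0 + investment / 400.0)

def data_utility(investment):
    """Utility assumed to be lost as more (possibly non-identifying) terms are removed."""
    return 1000.0 - 0.15 * investment

ATTACK_COST = 300.0
GAIN_PER_LEAK = 4000.0      # adversary's expected gain if identifiers remain
PENALTY_TO_HCO = 5000.0     # HCO's expected loss if re-identification occurs

def adversary_attacks(investment):
    # Follower's best response: attack only if it pays off in expectation.
    return residual_leak_rate(investment) * GAIN_PER_LEAK - ATTACK_COST > 0

def hco_payoff(investment):
    payoff = data_utility(investment) - investment
    if adversary_attacks(investment):
        payoff -= residual_leak_rate(investment) * PENALTY_TO_HCO
    return payoff

best = max(investment_levels, key=hco_payoff)
for level in investment_levels:
    print(f"invest ${level:>4}: attacked={adversary_attacks(level)!s:>5}, payoff={hco_payoff(level):8.1f}")
print(f"Leader's optimal investment: ${best}")
```

    With these particular made-up numbers an intermediate budget wins, which mirrors the paper's point that exhausting the full de-identification budget is not automatically the optimal policy.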

  8. Generation of Natural-Language Textual Summaries from Longitudinal Clinical Records.

    Science.gov (United States)

    Goldstein, Ayelet; Shahar, Yuval

    2015-01-01

    Physicians are required to interpret, abstract and present in free-text large amounts of clinical data in their daily tasks. This is especially true for chronic-disease domains, but holds also in other clinical domains. We have recently developed a prototype system, CliniText, which, given a time-oriented clinical database, and appropriate formal abstraction and summarization knowledge, combines the computational mechanisms of knowledge-based temporal data abstraction, textual summarization, abduction, and natural-language generation techniques, to generate an intelligent textual summary of longitudinal clinical data. We demonstrate our methodology, and the feasibility of providing a free-text summary of longitudinal electronic patient records, by generating summaries in two very different domains - Diabetes Management and Cardiothoracic surgery. In particular, we explain the process of generating a discharge summary of a patient who had undergone a Coronary Artery Bypass Graft operation, and a brief summary of the treatment of a diabetes patient for five years.

  9. A Natural Language Intelligent Tutoring System for Training Pathologists - Implementation and Evaluation

    Science.gov (United States)

    El Saadawi, Gilan M.; Tseytlin, Eugene; Legowski, Elizabeth; Jukic, Drazen; Castine, Melissa; Fine, Jeffrey; Gormley, Robert; Crowley, Rebecca S.

    2009-01-01

    Introduction We developed and evaluated a Natural Language Interface (NLI) for an Intelligent Tutoring System (ITS) in Diagnostic Pathology. The system teaches residents to examine pathologic slides and write accurate pathology reports while providing immediate feedback on errors they make in their slide review and diagnostic reports. Residents can ask for help at any point in the case, and will receive context-specific feedback. Research Questions We evaluated (1) the performance of our natural language system, (2) the effect of the system on learning, (3) the effect of feedback timing on learning gains, and (4) the effect of ReportTutor on performance-to-self-assessment correlations. Methods The study uses a crossover 2×2 factorial design. We recruited 20 subjects from 4 academic programs. Subjects were randomly assigned to one of the four conditions - two conditions for the immediate interface, and two for the delayed interface. An expert dermatopathologist created a reference standard, and 2 board-certified AP/CP pathology fellows manually coded the residents' assessment reports. Subjects were given the opportunity to self-grade their performance, and we used a survey to determine student response to both interfaces. Results Our results show a highly significant improvement in report writing after one tutoring session, with a 4-fold increase in learning gains with both interfaces but no effect of feedback timing on performance gains. Residents who used the immediate feedback interface first experienced a feature learning gain that is correlated with the number of cases they viewed. There was no correlation between performance and self-assessment in either condition. PMID:17934789

  10. LABORATORY PROCESS CONTROLLER USING NATURAL LANGUAGE COMMANDS FROM A PERSONAL COMPUTER

    Science.gov (United States)

    Will, H.

    1994-01-01

    The complex environment of the typical research laboratory requires flexible process control. This program provides natural language process control from an IBM PC or compatible machine. Sometimes process control schedules require changes frequently, even several times per day. These changes may include adding, deleting, and rearranging steps in a process. This program sets up a process control system that can either run without an operator, or be run by workers with limited programming skills. The software system includes three programs. Two of the programs, written in FORTRAN77, record data and control research processes. The third program, written in Pascal, generates the FORTRAN subroutines used by the other two programs to identify the user commands with the user-written device drivers. The software system also includes an input data set which allows the user to define the user commands which are to be executed by the computer. To set the system up the operator writes device driver routines for all of the controlled devices. Once set up, this system requires only an input file containing natural language command lines which tell the system what to do and when to do it. The operator can make up custom commands for operating and taking data from external research equipment at any time of the day or night without the operator in attendance. This process control system requires a personal computer operating under MS-DOS with suitable hardware interfaces to all controlled devices. The program requires a FORTRAN77 compiler and user-written device drivers. This program was developed in 1989 and has a memory requirement of about 62 Kbytes.
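
    The sketch below illustrates the same idea in miniature: natural-language-style command lines from a schedule are matched against user-configurable patterns and dispatched to user-written "device driver" functions. The devices, patterns, and schedule format are invented for illustration; the original system was implemented in FORTRAN77 and Pascal rather than Python.

```python
# Toy natural-language command interpreter dispatching to user-written device drivers.
import re

def set_heater(temperature):
    print(f"[driver] heater setpoint -> {temperature} C")

def read_pressure(channel):
    print(f"[driver] reading pressure on channel {channel}")

def open_valve(name):
    print(f"[driver] opening valve {name}")

# User-configurable mapping from command phrasing to driver calls.
COMMANDS = [
    (re.compile(r"set the heater to (?P<temperature>\d+) degrees", re.I),
     lambda m: set_heater(int(m["temperature"]))),
    (re.compile(r"read pressure (?:from|on) channel (?P<channel>\d+)", re.I),
     lambda m: read_pressure(int(m["channel"]))),
    (re.compile(r"open (?:the )?(?P<name>\w+) valve", re.I),
     lambda m: open_valve(m["name"])),
]

def run_schedule(lines):
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        for pattern, action in COMMANDS:
            match = pattern.search(line)
            if match:
                action(match)
                break
        else:
            print(f"[warning] no command matched: {line!r}")

schedule = """
# example process schedule
set the heater to 80 degrees
open the nitrogen valve
read pressure on channel 3
""".splitlines()

run_schedule(schedule)
```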

  11. Building an ontology of pulmonary diseases with natural language processing tools using textual corpora.

    Science.gov (United States)

    Baneyx, Audrey; Charlet, Jean; Jaulent, Marie-Christine

    2007-01-01

    Pathologies and acts are classified in thesauri to help physicians code their activity. In practice, the use of thesauri is not sufficient to reduce variability in coding, and thesauri are not suitable for computer processing. We think the automation of the coding task requires a conceptual model of medical items: an ontology. Our task is to help lung specialists code acts and diagnoses with software that represents the medical knowledge of this specialty as an ontology. The objective of the reported work was to build an ontology of pulmonary diseases dedicated to the coding process. To carry out this objective, we developed a precise methodological process enabling the knowledge engineer to build various types of medical ontologies. This process is based on the need to express precisely in natural language the meaning of each concept, using differential semantics principles. A differential ontology is a hierarchy of concepts and relationships organized according to their similarities and differences. Our main research hypothesis is that natural language processing tools applied to corpora can provide the resources needed to build the ontology. We consider two corpora, one composed of patient discharge summaries and the other a teaching book. We propose to combine two approaches to enrich the ontology building: (i) a method which consists of building terminological resources through distributional analysis, and (ii) a method based on the observation of corpus sequences in order to reveal semantic relationships. Our ontology currently includes 1550 concepts, and the software implementing the coding process is still under development. Results show that the proposed approach is operational and indicate that the combination of these methods and the comparison of the resulting terminological structures give interesting clues to a knowledge engineer for the building of an ontology.

  12. Creation of a simple natural language processing tool to support an imaging utilization quality dashboard.

    Science.gov (United States)

    Swartz, Jordan; Koziatek, Christian; Theobald, Jason; Smith, Silas; Iturrate, Eduardo

    2017-05-01

    Testing for venous thromboembolism (VTE) is associated with cost and risk to patients (e.g. radiation). To assess the appropriateness of imaging utilization at the provider level, it is important to know that provider's diagnostic yield (percentage of tests positive for the diagnostic entity of interest). However, determining diagnostic yield typically requires either time-consuming, manual review of radiology reports or the use of complex and/or proprietary natural language processing software. The objectives of this study were twofold: 1) to develop and implement a simple, user-configurable, and open-source natural language processing tool to classify radiology reports with high accuracy and 2) to use the results of the tool to design a provider-specific VTE imaging dashboard, consisting of both utilization rate and diagnostic yield. Two physicians reviewed a training set of 400 lower extremity ultrasound (UTZ) and computed tomography pulmonary angiogram (CTPA) reports to understand the language used in VTE-positive and VTE-negative reports. The insights from this review informed the arguments to the five modifiable parameters of the NLP tool. A validation set of 2,000 studies was then independently classified by the reviewers and by the tool; the classifications were compared and the performance of the tool was calculated. The tool was highly accurate in classifying the presence and absence of VTE for both the UTZ (sensitivity 95.7%; 95% CI 91.5-99.8, specificity 100%; 95% CI 100-100) and CTPA reports (sensitivity 97.1%; 95% CI 94.3-99.9, specificity 98.6%; 95% CI 97.8-99.4). The diagnostic yield was then calculated at the individual provider level and the imaging dashboard was created. We have created a novel NLP tool designed for users without a background in computer programming, which has been used to classify venous thromboembolism reports with a high degree of accuracy. The tool is open-source and available for download at http
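
    A minimal, user-configurable classifier in the same spirit might look like the sketch below (this is not the authors' open-source tool). A report is called VTE-positive when a target term appears without a nearby negation cue; the keyword lists and negation window are the tunable parameters, and sensitivity/specificity are computed against a small hand-labeled set.

```python
# Configurable keyword-plus-negation report classifier with sensitivity/specificity
# (illustrative only; not the tool described in the paper).
import re

CONFIG = {
    "target_terms": ["deep vein thrombosis", "dvt", "pulmonary embolism", "thrombus"],
    "negation_cues": ["no", "without", "negative for", "no evidence of"],
    "negation_window": 6,   # number of words before the target to search for a cue
}

def classify(report: str) -> bool:
    text = " ".join(re.findall(r"[a-z]+", report.lower()))
    for term in CONFIG["target_terms"]:
        for m in re.finditer(r"\b" + re.escape(term) + r"\b", text):
            preceding = text[:m.start()].split()[-CONFIG["negation_window"]:]
            window = " " + " ".join(preceding) + " "
            if not any(f" {cue} " in window for cue in CONFIG["negation_cues"]):
                return True
    return False

labeled_reports = [
    ("Acute pulmonary embolism in the right lower lobe.", True),
    ("No evidence of deep vein thrombosis in either leg.", False),
    ("Nonocclusive thrombus within the left popliteal vein.", True),
    ("Negative for DVT. Normal compressibility throughout.", False),
]

tp = sum(classify(r) and y for r, y in labeled_reports)
fn = sum(not classify(r) and y for r, y in labeled_reports)
tn = sum(not classify(r) and not y for r, y in labeled_reports)
fp = sum(classify(r) and not y for r, y in labeled_reports)
print(f"sensitivity = {tp / (tp + fn):.2f}, specificity = {tn / (tn + fp):.2f}")
```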

  13. A UMLS-based spell checker for natural language processing in vaccine safety

    Directory of Open Access Journals (Sweden)

    Liu Fang

    2007-02-01

    Full Text Available Background The Institute of Medicine has identified patient safety as a key goal for health care in the United States. Detecting vaccine adverse events is an important public health activity that contributes to patient safety. Reports about adverse events following immunization (AEFI) from surveillance systems contain free-text components that can be analyzed using natural language processing. To extract Unified Medical Language System (UMLS) concepts from free text and classify AEFI reports based on the concepts they contain, we first needed to clean the text by expanding abbreviations and shortcuts and correcting spelling errors. Our objective in this paper was to create a UMLS-based spelling error correction tool as a first step in the natural language processing (NLP) pipeline for AEFI reports. Methods We developed spell checking algorithms using open source tools. We used de-identified AEFI surveillance reports to create free-text data sets for analysis. After expansion of abbreviated clinical terms and shortcuts, we performed spelling correction in four steps: (1) error detection, (2) word list generation, (3) word list disambiguation and (4) error correction. We then measured the performance of the resulting spell checker by comparing it to manual correction. Results We used 12,056 words to train the spell checker and tested its performance on 8,131 words. During testing, sensitivity, specificity, and positive predictive value (PPV) for the spell checker were 74% (95% CI: 74–75), 100% (95% CI: 100–100), and 47% (95% CI: 46–48), respectively. Conclusion We created a prototype spell checker that can be used to process AEFI reports. We used the UMLS Specialist Lexicon as the primary source of dictionary terms and the WordNet lexicon as a secondary source. We used the UMLS as a domain-specific source of dictionary terms to compare potentially misspelled words in the corpus. The prototype sensitivity was comparable to currently available
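
    The four correction steps can be sketched with standard-library tools, with a tiny dictionary and frequency table standing in for the UMLS SPECIALIST Lexicon and WordNet resources; the example below is illustrative only.

```python
# Four-step spelling correction sketch: (1) detect out-of-dictionary tokens,
# (2) generate candidates, (3) disambiguate by corpus frequency, (4) correct.
# The dictionary and frequencies are stand-ins for the UMLS/WordNet resources.
import difflib
import re
from collections import Counter

DICTIONARY = {"patient", "fever", "rash", "injection", "site", "swelling", "swilling",
              "vaccine", "received", "developed", "after", "and", "the", "at"}
CORPUS_FREQ = Counter({"swelling": 40, "swilling": 1, "fever": 55, "developed": 25})

def correct_report(text: str) -> str:
    tokens = re.findall(r"[A-Za-z]+|[^A-Za-z]+", text)  # alternating word / non-word runs
    corrected = []
    for tok in tokens:
        word = tok.lower()
        if not word.isalpha() or word in DICTIONARY:                      # (1) detection
            corrected.append(tok)
            continue
        candidates = difflib.get_close_matches(word, DICTIONARY, n=3, cutoff=0.75)  # (2)
        if not candidates:
            corrected.append(tok)
            continue
        best = max(candidates, key=lambda c: CORPUS_FREQ.get(c, 0))       # (3) disambiguation
        corrected.append(best)                                            # (4) correction
    return "".join(corrected)

print(correct_report("Patient developped fevr and sweling at the injection site."))
```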

  14. Initial Assessment for K-12 English Language Support in Six Countries: Revisiting the Validity-Reliability Paradox

    Science.gov (United States)

    Sinclair, Jeanne; Lau, Clarissa

    2018-01-01

    It is common practice for K-12 schools to assess multilingual students' language proficiency to determine language support program placement. Because such programs can provide essential scaffolding, the policies guiding these assessments merit careful consideration. It is well accepted that quality assessments must be valid (representative of the…

  15. Basic Concepts in Classical Test Theory: Tests Aren't Reliable, the Nature of Alpha, and Reliability Generalization as a Meta-analytic Method.

    Science.gov (United States)

    Helms, LuAnn Sherbeck

    This paper discusses the fact that reliability is about scores and not tests and how reliability limits effect sizes. The paper also explores the classical reliability coefficients of stability, equivalence, and internal consistency. Stability is concerned with how stable test scores will be over time, while equivalence addresses the relationship…

  16. Ensuring Reliable Natural Gas-Fired Generation with Fuel Contracts and Storage - DOE/NETL-2017/1816

    Energy Technology Data Exchange (ETDEWEB)

    Myles, Paul T. [National Energy Technology Lab. (NETL), Albany, OR (United States); Labarbara, Kirk A. [National Energy Technology Lab. (NETL), Albany, OR (United States); Logan, Cecilia Elise [National Energy Technology Lab. (NETL), Albany, OR (United States)

    2017-11-17

    This report finds that natural gas-fired power plants purchase fuel both on the spot market and through firm supply contracts; there do not appear to be clear drivers propelling power plants toward one or the other type. Most natural gas-fired power generators are located near major natural gas transmission pipelines, and most natural gas contracts are currently procured on the spot market. Although there is some regional variation in the type of contract used, a strong regional pattern does not emerge. Whether gas prices are higher with spot or firm contracts varies by both region and year. Natural gas prices that push the generators higher in the supply curve would make them less likely to dispatch. Most of the natural gas generators discussed in this report would be unlikely to enter firm contracts if the agreed price would decrease their dispatch frequency. The price points at which these generators would be unlikely to enter a firm contract depends upon the region that the generator is in, and how dependent that region is on natural gas. The Electric Reliability Council of Texas (ERCOT) is more dependent on natural gas than either Eastern Interconnection or Western Interconnection. This report shows that above-ground storage is prohibitively expensive with respect to providing storage for an extended operational fuel reserve comparable to the amount of on-site fuel storage used for coal-fired plants. Further, both pressurized and atmospheric tanks require a significant amount of land for storage, even to support one day’s operation at full output. Underground storage offers the only viable option for 30-day operational storage of natural gas, and that is limited by the location of suitable geologic formations and depleted fields.

  17. Integrating Multi-Purpose Natural Language Understanding, Robot's Memory, and Symbolic Planning for Task Execution in Humanoid Robots

    DEFF Research Database (Denmark)

    Wächter, Mirko; Ovchinnikova, Ekaterina; Wittenbeck, Valerij

    2017-01-01

    We propose an approach for instructing a robot using natural language to solve complex tasks in a dynamic environment. In this study, we elaborate on a framework that allows a humanoid robot to understand natural language, derive symbolic representations of its sensorimotor experience, generate....... The framework is implemented within the robot development environment ArmarX. We evaluate the framework on the humanoid robot ARMAR-III in the context of two experiments: a demonstration of the real execution of a complex task in the kitchen environment on ARMAR-III and an experiment with untrained users...

  18. Classifying a Person's Degree of Accessibility From Natural Body Language During Social Human-Robot Interactions.

    Science.gov (United States)

    McColl, Derek; Jiang, Chuan; Nejat, Goldie

    2017-02-01

    For social robots to be successfully integrated and accepted within society, they need to be able to interpret human social cues that are displayed through natural modes of communication. In particular, a key challenge in the design of social robots is developing the robot's ability to recognize a person's affective states (emotions, moods, and attitudes) in order to respond appropriately during social human-robot interactions (HRIs). In this paper, we present and discuss social HRI experiments we have conducted to investigate the development of an accessibility-aware social robot able to autonomously determine a person's degree of accessibility (rapport, openness) toward the robot based on the person's natural static body language. In particular, we present two one-on-one HRI experiments to: 1) determine the performance of our automated system in being able to recognize and classify a person's accessibility levels and 2) investigate how people interact with an accessibility-aware robot which determines its own behaviors based on a person's speech and accessibility levels.

  19. Genes, language, and the nature of scientific explanations: the case of Williams syndrome.

    Science.gov (United States)

    Musolino, Julien; Landau, Barbara

    2012-01-01

    In this article, we discuss two experiments of nature and their implications for the sciences of the mind. The first, Williams syndrome, bears on one of cognitive science's holy grails: the possibility of unravelling the causal chain between genes and cognition. We sketch the outline of a general framework to study the relationship between genes and cognition, focusing as our case study on the development of language in individuals with Williams syndrome. Our approach emphasizes the role of three key ingredients: the need to specify a clear level of analysis, the need to provide a theoretical account of the relevant cognitive structure at that level, and the importance of the (typical) developmental process itself. The promise offered by the case of Williams syndrome has also given rise to two strongly conflicting theoretical approaches-modularity and neuroconstructivism-themselves offshoots of a perennial debate between nativism and empiricism. We apply our framework to explore the tension created by these two conflicting perspectives. To this end, we discuss a second experiment of nature, which allows us to compare the two competing perspectives in what comes close to a controlled experimental setting. From this comparison, we conclude that the "meaningful debate assumption", a widespread assumption suggesting that neuroconstructivism and modularity address the same questions and represent genuine theoretical alternatives, rests on a fallacy.

  20. Validity and reliability of the Spanish-language version of the self-administered Leeds Assessment of Neuropathic Symptoms and Signs (S-LANSS) pain scale.

    Science.gov (United States)

    López-de-Uralde-Villanueva, I; Gil-Martínez, A; Candelas-Fernández, P; de Andrés-Ares, J; Beltrán-Alacreu, H; La Touche, R

    2016-12-08

    The self-administered Leeds Assessment of Neuropathic Symptoms and Signs (S-LANSS) scale is a tool designed to identify patients with pain with neuropathic features. To assess the validity and reliability of the Spanish-language version of the S-LANSS scale. Our study included a total of 182 patients with chronic pain to assess the convergent and discriminant validity of the S-LANSS; the sample was increased to 321 patients to evaluate construct validity and reliability. The validated Spanish-language version of the ID-Pain questionnaire was used as the criterion variable. All participants completed the ID-Pain, the S-LANSS, and the Numerical Rating Scale for pain. Discriminant validity was evaluated by analysing sensitivity, specificity, and the area under the receiver operating characteristic curve (AUC). Construct validity was assessed with factor analysis and by comparing the odds ratio of each S-LANSS item to the total score. Convergent validity and reliability were evaluated with Pearson's r and Cronbach's alpha, respectively. The optimal cut-off point for S-LANSS was ≥12 points (AUC=.89; sensitivity=88.7; specificity=76.6). Factor analysis yielded one factor; furthermore, all items contributed significantly to the positive total score on the S-LANSS (P<.05). The S-LANSS showed a significant correlation with ID-Pain (r=.734, α=.71). The Spanish-language version of the S-LANSS is valid and reliable for identifying patients with chronic pain with neuropathic features. Copyright © 2016 Sociedad Española de Neurología. Publicado por Elsevier España, S.L.U. All rights reserved.

  1. Automatic Lung-RADS™ classification with a natural language processing system.

    Science.gov (United States)

    Beyer, Sebastian E; McKee, Brady J; Regis, Shawn M; McKee, Andrea B; Flacke, Sebastian; El Saadawi, Gilan; Wald, Christoph

    2017-09-01

    Our aim was to train a natural language processing (NLP) algorithm to capture imaging characteristics of lung nodules reported in a structured CT report and suggest the applicable Lung-RADS™ (LR) category. Our study included structured clinical reports of consecutive CT lung screening (CTLS) exams performed from 08/2014 to 08/2015 at an ACR-accredited Lung Cancer Screening Center. All patients screened were at high risk for lung cancer according to the NCCN Guidelines®. All exams were interpreted by one of three radiologists credentialed to read CTLS exams with LR using a standard reporting template. Training and test sets consisted of consecutive exams. Lung screening exams were divided into two groups: three training sets (500, 120, and 383 reports) and one final evaluation set (498 reports). NLP algorithm results were compared with the gold standard of the LR category assigned by the radiologist. The sensitivity/specificity of the NLP algorithm to correctly assign LR categories for suspicious nodules (LR 4) and positive nodules (LR 3/4) were 74.1%/98.6% and 75.0%/98.8%, respectively. The majority of mismatches occurred in cases where pulmonary findings not currently addressed by LR were present. Misclassifications also resulted from the failure to identify exams as follow-up and the failure to completely characterize part-solid nodules. In a sub-group analysis among structured reports with standardized language, the sensitivity and specificity to detect LR 4 nodules were 87.0% and 99.5%, respectively. An NLP system can accurately suggest the appropriate LR category from CTLS exam findings when standardized reporting is used.

  2. A Discussion about Upgrading the Quick Script Platform to Create Natural Language based IoT Systems

    DEFF Research Database (Denmark)

    Khanna, Anirudh; Das, Bhagwan; Pandey, Bishwajeet

    2016-01-01

    With the advent of AI and IoT, the idea of incorporating smart things/appliances into our day-to-day life is becoming a reality. The paper discusses the possibilities and potential of designing IoT systems which can be controlled via natural language, with the help of Quick Script as a development...

  3. Automated assessment of patients' self-narratives for posttraumatic stress disorder screening using natural language processing and text mining

    NARCIS (Netherlands)

    He, Qiwei; Veldkamp, Bernard P.; Glas, Cornelis A.W.; de Vries, Theo

    2017-01-01

    Patients’ narratives about traumatic experiences and symptoms are useful in clinical screening and diagnostic procedures. In this study, we presented an automated assessment system to screen patients for posttraumatic stress disorder via a natural language processing and text-mining approach. Four

  4. AIED 2009 Workshops Proceeedings Volume 10: Natural Language Processing in Support of Learning: Metrics, Feedback and Connectivity

    NARCIS (Netherlands)

    Dessus, Philippe; Trausan-Matu, Stefan; Van Rosmalen, Peter; Wild, Fridolin

    2009-01-01

    Dessus, P., Trausan-Matu, S., Van Rosmalen, P., & Wild, F. (Eds.) (2009). AIED 2009 Workshops Proceedings Volume 10 Natural Language Processing in Support of Learning: Metrics, Feedback and Connectivity. In S. D. Craig & D. Dicheva (Eds.), AIED 2009: 14th International Conference in Artificial

  5. It Is Incorrect To Say "The Test Is Reliable": Bad Language Habits Can Contribute to Incorrect or Meaningless Research Conclusions.

    Science.gov (United States)

    Thompson, Bruce

    Researchers too frequently fail to consider the reliability of the scores they analyze, and this may lead to incorrect conclusions. Practice in this regard may be negatively influenced by telegraphic habits of speech implying that tests possess reliability and other measurement characteristics. Styles of speaking in journal articles, in textbooks, and in…

  6. Voice-enabled Knowledge Engine using Flood Ontology and Natural Language Processing

    Science.gov (United States)

    Sermet, M. Y.; Demir, I.; Krajewski, W. F.

    2015-12-01

    The Iowa Flood Information System (IFIS) is a web-based platform developed by the Iowa Flood Center (IFC) to provide access to flood inundation maps, real-time flood conditions, flood forecasts, flood-related data, information and interactive visualizations for communities in Iowa. The IFIS is designed for use by the general public, often people with no domain knowledge and limited general science background. To improve effective communication with such an audience, we have introduced a voice-enabled knowledge engine on flood-related issues in IFIS. Instead of navigating within many features and interfaces of the information system and web-based sources, the system provides dynamic computations based on a collection of built-in data, analysis, and methods. The IFIS Knowledge Engine connects to real-time stream gauges, in-house data sources, and analysis and visualization tools to answer natural language questions. Our goal is to systematize data and modeling results on flood-related issues in Iowa and to provide an interface for definitive answers to factual queries. The goal of the knowledge engine is to make all flood-related knowledge in Iowa easily accessible to everyone and to support voice-enabled natural language input. We aim to integrate and curate all flood-related data, implement analytical and visualization tools, and make it possible to compute answers from questions. The IFIS explicitly implements analytical methods and models as algorithms, and curates all flood-related data and resources so that all these resources are computable. The IFIS Knowledge Engine computes the answer by deriving it from its computational knowledge base. The knowledge engine processes the statement, accesses the data warehouse, runs complex database queries on the server side, and returns outputs in various formats. This presentation provides an overview of the IFIS Knowledge Engine, its unique information interface and functionality as an educational tool, and discusses the future plans
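
    A minimal sketch, under assumed table and gauge names, of the general pattern such a knowledge engine follows: match an intent in the question, parameterize a query against the data store, and return a formatted answer. None of the identifiers below belong to the actual IFIS API.

```python
import re
import sqlite3

# Hypothetical gauge table standing in for the IFIS data warehouse.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE stage (gauge TEXT, stage_ft REAL)")
conn.executemany("INSERT INTO stage VALUES (?, ?)",
                 [("Iowa City", 12.4), ("Cedar Rapids", 9.1)])

def answer(question: str) -> str:
    """Very small intent matcher: 'what is the stage at <gauge>?'"""
    m = re.search(r"stage (?:at|in) ([A-Za-z ]+)\??$", question, re.IGNORECASE)
    if not m:
        return "Sorry, I can only answer river-stage questions in this sketch."
    gauge = m.group(1).strip().title()
    row = conn.execute("SELECT stage_ft FROM stage WHERE gauge = ?", (gauge,)).fetchone()
    return f"The current stage at {gauge} is {row[0]} ft." if row else f"No data for {gauge}."

print(answer("What is the stage at Iowa City?"))
```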

  7. Reliability and Validity of the English-, Chinese- and Malay-Language Versions of the World Health Organization Quality of Life (WHOQOL-BREF) Questionnaire in Singapore.

    Science.gov (United States)

    Cheung, Yin Bun; Yeo, Khung Keong; Chong, Kok Joon; Khoo, Eric Yh; Wee, Hwee Lin

    2017-12-01

    The World Health Organization Quality of Life (WHOQOL-BREF) questionnaire is a 26-item questionnaire that evaluates 4 domains of quality of life (QoL), namely Physical, Psychological, Social Relationships and Environment. This study aimed to evaluate the validity and reliability of the WHOQOL-BREF among Singapore residents aged 21 and above. We recruited participants from the general population by using multistage cluster sampling and participants from 2 hospitals by using convenience sampling. Participants completed either English, Chinese or Malay versions of the WHOQOL-BREF and the EuroQoL 5 Dimension 5 Levels (EQ-5D-5L) questionnaires. Confirmatory factor analysis, known-group validity, internal consistency (Cronbach's alpha) and test-retest reliability using the intraclass correlation coefficient (ICC) were performed. Data from 1316 participants were analysed (Chinese: 46.9%, Malay: 41.0% and Indian: 11.7%; 57.5%; mean [SD, range] age: 51.9 [15.68, 24 to 90] years); 154 participants took part in the retest in various languages (English: 60, Chinese: 49 and Malay: 45). The Tucker-Lewis Index (TLI) was 0.919, 0.913 and 0.909 for the English, Chinese and Malay versions, respectively. Cronbach's alpha exceeded 0.7 and ICC exceeded 0.4 for all domains in all language versions. The WHOQOL-BREF is valid and reliable for assessing QoL in Singapore. Model fit is reasonable with room for improvement.
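
    For readers unfamiliar with the internal-consistency statistic reported here, the sketch below computes Cronbach's alpha for one domain from a respondent-by-item matrix; the responses are simulated, not WHOQOL-BREF data.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of scores for one domain."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Illustrative 1-5 Likert responses for a 4-item domain (not study data).
rng = np.random.default_rng(0)
base = rng.integers(1, 6, size=(50, 1))
responses = np.clip(base + rng.integers(-1, 2, size=(50, 4)), 1, 5)
print(f"Cronbach's alpha = {cronbach_alpha(responses):.2f}")
```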

  8. Influences of High-Level Features, Gaze, and Scene Transitions on the Reliability of BOLD Responses to Natural Movie Stimuli

    Science.gov (United States)

    Lu, Kun-Han; Hung, Shao-Chin; Wen, Haiguang; Marussich, Lauren; Liu, Zhongming

    2016-01-01

    Complex, sustained, dynamic, and naturalistic visual stimulation can evoke distributed brain activities that are highly reproducible within and across individuals. However, the precise origins of such reproducible responses remain incompletely understood. Here, we employed concurrent functional magnetic resonance imaging (fMRI) and eye tracking to investigate the experimental and behavioral factors that influence fMRI activity and its intra- and inter-subject reproducibility during repeated movie stimuli. We found that widely distributed and highly reproducible fMRI responses were attributed primarily to the high-level natural content in the movie. In the absence of such natural content, low-level visual features alone in a spatiotemporally scrambled control stimulus evoked significantly reduced degree and extent of reproducible responses, which were mostly confined to the primary visual cortex (V1). We also found that the varying gaze behavior affected the cortical response at the peripheral part of V1 and in the oculomotor network, with minor effects on the response reproducibility over the extrastriate visual areas. Lastly, scene transitions in the movie stimulus due to film editing partly caused the reproducible fMRI responses at widespread cortical areas, especially along the ventral visual pathway. Therefore, the naturalistic nature of a movie stimulus is necessary for driving highly reliable visual activations. In a movie-stimulation paradigm, scene transitions and individuals’ gaze behavior should be taken as potential confounding factors in order to properly interpret cortical activity that supports natural vision. PMID:27564573

  9. Toward a reliable gaze-independent hybrid BCI combining visual and natural auditory stimuli.

    Science.gov (United States)

    Barbosa, Sara; Pires, Gabriel; Nunes, Urbano

    2016-03-01

    Brain computer interfaces (BCIs) are one of the last communication options for patients in the locked-in state (LIS). For complete LIS patients, interfaces must be gaze-independent due to their eye impairment. However, unimodal gaze-independent approaches typically present levels of performance substantially lower than gaze-dependent approaches. The combination of multimodal stimuli has been pointed out as a viable way to increase users' performance. A hybrid visual and auditory (HVA) P300-based BCI combining simultaneous visual and auditory stimulation is proposed. Auditory stimuli are based on natural meaningful spoken words, increasing stimuli discrimination and decreasing the user's mental effort in associating stimuli to the symbols. The visual part of the interface is covertly controlled, ensuring gaze-independency. Four conditions were experimentally tested by 10 healthy participants: visual overt (VO), visual covert (VC), auditory (AU) and covert HVA. Average online accuracy for the hybrid approach was 85.3%, which is more than 32% over the VC and AU approaches. Questionnaire results indicate that the HVA approach was the least demanding gaze-independent interface. Interestingly, the P300 grand average for the HVA approach coincides with an almost perfect sum of the P300 evoked separately by the VC and AU tasks. The proposed HVA-BCI is the first solution simultaneously embedding natural spoken words and visual words to provide a communication lexicon. Online accuracy and task demand of the approach compare favorably with the state of the art. The proposed approach shows that the simultaneous combination of visual covert control and auditory modalities can effectively improve the performance of gaze-independent BCIs. Copyright © 2015 Elsevier B.V. All rights reserved.

  10. An Early Years Toolbox for Assessing Early Executive Function, Language, Self-Regulation, and Social Development: Validity, Reliability, and Preliminary Norms.

    Science.gov (United States)

    Howard, Steven J; Melhuish, Edward

    2017-06-01

    Several methods of assessing executive function (EF), self-regulation, language development, and social development in young children have been developed over previous decades. Yet new technologies make available methods of assessment not previously considered. In resolving conceptual and pragmatic limitations of existing tools, the Early Years Toolbox (EYT) offers substantial advantages for early assessment of language, EF, self-regulation, and social development. In the current study, results of our large-scale administration of this toolbox to 1,764 preschool and early primary school students indicated very good reliability, convergent validity with existing measures, and developmental sensitivity. Results were also suggestive of better capture of children's emerging abilities relative to comparison measures. Preliminary norms are presented, showing a clear developmental trajectory across half-year age groups. The accessibility of the EYT, as well as its advantages over existing measures, offers considerably enhanced opportunities for objective measurement of young children's abilities to enable research and educational applications.

  11. Surmounting the Tower of Babel: Monolingual and bilingual 2-year-olds' understanding of the nature of foreign language words.

    Science.gov (United States)

    Byers-Heinlein, Krista; Chen, Ke Heng; Xu, Fei

    2014-03-01

    Languages function as independent and distinct conventional systems, and so each language uses different words to label the same objects. This study investigated whether 2-year-old children recognize that speakers of their native language and speakers of a foreign language do not share the same knowledge. Two groups of children unfamiliar with Mandarin were tested: monolingual English-learning children (n=24) and bilingual children learning English and another language (n=24). An English speaker taught children the novel label fep. On English mutual exclusivity trials, the speaker asked for the referent of a novel label (wug) in the presence of the fep and a novel object. Both monolingual and bilingual children disambiguated the reference of the novel word using a mutual exclusivity strategy, choosing the novel object rather than the fep. On similar trials with a Mandarin speaker, children were asked to find the referent of a novel Mandarin label kuò. Monolinguals again chose the novel object rather than the object with the English label fep, even though the Mandarin speaker had no access to conventional English words. Bilinguals did not respond systematically to the Mandarin speaker, suggesting that they had enhanced understanding of the Mandarin speaker's ignorance of English words. The results indicate that monolingual children initially expect words to be conventionally shared across all speakers-native and foreign. Early bilingual experience facilitates children's discovery of the nature of foreign language words. Copyright © 2013 Elsevier Inc. All rights reserved.

  12. Qualitative spatial logic descriptors from 3D indoor scenes to generate explanations in natural language.

    Science.gov (United States)

    Falomir, Zoe; Kluth, Thomas

    2018-05-01

    The challenge of describing 3D real scenes is tackled in this paper using qualitative spatial descriptors. A key point to study is which qualitative descriptors to use and how these qualitative descriptors must be organized to produce a suitable cognitive explanation. In order to find answers, a survey was carried out in which human participants openly described a scene containing some pieces of furniture. The data obtained in this survey were analysed and, taking this into account, the QSn3D computational approach was developed, which uses an Xbox 360 Kinect to obtain 3D data from a real indoor scene. Object features are computed on these 3D data to identify objects in indoor scenes. The object orientation is computed, and qualitative spatial relations between the objects are extracted. These qualitative spatial relations are the input to a grammar which applies saliency rules obtained from the survey study and generates cognitive natural language descriptions of scenes. Moreover, these qualitative descriptors can be expressed as first-order logical facts in Prolog for further reasoning. Finally, a validation study is carried out to test whether the descriptions provided by the QSn3D approach are human readable. The obtained results show that their acceptability is higher than 82%.
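
    A minimal sketch of the general idea of turning object centroids into qualitative spatial relations expressed as Prolog facts; the objects, coordinate convention, tolerance and relation names below are assumptions, not the QSn3D descriptor set.

```python
# Derive coarse qualitative relations between segmented objects from their 3D
# centroids (assumed camera coordinates: x right, y up, z depth) and emit them
# as Prolog facts. Object names and the 0.05 m tolerance are illustrative.
objects = {"table": (0.0, 0.0, 1.5), "chair": (-0.6, 0.0, 1.6), "lamp": (0.4, 0.5, 2.3)}

def relations(a, b, pa, pb, tol=0.05):
    rels = []
    if pa[0] < pb[0] - tol: rels.append(f"left_of({a}, {b})")
    if pa[0] > pb[0] + tol: rels.append(f"right_of({a}, {b})")
    if pa[2] < pb[2] - tol: rels.append(f"in_front_of({a}, {b})")
    if pa[1] > pb[1] + tol: rels.append(f"above({a}, {b})")
    return rels

facts = []
names = list(objects)
for i, a in enumerate(names):
    for b in names[i + 1:]:
        facts += relations(a, b, objects[a], objects[b])
        facts += relations(b, a, objects[b], objects[a])

print(".\n".join(facts) + ".")   # Prolog-style facts, one per line
```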

  13. Characterization of Change and Significance for Clinical Findings in Radiology Reports Through Natural Language Processing.

    Science.gov (United States)

    Hassanpour, Saeed; Bay, Graham; Langlotz, Curtis P

    2017-06-01

    We built a natural language processing (NLP) method to automatically extract clinical findings in radiology reports and characterize their level of change and significance according to a radiology-specific information model. We utilized a combination of machine learning and rule-based approaches for this purpose. Our method is unique in capturing different features and levels of abstractions at surface, entity, and discourse levels in text analysis. This combination has enabled us to recognize the underlying semantics of radiology report narratives for this task. We evaluated our method on radiology reports from four major healthcare organizations. Our evaluation showed the efficacy of our method in highlighting important changes (accuracy 99.2%, precision 96.3%, recall 93.5%, and F1 score 94.7%) and identifying significant observations (accuracy 75.8%, precision 75.2%, recall 75.7%, and F1 score 75.3%) to characterize radiology reports. This method can help clinicians quickly understand the key observations in radiology reports and facilitate clinical decision support, review prioritization, and disease surveillance.

  14. Predicting judicial decisions of the European Court of Human Rights: a Natural Language Processing perspective

    Directory of Open Access Journals (Sweden)

    Nikolaos Aletras

    2016-10-01

    Full Text Available Recent advances in Natural Language Processing and Machine Learning provide us with the tools to build predictive models that can be used to unveil patterns driving judicial decisions. This can be useful for both lawyers and judges as an assisting tool to rapidly identify cases and extract patterns which lead to certain decisions. This paper presents the first systematic study on predicting the outcome of cases tried by the European Court of Human Rights based solely on textual content. We formulate a binary classification task where the input of our classifiers is the textual content extracted from a case and the target output is the actual judgment as to whether there has been a violation of an article of the Convention on Human Rights. Textual information is represented using contiguous word sequences, i.e., N-grams, and topics. Our models can predict the court’s decisions with a strong accuracy (79% on average). Our empirical analysis indicates that the formal facts of a case are the most important predictive factor. This is consistent with the theory of legal realism suggesting that judicial decision-making is significantly affected by the stimulus of the facts. We also observe that the topical content of a case is another important feature in this classification task and explore this relationship further by conducting a qualitative analysis.
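
    A toy sketch of the kind of N-gram text classifier described here, using scikit-learn; the case snippets and labels are invented, and the feature set is far simpler than the paper's combination of N-grams and topics.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Toy stand-ins for case texts and outcomes (1 = violation found, 0 = none).
texts = [
    "applicant detained without review of lawfulness",
    "domestic courts provided adequate remedy and fair hearing",
    "prolonged detention and no effective investigation",
    "complaint manifestly ill-founded, remedies not exhausted",
]
labels = [1, 0, 1, 0]

# Contiguous word sequences (1- to 4-grams) feeding a linear SVM.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 4)), LinearSVC())
model.fit(texts, labels)
print(model.predict(["no effective investigation into the detention"]))
```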

  15. Is Natural Language a Perigraphic Process? The Theorem about Facts and Words Revisited

    Directory of Open Access Journals (Sweden)

    Łukasz Dębowski

    2018-01-01

    Full Text Available As we discuss, a stationary stochastic process is nonergodic when a random persistent topic can be detected in the infinite random text sampled from the process, whereas we call the process strongly nonergodic when an infinite sequence of independent random bits, called probabilistic facts, is needed to describe this topic completely. Replacing probabilistic facts with an algorithmically random sequence of bits, called algorithmic facts, we adapt this property back to ergodic processes. Subsequently, we call a process perigraphic if the number of algorithmic facts which can be inferred from a finite text sampled from the process grows like a power of the text length. We present a simple example of such a process. Moreover, we demonstrate an assertion which we call the theorem about facts and words. This proposition states that the number of probabilistic or algorithmic facts which can be inferred from a text drawn from a process must be roughly smaller than the number of distinct word-like strings detected in this text by means of the Prediction by Partial Matching (PPM) compression algorithm. We also observe that the number of word-like strings for a sample of plays by Shakespeare follows an empirical stepwise power law, in stark contrast to Markov processes. Hence, we suppose that natural language considered as a process is not only non-Markov but also perigraphic.

  16. Natural Language Use and Couples’ Adjustment to Head and Neck Cancer

    Science.gov (United States)

    Badr, Hoda; Milbury, Kathrin; Majeed, Nadia; Carmack, Cindy L.; Ahmad, Zeba; Gritz, Ellen R.

    2016-01-01

    Objective This multimethod prospective study examined whether emotional disclosure and coping focus as conveyed through natural language use is associated with the psychological and marital adjustment of head and neck cancer patients and their spouses. Methods One hundred twenty-three patients (85% men; mean age 56.8 years, SD=10.4) and their spouses completed surveys prior to, following, and 4 months after engaging in a videotaped discussion about cancer in the laboratory. Linguistic Inquiry and Word Count (LIWC) software assessed counts of positive/negative emotion words and first-person singular (I-talk), second-person (you-talk), and first-person plural (we-talk) pronouns. Using a Grounded Theory approach, discussions were also analyzed to describe how emotion words and pronouns were used and what was being discussed. Results Emotion words were most often used to disclose thoughts/feelings or worry/uncertainty about the future, and to express gratitude or acknowledgment to one's partner. Although patients who disclosed more negative emotion during the discussion reported more positive mood following the discussion, no effects on psychological and marital adjustment were found. Patients used significantly more I-talk than spouses, and spouses used significantly more you-talk than patients. Less distress at the 4-month follow-up assessment was reported when partners used more we-talk, suggesting that disclosure may be less important to one's cancer adjustment than having a partner whom one sees as instrumental to the coping process. PMID:27441867
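
    A minimal sketch of the kind of category counting LIWC performs for pronoun classes and emotion words; the word lists are tiny illustrative stand-ins, not the proprietary LIWC dictionaries.

```python
import re
from collections import Counter

# Tiny illustrative category lists; the real LIWC dictionaries are far larger.
categories = {
    "i_talk":      {"i", "me", "my", "mine"},
    "we_talk":     {"we", "us", "our", "ours"},
    "you_talk":    {"you", "your", "yours"},
    "pos_emotion": {"glad", "grateful", "hope", "love"},
    "neg_emotion": {"worried", "scared", "sad", "angry"},
}

def liwc_like_counts(text: str) -> Counter:
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter()
    for w in words:
        for cat, vocab in categories.items():
            if w in vocab:
                counts[cat] += 1
    counts["total_words"] = len(words)
    return counts

print(liwc_like_counts("I am worried about the future, but we are grateful for our time together."))
```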

  17. Detecting Target Objects by Natural Language Instructions Using an RGB-D Camera

    Directory of Open Access Journals (Sweden)

    Jiatong Bao

    2016-12-01

    Full Text Available Controlling robots by natural language (NL) is increasingly attracting attention for its versatility, convenience and lack of need for extensive training of users. Grounding is a crucial challenge of this problem to enable robots to understand NL instructions from humans. This paper mainly explores the object grounding problem and concretely studies how to detect target objects by the NL instructions using an RGB-D camera in robotic manipulation applications. In particular, a simple yet robust vision algorithm is applied to segment objects of interest. With the metric information of all segmented objects, the object attributes and relations between objects are further extracted. The NL instructions that incorporate multiple cues for object specifications are parsed into domain-specific annotations. The annotations from NL and extracted information from the RGB-D camera are matched in a computational state estimation framework to search all possible object grounding states. The final grounding is accomplished by selecting the states which have the maximum probabilities. An RGB-D scene dataset associated with different groups of NL instructions based on different cognition levels of the robot is collected. Quantitative evaluations on the dataset illustrate the advantages of the proposed method. The experiments of NL controlled object manipulation and NL-based task programming using a mobile manipulator show its effectiveness and practicability in robotic applications.

  18. A natural language processing pipeline for pairing measurements uniquely across free-text CT reports.

    Science.gov (United States)

    Sevenster, Merlijn; Bozeman, Jeffrey; Cowhy, Andrea; Trost, William

    2015-02-01

    To standardize and objectivize treatment response assessment in oncology, guidelines have been proposed that are driven by radiological measurements, which are typically communicated in free-text reports defying automated processing. We study through inter-annotator agreement and natural language processing (NLP) algorithm development the task of pairing measurements that quantify the same finding across consecutive radiology reports, such that each measurement is paired with at most one other ("partial uniqueness"). Ground truth is created based on 283 abdomen and 311 chest CT reports of 50 patients each. A pre-processing engine segments reports and extracts measurements. Thirteen features are developed based on volumetric similarity between measurements, semantic similarity between their respective narrative contexts and structural properties of their report positions. A Random Forest classifier (RF) integrates all features. A "mutual best match" (MBM) post-processor ensures partial uniqueness. In an end-to-end evaluation, RF has precision 0.841, recall 0.807, F-measure 0.824 and AUC 0.971; the MBM post-processor performs above chance level. High inter-annotator agreement (0.960) indicates that the task is well defined. Domain properties and inter-section differences are discussed to explain superior performance in abdomen. Enforcing partial uniqueness has mixed but minor effects on performance. A combined machine learning-filtering approach is proposed for pairing measurements, which can support both prospective purposes (supporting treatment response assessment) and retrospective purposes (data mining). Copyright © 2014 Elsevier Inc. All rights reserved.
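
    A minimal sketch of the "mutual best match" idea: a candidate pairing survives only if each measurement is the other's best-scoring partner, which enforces partial uniqueness. The scores below are invented classifier outputs, not the study's features.

```python
# Minimal sketch of a "mutual best match" (MBM) step: a pairing (a, b) is kept
# only when b is a's best-scoring candidate AND a is b's best-scoring candidate,
# so each measurement is paired with at most one other measurement.
scores = {   # (measurement_in_report1, measurement_in_report2): P(same finding)
    ("m1", "n1"): 0.92, ("m1", "n2"): 0.40,
    ("m2", "n1"): 0.55, ("m2", "n2"): 0.88,
    ("m3", "n2"): 0.61,
}

def mutual_best_matches(scores):
    best_left, best_right = {}, {}
    for (a, b), s in scores.items():
        if s > best_left.get(a, (None, -1.0))[1]:
            best_left[a] = (b, s)
        if s > best_right.get(b, (None, -1.0))[1]:
            best_right[b] = (a, s)
    return {(a, b): s for (a, b), s in scores.items()
            if best_left[a][0] == b and best_right[b][0] == a}

print(mutual_best_matches(scores))   # {('m1', 'n1'): 0.92, ('m2', 'n2'): 0.88}
```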

  19. Bringing Chatbots into education: Towards Natural Language Negotiation of Open Learner Models

    Science.gov (United States)

    Kerlyl, Alice; Hall, Phil; Bull, Susan

    There is an extensive body of work on Intelligent Tutoring Systems: computer environments for education, teaching and training that adapt to the needs of the individual learner. Work on personalisation and adaptivity has included research into allowing the student user to enhance the system's adaptivity by improving the accuracy of the underlying learner model. Open Learner Modelling, where the system's model of the user's knowledge is revealed to the user, has been proposed to support student reflection on their learning. Increased accuracy of the learner model can be obtained by the student and system jointly negotiating the learner model. We present the initial investigations into a system to allow people to negotiate the model of their understanding of a topic in natural language. This paper discusses the development and capabilities of both conversational agents (or chatbots) and Intelligent Tutoring Systems, in particular Open Learner Modelling. We describe a Wizard-of-Oz experiment to investigate the feasibility of using a chatbot to support negotiation, and conclude that a fusion of the two fields can lead to developing negotiation techniques for chatbots and the enhancement of the Open Learner Model. This technology, if successful, could have widespread application in schools, universities and other training scenarios.

  20. EVALUATION OF SEMANTIC SIMILARITY FOR SENTENCES IN NATURAL LANGUAGE BY MATHEMATICAL STATISTICS METHODS

    Directory of Open Access Journals (Sweden)

    A. E. Pismak

    2016-03-01

    Full Text Available Subject of Research. The paper focuses on the structural organization of Wiktionary articles with respect to their use as the basis for a semantic network. Wiktionary community references, article templates and article markup features are analyzed. The problem of numerically estimating the semantic similarity of structural elements in Wiktionary articles is considered. Existing software for estimating the semantic similarity of such elements is analyzed; the algorithms behind it are studied, and their advantages and disadvantages are shown. Methods. Mathematical statistics methods were used to analyze Wiktionary article markup features. A method for computing semantic similarity based on statistics gathered for the compared structural elements is proposed. Main Results. We have concluded that Wiktionary articles cannot be used directly as the source for a semantic network. We propose to find hidden similarity between article elements, and for that purpose we have developed an algorithm that calculates confidence coefficients indicating that a given pair of sentences is semantically close. The study of quantitative and qualitative characteristics of the developed algorithm shows a major performance advantage over existing solutions, at the cost of an insignificantly higher error rate. Practical Relevance. The resulting algorithm may be useful in developing tools for automatic parsing of Wiktionary articles. The developed method could be used for computing the semantic similarity of short text fragments in natural language in cases where performance requirements outweigh accuracy requirements.

  1. Natural language processing using online analytic processing for assessing recommendations in radiology reports.

    Science.gov (United States)

    Dang, Pragya A; Kalra, Mannudeep K; Blake, Michael A; Schultz, Thomas J; Stout, Markus; Lemay, Paul R; Freshman, David J; Halpern, Elkan F; Dreyer, Keith J

    2008-03-01

    The study purpose was to describe the use of natural language processing (NLP) and online analytic processing (OLAP) for assessing patterns in recommendations in unstructured radiology reports on the basis of patient and imaging characteristics, such as age, gender, referring physicians, radiology subspecialty, modality, indications, diseases, and patient status (inpatient vs outpatient). A database of 4,279,179 radiology reports from a single tertiary health care center during a 10-year period (1995-2004) was created. The database includes reports of computed tomography, magnetic resonance imaging, fluoroscopy, nuclear medicine, ultrasound, radiography, mammography, angiography, special procedures, and unclassified imaging tests with patient demographics. A clinical data mining and analysis NLP program (Leximer, Nuance Inc, Burlington, Massachusetts) in conjunction with OLAP was used for classifying reports into those with recommendations (I(REC)) and without recommendations (N(REC)) for imaging and determining I(REC) rates for different patient age groups, gender, imaging modalities, indications, diseases, subspecialties, and referring physicians. In addition, temporal trends for I(REC) were also determined. There was a significant difference in the I(REC) rates in different age groups, varying between 4.8% (10-19 years) and 9.5% (>70 years). OLAP revealed considerable differences between recommendation trends for different imaging modalities and other patient and imaging characteristics.

  2. LINGUISTIC ANALYSIS FOR THE BELARUSIAN CORPUS WITH THE APPLICATION OF NATURAL LANGUAGE PROCESSING AND MACHINE LEARNING TECHNIQUES

    Directory of Open Access Journals (Sweden)

    Yu. S. Hetsevich

    2017-01-01

    Full Text Available The article focuses on problems existing in text-to-speech synthesis. Different morphological, lexical and syntactical elements were localized with the help of the Belarusian unit of the NooJ program. The types of errors which occur in Belarusian texts were analyzed and corrected. A language model and a part-of-speech tagging model were built. Natural language processing of the Belarusian corpus was carried out with the help of the developed algorithm using machine learning. The precision of the developed machine learning models was 80–90%. The dictionary was enriched with new words for further use in Belarusian speech synthesis systems.

  3. Toward a Theory-Based Natural Language Capability in Robots and Other Embodied Agents: Evaluating Hausser's SLIM Theory and Database Semantics

    Science.gov (United States)

    Burk, Robin K.

    2010-01-01

    Computational natural language understanding and generation have been a goal of artificial intelligence since McCarthy, Minsky, Rochester and Shannon first proposed to spend the summer of 1956 studying this and related problems. Although statistical approaches dominate current natural language applications, two current research trends bring…

  4. Medical subdomain classification of clinical notes using a machine learning-based natural language processing approach.

    Science.gov (United States)

    Weng, Wei-Hung; Wagholikar, Kavishwar B; McCray, Alexa T; Szolovits, Peter; Chueh, Henry C

    2017-12-01

    The medical subdomain of a clinical note, such as cardiology or neurology, is useful content-derived metadata for developing machine learning downstream applications. To classify the medical subdomain of a note accurately, we have constructed a machine learning-based natural language processing (NLP) pipeline and developed medical subdomain classifiers based on the content of the note. We constructed the pipeline using the clinical NLP system, clinical Text Analysis and Knowledge Extraction System (cTAKES), the Unified Medical Language System (UMLS) Metathesaurus, Semantic Network, and learning algorithms to extract features from two datasets - clinical notes from Integrating Data for Analysis, Anonymization, and Sharing (iDASH) data repository (n = 431) and Massachusetts General Hospital (MGH) (n = 91,237), and built medical subdomain classifiers with different combinations of data representation methods and supervised learning algorithms. We evaluated the performance of classifiers and their portability across the two datasets. The convolutional recurrent neural network with neural word embeddings trained-medical subdomain classifier yielded the best performance measurement on iDASH and MGH datasets with area under receiver operating characteristic curve (AUC) of 0.975 and 0.991, and F1 scores of 0.845 and 0.870, respectively. Considering better clinical interpretability, linear support vector machine-trained medical subdomain classifier using hybrid bag-of-words and clinically relevant UMLS concepts as the feature representation, with term frequency-inverse document frequency (tf-idf)-weighting, outperformed other shallow learning classifiers on iDASH and MGH datasets with AUC of 0.957 and 0.964, and F1 scores of 0.932 and 0.934 respectively. We trained classifiers on one dataset, applied to the other dataset and yielded the threshold of F1 score of 0.7 in classifiers for half of the medical subdomains we studied. Our study shows that a supervised
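
    A toy sketch of the shallow-learning variant described here (tf-idf weighted bag-of-words feeding a linear support vector machine); the notes and labels are invented, and the clinically relevant UMLS concept features used in the study are omitted.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Toy clinical-note snippets and subdomain labels (illustrative only).
notes = [
    "ejection fraction reduced, started beta blocker and ace inhibitor",
    "seizure activity on eeg, levetiracetam dose increased",
    "stress echo shows inducible ischemia, cath lab scheduled",
    "progressive memory loss, mri brain ordered to rule out stroke",
]
subdomains = ["cardiology", "neurology", "cardiology", "neurology"]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
clf.fit(notes, subdomains)
print(clf.predict(["troponin elevated, echo ordered for new heart failure"]))
```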

  5. Developing the Evaluation Scale to Determine the Impact of Body Language in an Argument: Reliability & Validity Analysis

    Science.gov (United States)

    Karadag, Engin; Caliskan, Nihat; Yesil, Rustu

    2008-01-01

    This research aims to develop a scale for observing the body language used during an argument. A sample group of 266 teacher candidates studying in the departments of Class, Turkish or Social Sciences at the Faculty of Education was used in this study. A logical and statistical approach was pursued during the development of the scale. An…

  6. Dialogue-Games: Meta-Communication Structures for Natural Language Interaction

    Science.gov (United States)

    1977-01-01

    analogy from Wittgenstein’s term "language game" (Wittgenstein, 1958). However, Dialogue-games represent knowledge people have about language as used to...and memory of narrative discourse. Cognitive Psychology, 1977, 9, 77-110. Wittgenstein, L. Philosophical Investigations (3rd ed.). New York

  7. The written language of signals as a means of natural literacy of deaf children

    Directory of Open Access Journals (Sweden)

    Giovana Fracari Hautrive

    2010-10-01

    Full Text Available Addressing the literacy of deaf children currently means directing attention to teaching practice and to demands that go beyond the school. Questions arising from daily practice became a challenge and required an investigative attitude. The article aims to problematize the literacy process of deaf children, with the proposal for reflection emerging from daily practice. It is structured around threads that include the theoretical studies of Vigotskii (1989, 1994, 1996, 1998), Stumpf (2005), Quadros (1997), Bolzan (1998, 2002) and Skliar (1997a, 1997b, 1998), from which the processes involved in the construction of written language are problematized. As a result, the article highlights the importance of establishing sign language as the first language in deaf education and of learning sign language writing; these are important conditions for deaf students to become literate in their mother tongue. It points out the need to redirect the literacy of deaf children so that important aspects of language, its role in the structuring of thought and its communicative function are respected and considered in this process. Thus, it emphasizes that learning the writing of sign language is fundamental and should occupy a central role in classroom teaching, encouraging the contradictions that place the student in a situation of cognitive conflict, while respecting the diversity inherent to each human being. The production of sign language writing is considered an appropriate tool for deaf students to record their visual language.

  8. Population-Based Analysis of Histologically Confirmed Melanocytic Proliferations Using Natural Language Processing.

    Science.gov (United States)

    Lott, Jason P; Boudreau, Denise M; Barnhill, Ray L; Weinstock, Martin A; Knopp, Eleanor; Piepkorn, Michael W; Elder, David E; Knezevich, Steven R; Baer, Andrew; Tosteson, Anna N A; Elmore, Joann G

    2018-01-01

    Population-based information on the distribution of histologic diagnoses associated with skin biopsies is unknown. Electronic medical records (EMRs) enable automated extraction of pathology report data to improve our epidemiologic understanding of skin biopsy outcomes, specifically those of melanocytic origin. The objective was to determine population-based frequencies and distribution of histologically confirmed melanocytic lesions. A natural language processing (NLP)-based analysis of EMR pathology reports of adult patients who underwent skin biopsies at a large integrated health care delivery system in the US Pacific Northwest from January 1, 2007, through December 31, 2012. Skin biopsy procedure. The primary outcome was histopathologic diagnosis, obtained using an NLP-based system to process EMR pathology reports. We determined the percentage of diagnoses classified as melanocytic vs nonmelanocytic lesions. Diagnoses classified as melanocytic were further subclassified using the Melanocytic Pathology Assessment Tool and Hierarchy for Diagnosis (MPATH-Dx) reporting schema into the following categories: class I (nevi and other benign proliferations such as mildly dysplastic lesions typically requiring no further treatment), class II (moderately dysplastic and other low-risk lesions that may merit narrow reexcision), and higher-risk classes III through V. Skin biopsies performed on 47 529 patients were examined. Nearly 1 in 4 skin biopsies were of melanocytic lesions (23%; n = 18 715), which were distributed according to MPATH-Dx categories as follows: class I, 83.1% (n = 15 558); class II, 8.3% (n = 1548); class III, 4.5% (n = 842); class IV, 2.2% (n = 405); and class V, 1.9% (n = 362). Approximately one-quarter of skin biopsies resulted in diagnoses of melanocytic proliferations. These data provide the first population-based estimates across the spectrum of melanocytic lesions ranging from benign through dysplastic to malignant. These results may serve as a foundation for future

  9. Towards natural language question generation for the validation of ontologies and mappings.

    Science.gov (United States)

    Ben Abacha, Asma; Dos Reis, Julio Cesar; Mrabet, Yassine; Pruski, Cédric; Da Silveira, Marcos

    2016-08-08

    The increasing number of open-access ontologies and their key role in several applications such as decision-support systems highlight the importance of their validation. Human expertise is crucial for the validation of ontologies from a domain point-of-view. However, the growing number of ontologies and their fast evolution over time make manual validation challenging. We propose a novel semi-automatic approach based on the generation of natural language (NL) questions to support the validation of ontologies and their evolution. The proposed approach includes the automatic generation, factorization and ordering of NL questions from medical ontologies. The final validation and correction is performed by submitting these questions to domain experts and automatically analyzing their feedback. We also propose a second approach for the validation of mappings impacted by ontology changes. The method exploits the context of the changes to propose correction alternatives presented as Multiple Choice Questions. This research provides a question optimization strategy to maximize the validation of ontology entities with a reduced number of questions. We evaluate our approach for the validation of three medical ontologies. We also evaluate the feasibility and efficiency of our mappings validation approach in the context of ontology evolution. These experiments are performed with different versions of SNOMED-CT and ICD9. The obtained experimental results suggest the feasibility and adequacy of our approach to support the validation of interconnected and evolving ontologies. Results also suggest that taking into account RDFS and OWL entailment helps reducing the number of questions and validation time. The application of our approach to validate mapping evolution also shows the difficulty of adapting mapping evolution over time and highlights the importance of semi-automatic validation.

  10. Natural Language Search Interfaces: Health Data Needs Single-Field Variable Search.

    Science.gov (United States)

    Jay, Caroline; Harper, Simon; Dunlop, Ian; Smith, Sam; Sufi, Shoaib; Goble, Carole; Buchan, Iain

    2016-01-14

    Data discovery, particularly the discovery of key variables and their inter-relationships, is key to secondary data analysis, and in turn, the evolving field of data science. Interface designers have presumed that their users are domain experts, and so they have provided complex interfaces to support these "experts." Such interfaces hark back to a time when searches needed to be accurate first time as there was a high computational cost associated with each search. Our work is part of a governmental research initiative between the medical and social research funding bodies to improve the use of social data in medical research. The cross-disciplinary nature of data science can make no assumptions regarding the domain expertise of a particular scientist, whose interests may intersect multiple domains. Here we consider the common requirement for scientists to seek archived data for secondary analysis. This has more in common with search needs of the "Google generation" than with their single-domain, single-tool forebears. Our study compares a Google-like interface with traditional ways of searching for noncomplex health data in a data archive. Two user interfaces are evaluated for the same set of tasks in extracting data from surveys stored in the UK Data Archive (UKDA). One interface, Web search, is "Google-like," enabling users to browse, search for, and view metadata about study variables, whereas the other, traditional search, has a standard multioption user interface. Using a comprehensive set of tasks with 20 volunteers, we found that the Web search interface met data discovery needs and expectations better than the traditional search. A task × interface repeated measures analysis showed a main effect indicating that answers found through the Web search interface were more likely to be correct (F1,19=37.3). The findings support natural language search interfaces for variable search, supporting in particular: query reformulation; data browsing; faceted search; surrogates; relevance

  11. NOBLE - Flexible concept recognition for large-scale biomedical natural language processing.

    Science.gov (United States)

    Tseytlin, Eugene; Mitchell, Kevin; Legowski, Elizabeth; Corrigan, Julia; Chavan, Girish; Jacobson, Rebecca S

    2016-01-14

    Natural language processing (NLP) applications are increasingly important in biomedical data analysis, knowledge engineering, and decision support. Concept recognition is an important component task for NLP pipelines, and can be either general-purpose or domain-specific. We describe a novel, flexible, and general-purpose concept recognition component for NLP pipelines, and compare its speed and accuracy against five commonly used alternatives on both a biological and clinical corpus. NOBLE Coder implements a general algorithm for matching terms to concepts from an arbitrary vocabulary set. The system's matching options can be configured individually or in combination to yield specific system behavior for a variety of NLP tasks. The software is open source, freely available, and easily integrated into UIMA or GATE. We benchmarked speed and accuracy of the system against the CRAFT and ShARe corpora as reference standards and compared it to MMTx, MGrep, Concept Mapper, cTAKES Dictionary Lookup Annotator, and cTAKES Fast Dictionary Lookup Annotator. We describe key advantages of the NOBLE Coder system and associated tools, including its greedy algorithm, configurable matching strategies, and multiple terminology input formats. These features provide unique functionality when compared with existing alternatives, including state-of-the-art systems. On two benchmarking tasks, NOBLE's performance exceeded commonly used alternatives, performing almost as well as the most advanced systems. Error analysis revealed differences in error profiles among systems. NOBLE Coder is comparable to other widely used concept recognition systems in terms of accuracy and speed. Advantages of NOBLE Coder include its interactive terminology builder tool, ease of configuration, and adaptability to various domains and tasks. NOBLE provides a term-to-concept matching system suitable for general concept recognition in biomedical NLP pipelines.
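
    A minimal sketch of greedy longest-match dictionary lookup, the general idea behind term-to-concept matching; the vocabulary, concept identifiers and matching behaviour are invented for illustration and are much simpler than NOBLE Coder's configurable strategies.

```python
# Greedy longest-match dictionary lookup: at each token position, take the
# longest vocabulary phrase that matches and jump past it. Vocabulary and
# concept IDs are invented for illustration.
vocabulary = {
    "myocardial infarction": "C_0001",
    "infarction": "C_0002",
    "aspirin": "C_0003",
}
max_len = max(len(p.split()) for p in vocabulary)

def recognize(text: str):
    tokens = text.lower().split()
    matches, i = [], 0
    while i < len(tokens):
        for n in range(min(max_len, len(tokens) - i), 0, -1):
            phrase = " ".join(tokens[i:i + n])
            if phrase in vocabulary:
                matches.append((phrase, vocabulary[phrase]))
                i += n           # greedy: skip past the matched span
                break
        else:
            i += 1
    return matches

print(recognize("Patient with acute myocardial infarction given aspirin"))
# [('myocardial infarction', 'C_0001'), ('aspirin', 'C_0003')]
```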

  12. A natural language processing program effectively extracts key pathologic findings from radical prostatectomy reports.

    Science.gov (United States)

    Kim, Brian J; Merchant, Madhur; Zheng, Chengyi; Thomas, Anil A; Contreras, Richard; Jacobsen, Steven J; Chien, Gary W

    2014-12-01

    Natural language processing (NLP) software programs have been widely developed to transform complex free text into simplified organized data. Potential applications in the field of medicine include automated report summaries, physician alerts, patient repositories, electronic medical record (EMR) billing, and quality metric reports. Despite these prospects and the recent widespread adoption of EMR, NLP has been relatively underutilized. The objective of this study was to evaluate the performance of an internally developed NLP program in extracting select pathologic findings from radical prostatectomy specimen reports in the EMR. An NLP program was generated by a software engineer to extract key variables from prostatectomy reports in the EMR within our healthcare system, which included the TNM stage, Gleason grade, presence of a tertiary Gleason pattern, histologic subtype, size of dominant tumor nodule, seminal vesicle invasion (SVI), perineural invasion (PNI), angiolymphatic invasion (ALI), extracapsular extension (ECE), and surgical margin status (SMS). The program was validated by comparing NLP results to a gold standard compiled by two blinded manual reviewers for 100 random pathology reports. NLP demonstrated 100% accuracy for identifying the Gleason grade, presence of a tertiary Gleason pattern, SVI, ALI, and ECE. It also demonstrated near-perfect accuracy for extracting histologic subtype (99.0%), PNI (98.9%), TNM stage (98.0%), SMS (97.0%), and dominant tumor size (95.7%). The overall accuracy of NLP was 98.7%. NLP generated a result for every report. This novel program demonstrated high accuracy and efficiency identifying key pathologic details from the prostatectomy report within an EMR system. NLP has the potential to assist urologists by summarizing and highlighting relevant information from verbose pathology reports. It may also facilitate future urologic research through the rapid and automated creation of large databases.
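
    A minimal sketch of the kind of pattern-based extraction such a program performs for a few of the listed fields; the report text and regular expressions below are illustrative assumptions, not the study's implementation.

```python
import re

report = """
Prostate, radical prostatectomy: Adenocarcinoma, Gleason score 3+4=7.
Extraprostatic extension: present. Seminal vesicle invasion: absent.
Surgical margins: negative for carcinoma. Perineural invasion: identified.
"""

def extract(report: str) -> dict:
    """Pull a few structured fields out of free-text pathology wording."""
    fields = {}
    m = re.search(r"gleason score\s*(\d)\s*\+\s*(\d)", report, re.IGNORECASE)
    if m:
        fields["gleason"] = f"{m.group(1)}+{m.group(2)}"
    fields["margins_positive"] = bool(re.search(r"margins?:?\s*positive", report, re.IGNORECASE))
    fields["svi"] = not re.search(r"seminal vesicle invasion:?\s*(absent|not identified)",
                                  report, re.IGNORECASE)
    fields["pni"] = bool(re.search(r"perineural invasion:?\s*(present|identified)",
                                   report, re.IGNORECASE))
    return fields

print(extract(report))
# {'gleason': '3+4', 'margins_positive': False, 'svi': False, 'pni': True}
```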

  13. Using natural language processing and machine learning to identify gout flares from electronic clinical notes.

    Science.gov (United States)

    Zheng, Chengyi; Rashid, Nazia; Wu, Yi-Lin; Koblick, River; Lin, Antony T; Levy, Gerald D; Cheetham, T Craig

    2014-11-01

    Gout flares are not well documented by diagnosis codes, making it difficult to conduct accurate database studies. We implemented a computer-based method to automatically identify gout flares using natural language processing (NLP) and machine learning (ML) from electronic clinical notes. Of 16,519 patients, 1,264 and 1,192 clinical notes from 2 separate sets of 100 patients were selected as the training and evaluation data sets, respectively, which were reviewed by rheumatologists. We created separate NLP searches to capture different aspects of gout flares. For each note, the NLP search outputs became the ML system inputs, which provided the final classification decisions. The note-level classifications were grouped into patient-level gout flares. Our NLP+ML results were validated using a gold standard data set and compared with the claims-based method used by prior literatures. For 16,519 patients with a diagnosis of gout and a prescription for a urate-lowering therapy, we identified 18,869 clinical notes as gout flare positive (sensitivity 82.1%, specificity 91.5%): 1,402 patients with ≥3 flares (sensitivity 93.5%, specificity 84.6%), 5,954 with 1 or 2 flares, and 9,163 with no flare (sensitivity 98.5%, specificity 96.4%). Our method identified more flare cases (18,869 versus 7,861) and patients with ≥3 flares (1,402 versus 516) when compared to the claims-based method. We developed a computer-based method (NLP and ML) to identify gout flares from the clinical notes. Our method was validated as an accurate tool for identifying gout flares with higher sensitivity and specificity compared to previous studies. Copyright © 2014 by the American College of Rheumatology.
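
    A minimal sketch of the final aggregation step described here: note-level flare classifications are grouped per patient and bucketed into the reported strata (≥3 flares, 1-2 flares, no flare). The predictions are invented.

```python
from collections import Counter

# Illustrative note-level outputs: (patient_id, note classified as flare?).
note_predictions = [
    ("p1", True), ("p1", True), ("p1", True), ("p1", False),
    ("p2", True), ("p2", False),
    ("p3", False), ("p3", False),
]

flares_per_patient = Counter(pid for pid, is_flare in note_predictions if is_flare)

def stratum(n_flares: int) -> str:
    if n_flares >= 3:
        return ">=3 flares"
    return "1-2 flares" if n_flares >= 1 else "no flare"

for pid in sorted({pid for pid, _ in note_predictions}):
    print(pid, stratum(flares_per_patient.get(pid, 0)))
# p1 >=3 flares, p2 1-2 flares, p3 no flare
```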

  14. Validation of natural language processing to extract breast cancer pathology procedures and results

    Directory of Open Access Journals (Sweden)

    Arika E Wieneke

    2015-01-01

    Full Text Available Background: Pathology reports typically require manual review to abstract research data. We developed a natural language processing (NLP) system to automatically interpret free-text breast pathology reports with limited assistance from manual abstraction. Methods: We used an iterative approach of machine learning algorithms and constructed groups of related findings to identify breast-related procedures and results from free-text pathology reports. We evaluated the NLP system using an all-or-nothing approach to determine which reports could be processed entirely using NLP and which reports needed manual review beyond NLP. We divided 3234 reports for development (2910, 90%) and evaluation (324, 10%) purposes using manually reviewed pathology data as our gold standard. Results: NLP correctly coded 12.7% of the evaluation set, flagged 49.1% of reports for manual review, incorrectly coded 30.8%, and correctly omitted 7.4% from the evaluation set due to irrelevancy (i.e. not breast-related). Common procedures and results were identified correctly (e.g. invasive ductal) with 95.5% precision and 94.0% sensitivity, but entire reports were flagged for manual review because of rare findings and substantial variation in pathology report text. Conclusions: The NLP system we developed did not perform sufficiently well for abstracting entire breast pathology reports. The all-or-nothing approach resulted in too broad a scope of work and limited our flexibility to identify breast pathology procedures and results. Our NLP system was also limited by the lack of gold standard data on rare findings and wide variation in pathology text. Focusing on individual, common elements and improving pathology text report standardization may improve performance.

  15. Dynamical Languages

    Science.gov (United States)

    Xie, Huimin

    The following sections are included: * Definition of Dynamical Languages * Distinct Excluded Blocks * Definition and Properties * L and L″ in Chomsky Hierarchy * A Natural Equivalence Relation * Symbolic Flows * Symbolic Flows and Dynamical Languages * Subshifts of Finite Type * Sofic Systems * Graphs and Dynamical Languages * Graphs and Shannon-Graphs * Transitive Languages * Topological Entropy

  16. Causal knowledge extraction by natural language processing in material science: a case study in chemical vapor deposition

    Directory of Open Access Journals (Sweden)

    Yuya Kajikawa

    2006-11-01

    Full Text Available Scientific publications written in natural language still play a central role as our knowledge source. However, due to the flood of publications, the literature survey process has become a highly time-consuming and tangled process, especially for novices of the discipline. Therefore, tools supporting the literature-survey process may help the individual scientist to explore new useful domains. Natural language processing (NLP is expected as one of the promising techniques to retrieve, abstract, and extract knowledge. In this contribution, NLP is firstly applied to the literature of chemical vapor deposition (CVD, which is a sub-discipline of materials science and is a complex and interdisciplinary field of research involving chemists, physicists, engineers, and materials scientists. Causal knowledge extraction from the literature is demonstrated using NLP.

  17. The Natural History of Human Language: Bridging the Gaps without Magic

    Science.gov (United States)

    Merker, Bjorn; Okanoya, Kazuo

    Human languages are quintessentially historical phenomena. Every known aspect of linguistic form and content is subject to change in historical time (Lehmann, 1995; Bybee, 2004). Many facts of language, syntactic no less than semantic, find their explanation in the historical processes that generated them. If adpositions were once verbs, then the fact that they tend to occur on the same side of their arguments as do verbs ("cross-category harmony": Hawkins, 1983) is a matter of historical contingency rather than a reflection of inherent structural constraints on human language (Delancey, 1993).

  18. A new, rapid and reliable method for the determination of reduced sulphur (S2-) species in natural water discharges

    International Nuclear Information System (INIS)

    Montegrossi, Giordano; Tassi, Franco; Vaselli, Orlando; Bidini, Eva; Minissale, Angelo

    2006-01-01

    The determination of reduced S species in natural waters is particularly difficult due to their high instability and chemical and physical interferences in the current analytical methods. In this paper a new, rapid and reliable analytical procedure is presented, named the Cd-IC method, for their determination as ΣS²⁻ via oxidation to SO₄²⁻ after chemical trapping with an ammonia-cadmium solution that allows precipitation of all the reduced S species as CdS. The S²⁻-derived SO₄²⁻ is analysed by ion chromatography. The main advantages of this method are: low cost, high stability of the CdS precipitate, absence of interferences, low detection limit (0.01 mg/L as SO₄²⁻ for 10 mL of water) and low analytical error (about 5%). The proposed method has been applied to more than 100 water samples from different natural systems (water discharges and cold wells from volcanic and geothermal areas, crater lakes) in central-southern Italy

  19. Unpacking Big Systems -- Natural Language Processing Meets Network Analysis. A Study of Smart Grid Development in Denmark

    DEFF Research Database (Denmark)

    Jurowetzki, Roman

    and contained technological trajectories on a national level using a combination of methods from statistical natural language processing, vector space modelling and network analysis. The proposed approach does not aim at replacing the researcher or expert but rather offers the possibility to algorithmically...... in Denmark. Results show that in the explored case it is not mainly new technologies and applications that are driving change but innovative re-combinations of old and new technologies....

  20. Steering the conversation: A linguistic exploration of natural language interactions with a digital assistant during simulated driving.

    Science.gov (United States)

    Large, David R; Clark, Leigh; Quandt, Annie; Burnett, Gary; Skrypchuk, Lee

    2017-09-01

    Given the proliferation of 'intelligent' and 'socially-aware' digital assistants embodying everyday mobile technology - and the undeniable logic that utilising voice-activated controls and interfaces in cars reduces the visual and manual distraction of interacting with in-vehicle devices - it appears inevitable that next generation vehicles will be embodied by digital assistants and utilise spoken language as a method of interaction. From a design perspective, defining the language and interaction style that a digital driving assistant should adopt is contingent on the role that they play within the social fabric and context in which they are situated. We therefore conducted a qualitative, Wizard-of-Oz study to explore how drivers might interact linguistically with a natural language digital driving assistant. Twenty-five participants drove for 10 min in a medium-fidelity driving simulator while interacting with a state-of-the-art, high-functioning, conversational digital driving assistant. All exchanges were transcribed and analysed using recognised linguistic techniques, such as discourse and conversation analysis, normally reserved for interpersonal investigation. Language usage patterns demonstrate that interactions with the digital assistant were fundamentally social in nature, with participants affording the assistant equal social status and high-level cognitive processing capability. For example, participants were polite, actively controlled turn-taking during the conversation, and used back-channelling, fillers and hesitation, as they might in human communication. Furthermore, participants expected the digital assistant to understand and process complex requests mitigated with hedging words and expressions, and peppered with vague language and deictic references requiring shared contextual information and mutual understanding. Findings are presented in six themes which emerged during the analysis - formulating responses; turn-taking; back

  1. The Robbers and the Others – A Serious Game Using Natural Language Processing

    NARCIS (Netherlands)

    Toma, Irina; Brighiu, Stefan Mihai; Dascalu, Mihai; Trausan-Matu, Stefan

    2018-01-01

    Learning a new language includes multiple aspects, from vocabulary acquisition to exercising words in sentences, and developing discourse building capabilities. In most learning scenarios, students learn individually and interact only during classes; therefore, it is difficult to enhance their

  2. Dependency distance: A new perspective on the syntactic development in second language acquisition. Comment on "Dependency distance: A new perspective on syntactic patterns in natural language" by Haitao Liu et al.

    Science.gov (United States)

    Jiang, Jingyang; Ouyang, Jinghui

    2017-07-01

    Liu et al. [1] offer a clear and informative account of the use of dependency distance in studying natural languages, with a focus on the viewpoint that dependency distance minimization (DDM) can be regarded as a linguistic universal. We would like to add the perspective of employing dependency distance in the study of second language acquisition (SLA), particularly studies of syntactic development.

  3. The Dutch language anterior cruciate ligament return to sport after injury scale (ACL-RSI) - validity and reliability.

    Science.gov (United States)

    Slagers, Anton J; Reininga, Inge H F; van den Akker-Scheek, Inge

    2017-02-01

    The ACL-Return to Sport after Injury scale (ACL-RSI) measures athletes' emotions, confidence in performance, and risk appraisal in relation to return to sport after ACL reconstruction. The aim of this study was to assess the validity and reliability of the Dutch version of the ACL-RSI (ACL-RSI(NL)). A total of 150 patients, who were 3-16 months postoperative, completed the ACL-RSI(NL) and 5 other questionnaires regarding psychological readiness to return to sports, knee-specific physical functioning, kinesiophobia, and health-specific locus of control. Construct validity of the ACL-RSI(NL) was determined with factor analysis and by exploring 10 hypotheses regarding correlations between the ACL-RSI(NL) and the other questionnaires. For test-retest reliability, 107 patients (5-16 months postoperative) completed the ACL-RSI(NL) again 2 weeks after the first administration. Cronbach's alpha, the Intraclass Correlation Coefficient (ICC), SEM, and SDC were calculated. Bland-Altman analysis was conducted to assess bias between test and retest. Nine hypotheses (90%) were confirmed, indicating good construct validity. The ACL-RSI(NL) showed good internal consistency (Cronbach's alpha 0.94) and test-retest reliability (ICC 0.93). The SEM was 5.5 and the SDC was 15. A significant bias of 3.2 points between test and retest was found. Therefore, the ACL-RSI(NL) can be used to investigate psychological factors relevant to returning to sport after ACL reconstruction.
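
    The scale statistics reported above (Cronbach's alpha, SEM, SDC) can be reproduced from raw scores with a few lines of code. The following minimal Python sketch, using a hypothetical item-score matrix and the reported ICC value, shows how these quantities are commonly computed; it illustrates the standard formulas only and is not the study authors' actual analysis script.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_subjects x n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_vars.sum() / total_var)

def sem_and_sdc(total_scores: np.ndarray, icc: float) -> tuple:
    """Standard error of measurement and smallest detectable change.

    SEM = SD * sqrt(1 - ICC); SDC (individual level) = 1.96 * sqrt(2) * SEM.
    """
    sd = np.std(total_scores, ddof=1)
    sem = sd * np.sqrt(1.0 - icc)
    return sem, 1.96 * np.sqrt(2.0) * sem

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    scores = rng.integers(0, 11, size=(150, 12))   # hypothetical 12-item scale, 150 patients
    print("alpha:", round(cronbach_alpha(scores), 2))
    print("SEM, SDC:", sem_and_sdc(scores.sum(axis=1), icc=0.93))
```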

  4. Natural Language Search Interfaces: Health Data Needs Single-Field Variable Search

    Science.gov (United States)

    Smith, Sam; Sufi, Shoaib; Goble, Carole; Buchan, Iain

    2016-01-01

    Background Data discovery, particularly the discovery of key variables and their inter-relationships, is key to secondary data analysis, and in turn, the evolving field of data science. Interface designers have presumed that their users are domain experts, and so they have provided complex interfaces to support these "experts." Such interfaces hark back to a time when searches needed to be accurate the first time, as there was a high computational cost associated with each search. Our work is part of a governmental research initiative between the medical and social research funding bodies to improve the use of social data in medical research. Objective The cross-disciplinary nature of data science can make no assumptions regarding the domain expertise of a particular scientist, whose interests may intersect multiple domains. Here we consider the common requirement for scientists to seek archived data for secondary analysis. This has more in common with the search needs of the "Google generation" than with their single-domain, single-tool forebears. Our study compares a Google-like interface with traditional ways of searching for noncomplex health data in a data archive. Methods Two user interfaces are evaluated for the same set of tasks in extracting data from surveys stored in the UK Data Archive (UKDA). One interface, Web search, is "Google-like," enabling users to browse, search for, and view metadata about study variables, whereas the other, traditional search, has a standard multi-option user interface. Results Using a comprehensive set of tasks with 20 volunteers, we found that the Web search interface met data discovery needs and expectations better than the traditional search. A task × interface repeated measures analysis showed a main effect indicating that answers found through the Web search interface were more likely to be correct (F 1,19=37.3, Peffect of task (F 3,57=6.3, Pinterface (F 1,19=18.0, Peffect of task (F 2,38=4.1, P=.025, Greenhouse

  5. Human reliability

    International Nuclear Information System (INIS)

    Embrey, D.E.

    1987-01-01

    Concepts and techniques of human reliability have been developed and are used mostly in probabilistic risk assessment. For this, the major application of human reliability assessment has been to identify the human errors which have a significant effect on the overall safety of the system and to quantify the probability of their occurrence. Some of the major issues within human reliability studies are reviewed and it is shown how these are applied to the assessment of human failures in systems. This is done under the following headings; models of human performance used in human reliability assessment, the nature of human error, classification of errors in man-machine systems, practical aspects, human reliability modelling in complex situations, quantification and examination of human reliability, judgement based approaches, holistic techniques and decision analytic approaches. (UK)

  6. In silico Evolutionary Developmental Neurobiology and the Origin of Natural Language

    Science.gov (United States)

    Szathmáry, Eörs; Szathmáry, Zoltán; Ittzés, Péter; Orbán, Gergő; Zachár, István; Huszár, Ferenc; Fedor, Anna; Varga, Máté; Számadó, Szabolcs

    It is justified to assume that part of our genetic endowment contributes to our language skills, yet it is impossible to tell at this moment exactly how genes affect the language faculty. We complement experimental biological studies by an in silico approach in that we simulate the evolution of neuronal networks under selection for language-related skills. At the heart of this project is the Evolutionary Neurogenetic Algorithm (ENGA) that is deliberately biomimetic. The design of the system was inspired by important biological phenomena such as brain ontogenesis, neuron morphologies, and indirect genetic encoding. Neuronal networks were selected and were allowed to reproduce as a function of their performance in the given task. The selected neuronal networks in all scenarios were able to solve the communication problem they had to face. The most striking feature of the model is that it works with highly indirect genetic encoding--just as brains do.

  7. Mirror neurons and the social nature of language: the neural exploitation hypothesis.

    Science.gov (United States)

    Gallese, Vittorio

    2008-01-01

    This paper discusses the relevance of the discovery of mirror neurons in monkeys and of the mirror neuron system in humans to a neuroscientific account of primates' social cognition and its evolution. It is proposed that mirror neurons and the functional mechanism they underpin, embodied simulation, can ground within a unitary neurophysiological explanatory framework important aspects of human social cognition. In particular, the main focus is on language, here conceived according to a neurophenomenological perspective, grounding meaning on the social experience of action. A neurophysiological hypothesis--the "neural exploitation hypothesis"--is introduced to explain how key aspects of human social cognition are underpinned by brain mechanisms originally evolved for sensorimotor integration. It is proposed that these mechanisms were later on adapted as new neurofunctional architecture for thought and language, while retaining their original functions as well. By neural exploitation, social cognition and language can be linked to the experiential domain of action.

  8. Impact of Climate Change on Natural Snow Reliability, Snowmaking Capacities, and Wind Conditions of Ski Resorts in Northeast Turkey: A Dynamical Downscaling Approach

    Directory of Open Access Journals (Sweden)

    Osman Cenk Demiroglu

    2016-04-01

    Many ski resorts worldwide are going through deteriorating snow cover conditions due to anthropogenic warming trends. As the natural and the artificially supported, i.e., technical, snow reliability of ski resorts diminishes, the industry approaches a deadlock. For this reason, impact assessment studies have become vital for understanding the vulnerability of ski tourism. This study considers three resorts at one of the rapidly emerging ski destinations, Northeast Turkey, for snow reliability analyses. Initially, one global circulation model is dynamically downscaled by using the regional climate model RegCM4.4 for the 1971–2000 and 2021–2050 periods along the RCP4.5 greenhouse gas concentration pathway. Next, the projected climate outputs are converted into indicators of natural snow reliability, snowmaking capacity, and wind conditions. The results show an overall decline in the frequencies of naturally snow-reliable days and in snowmaking capacities between the two periods. Despite the decrease, only the lower altitudes of one ski resort would face the risk of losing natural snow reliability, and snowmaking could still compensate for forming the base layer before the critical New Year's week. On the other hand, adverse high wind conditions improve so as to reduce the number of lift closure days at all resorts. Overall, this particular region seems to be relatively resilient against climate change.

  9. Computing Accurate Grammatical Feedback in a Virtual Writing Conference for German-Speaking Elementary-School Children: An Approach Based on Natural Language Generation

    Science.gov (United States)

    Harbusch, Karin; Itsova, Gergana; Koch, Ulrich; Kuhner, Christine

    2009-01-01

    We built a natural language processing (NLP) system implementing a "virtual writing conference" for elementary-school children, with German as the target language. Currently, state-of-the-art computer support for writing tasks is restricted to multiple-choice questions or quizzes because automatic parsing of the often ambiguous and fragmentary…

  10. Learning homophones in context: Easy cases are favored in the lexicon of natural languages.

    Science.gov (United States)

    Dautriche, Isabelle; Fibla, Laia; Fievet, Anne-Caroline; Christophe, Anne

    2018-08-01

    Even though ambiguous words are common in languages, children find it hard to learn homophones, where a single label applies to several distinct meanings (e.g., Mazzocco, 1997). The present work addresses this apparent discrepancy between learning abilities and typological pattern, with respect to homophony in the lexicon. In a series of five experiments, 20-month-old French children easily learnt a pair of homophones if the two meanings associated with the phonological form belonged to different syntactic categories, or to different semantic categories. However, toddlers failed to learn homophones when the two meanings were distinguished only by different grammatical genders. In parallel, we analyzed the lexicon of four languages, Dutch, English, French and German, and observed that homophones are distributed non-arbitrarily in the lexicon, such that easily learnable homophones are more frequent than hard-to-learn ones: pairs of homophones are preferentially distributed across syntactic and semantic categories, but not across grammatical gender. We show that learning homophones is easier than previously thought, at least when the meanings of the same phonological form are made sufficiently distinct by their syntactic or semantic context. Following this, we propose that this learnability advantage translates into the overall structure of the lexicon, i.e., the kinds of homophones present in languages exhibit the properties that make them learnable by toddlers, thus allowing them to remain in languages. Copyright © 2018 Elsevier Inc. All rights reserved.

  11. Implementation of Danish in the Natural Language Generator of Angus2

    DEFF Research Database (Denmark)

    Larsen, Søren Støvelbæk; Fihl, Preben; Moeslund, Thomas B.

    The purpose of this technical report is to cover the implementation of the Danish language and grammar in the Angus2 software. This includes a brief description of the Angus2 software, the Danish grammar relevant to the implementation in Angus2, and a detailed description of how...

  12. Real versus template-based Natural Language Generation: a false opposition?

    NARCIS (Netherlands)

    van Deemter, Kees; Krahmer, Emiel; Theune, Mariet

    2005-01-01

    This paper challenges the received wisdom that template-based approaches to the generation of language are necessarily inferior to other approaches as regards their maintainability, linguistic well-foundedness and quality of output. Some recent NLG systems that call themselves 'template-based' will

  13. INTEGRATING CORPUS-BASED RESOURCES AND NATURAL LANGUAGE PROCESSING TOOLS INTO CALL

    Directory of Open Access Journals (Sweden)

    Pascual Cantos Gomez

    2002-06-01

    This paper aims at presenting a survey of computational linguistic tools presently available but whose potential has been neither fully considered nor exploited to the full in modern CALL. It starts with a discussion of the rationale of DDL for language learning, presenting typical DDL activities, DDL software and potential extensions of non-typical DDL software (electronic dictionaries and electronic dictionary facilities) to DDL. An extended section is devoted to describing NLP technology and how it can be integrated into CALL, within already existing software or as stand-alone resources. A range of NLP tools is presented (MT programs, taggers, lemmatizers, parsers and speech technologies), with special emphasis on tagged concordancing. The paper finishes with a number of reflections and ideas on how language technologies can be used efficiently within the language learning context and how extensive exploration and integration of these technologies might change and extend both modern CALL and the present language learning paradigm.

  14. The Sentence Fairy: A Natural-Language Generation System to Support Children's Essay Writing

    Science.gov (United States)

    Harbusch, Karin; Itsova, Gergana; Koch, Ulrich; Kuhner, Christine

    2008-01-01

    We built an NLP system implementing a "virtual writing conference" for elementary-school children, with German as the target language. Currently, state-of-the-art computer support for writing tasks is restricted to multiple-choice questions or quizzes because automatic parsing of the often ambiguous and fragmentary texts produced by pupils…

  15. School Meaning Systems: The Symbiotic Nature of Culture and "Language-In-Use"

    Science.gov (United States)

    Abawi, Lindy

    2013-01-01

    Recent research has produced evidence to suggest a strong reciprocal link between school context-specific language constructions that reflect a school's vision and schoolwide pedagogy, and the way that meaning making occurs, and a school's culture is characterized. This research was conducted within three diverse settings: one school in the Sydney…

  16. Genetic and Environmental Links between Natural Language Use and Cognitive Ability in Toddlers

    Science.gov (United States)

    Canfield, Caitlin F.; Edelson, Lisa R.; Saudino, Kimberly J.

    2017-01-01

    Although the phenotypic correlation between language and nonverbal cognitive ability is well-documented, studies examining the etiology of the covariance between these abilities are scant, particularly in very young children. The goal of this study was to address this gap in the literature by examining the genetic and environmental links between…

  17. Reliability and Validation of the International Consultation on Incontinence Questionnaire in Over Active Bladder to Persian Language.

    Science.gov (United States)

    Sari Motlagh, Reza; Hajebrahimi, Sakineh; Sadeghi-Bazargani, Homayoun; Joodi Tutunsaz, Javad

    2015-05-01

    Overactive bladder (OAB) syndrome is common worldwide in both men and women. Correct diagnosis and accurate measurement of symptom severity and of patients' quality of life are necessary to ensure proper treatment and to facilitate sound relationships among patients, researchers and doctors. The International Consultation on Incontinence Questionnaire in Over Active Bladder (ICIQ-OAB) is a concise and robust tool to evaluate the symptoms of OAB and their effects on patients' quality of life and treatment results. The objective of this study was to translate and validate a simple and robust tool that could be used in clinics and research. First, the original British English questionnaire was translated into Persian by two bilingual, native Persian-speaking translators. Then the Persian version was back-translated into English, and a native English speaker studied and compared the questionnaire with the original version. Finally, the translated and corrected Persian version was finalized by a research team. Content validity of the items, and whether the questions could convey the main concept to readers, was assessed with the Modified Content Validity Index (MCVI). Reliability was calculated with Cronbach's α coefficient. Test-retest reliability was evaluated in 50 participants by calculating the Kendall correlation coefficient. The modified content validity index was > 0.78 for all of the questions. Cronbach's α coefficient was 0.76 for all of the participants. The Kendall correlation coefficient for the test-retest assessment was 0.66. Both indicate the reliability of this questionnaire. The Persian version of the ICIQ-OAB questionnaire is a simple and robust tool for research, treatment and screening purposes. © 2014 Wiley Publishing Asia Pty Ltd.

  18. Detecting Novel and Emerging Drug Terms Using Natural Language Processing: A Social Media Corpus Study.

    Science.gov (United States)

    Simpson, Sean S; Adams, Nikki; Brugman, Claudia M; Conners, Thomas J

    2018-01-08

    With the rapid development of new psychoactive substances (NPS) and changes in the use of more traditional drugs, it is increasingly difficult for researchers and public health practitioners to keep up with emerging drugs and drug terms. Substance use surveys and diagnostic tools need to be able to ask about substances using the terms that drug users themselves are likely to be using. Analyses of social media may offer new ways for researchers to uncover and track changes in drug terms in near real time. This study describes the initial results from an innovative collaboration between substance use epidemiologists and linguistic scientists employing techniques from the field of natural language processing to examine drug-related terms in a sample of tweets from the United States. The objective of this study was to assess the feasibility of using distributed word-vector embeddings trained on social media data to uncover previously unknown (to researchers) drug terms. In this pilot study, we trained a continuous bag of words (CBOW) model of distributed word-vector embeddings on a Twitter dataset collected during July 2016 (roughly 884.2 million tokens). We queried the trained word embeddings for terms with high cosine similarity (a proxy for semantic relatedness) to well-known slang terms for marijuana to produce a list of candidate terms likely to function as slang terms for this substance. This candidate list was then compared with an expert-generated list of marijuana terms to assess the accuracy and efficacy of using word-vector embeddings to search for novel drug terminology. The method described here produced a list of 200 candidate terms for the target substance (marijuana). Of these 200 candidates, 115 were determined to in fact relate to marijuana (65 terms for the substance itself, 50 terms related to paraphernalia). This included 30 terms which were used to refer to the target substance in the corpus yet did not appear on the expert-generated list and were
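
    The embedding step described above can be sketched in a few lines. The snippet below is only an illustration under assumed file names, seed terms and hyperparameters (not the authors' actual configuration): it trains a CBOW word2vec model with gensim on tokenized tweets and then queries it for the terms most cosine-similar to known slang seeds.

```python
from gensim.models import Word2Vec

def load_tweets(path: str):
    """Yield whitespace-tokenized, lower-cased tweets from a text file (one tweet per line)."""
    with open(path, encoding="utf-8") as handle:
        for line in handle:
            tokens = line.lower().split()
            if tokens:
                yield tokens

sentences = list(load_tweets("tweets_july2016.txt"))  # hypothetical corpus file

# sg=0 selects the CBOW architecture (sg=1 would be skip-gram).
model = Word2Vec(sentences, vector_size=200, window=5, min_count=25, sg=0, workers=4)

seed_terms = ["weed", "kush"]  # illustrative seed slang terms for the target substance
seeds_in_vocab = [t for t in seed_terms if t in model.wv]
candidates = model.wv.most_similar(positive=seeds_in_vocab, topn=200)
for term, cosine in candidates[:20]:
    print(f"{term}\t{cosine:.3f}")
```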

  19. On the relation between dependency distance, crossing dependencies, and parsing. Comment on "Dependency distance: a new perspective on syntactic patterns in natural languages" by Haitao Liu et al.

    Science.gov (United States)

    Gómez-Rodríguez, Carlos

    2017-07-01

    Liu et al. [1] provide a comprehensive account of research on dependency distance in human languages. While the article is a very rich and useful report on this complex subject, here I will expand on a few specific issues where research in computational linguistics (specifically natural language processing) can inform DDM research, and vice versa. These aspects have not been explored much in [1] or elsewhere, probably due to the little overlap between both research communities, but they may provide interesting insights for improving our understanding of the evolution of human languages, the mechanisms by which the brain processes and understands language, and the construction of effective computer systems to achieve this goal.

  20. Context Analysis of Customer Requests using a Hybrid Adaptive Neuro Fuzzy Inference System and Hidden Markov Models in the Natural Language Call Routing Problem

    Science.gov (United States)

    Rustamov, Samir; Mustafayev, Elshan; Clements, Mark A.

    2018-04-01

    The context analysis of customer requests in a natural language call routing problem is investigated in this paper. One of the most significant problems in natural language call routing is the comprehension of the client's request. With the aim of finding a solution to this issue, hybrid HMM and ANFIS models are examined. Combining different types of models (ANFIS and HMM) can prevent the system from misunderstanding the user's intention in a dialogue system. Based on these models, the hybrid system may be employed in various language and call routing domains, because no lexical or syntactic analysis is used in the classification process.

  1. Context Analysis of Customer Requests using a Hybrid Adaptive Neuro Fuzzy Inference System and Hidden Markov Models in the Natural Language Call Routing Problem

    Directory of Open Access Journals (Sweden)

    Rustamov Samir

    2018-04-01

    The context analysis of customer requests in a natural language call routing problem is investigated in this paper. One of the most significant problems in natural language call routing is the comprehension of the client's request. With the aim of finding a solution to this issue, hybrid HMM and ANFIS models are examined. Combining different types of models (ANFIS and HMM) can prevent the system from misunderstanding the user's intention in a dialogue system. Based on these models, the hybrid system may be employed in various language and call routing domains, because no lexical or syntactic analysis is used in the classification process.
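
    The two records above do not spell out how the HMM and ANFIS outputs are combined, so the sketch below illustrates only one generic option, score-level fusion: two independent intent models (standing in for the paper's HMM and ANFIS components) each score the caller's request against every routing destination, and their normalized scores are averaged before a route is chosen. The intents, scores and weights are illustrative assumptions, not the published architecture.

```python
import numpy as np

INTENTS = ["billing", "tech_support", "new_account"]

def normalize(scores: np.ndarray) -> np.ndarray:
    """Turn raw model scores into a probability-like distribution (softmax)."""
    exp = np.exp(scores - scores.max())
    return exp / exp.sum()

def route(hmm_scores: np.ndarray, anfis_scores: np.ndarray, w_hmm: float = 0.5) -> str:
    """Combine the two models' normalized scores and return the winning intent."""
    fused = w_hmm * normalize(hmm_scores) + (1.0 - w_hmm) * normalize(anfis_scores)
    return INTENTS[int(fused.argmax())]

# Example: the two models disagree; fusion resolves the ambiguity.
print(route(np.array([2.1, 1.9, 0.3]), np.array([0.2, 1.5, 0.4])))  # -> "tech_support"
```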

  2. A natural language query system for Hubble Space Telescope proposal selection

    Science.gov (United States)

    Hornick, Thomas; Cohen, William; Miller, Glenn

    1987-01-01

    The proposal selection process for the Hubble Space Telescope is assisted by a robust and easy-to-use query program (TACOS). The system parses an English-subset language sentence regardless of the order of the keyword phrases, allowing the user greater flexibility than a standard command query language. Capabilities for macro and procedure definition are also integrated. The system was designed for flexibility in both use and maintenance. In addition, TACOS can be applied to any knowledge domain that can be expressed in terms of a single relation. The system was implemented mostly in Common LISP. The TACOS design is described in detail, with particular attention given to the implementation methods of sentence processing.
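
    The order-independent parsing behaviour described above can be illustrated with a small sketch. TACOS itself was written in Common LISP; the Python snippet below is only a toy analogue under assumed slot names and patterns, showing how keyword phrases can be extracted from a restricted-English query regardless of the order in which they appear.

```python
import re

# Hypothetical slot patterns for a proposal-selection query language.
PATTERNS = {
    "investigator": re.compile(r"\bby\s+([A-Z][a-z]+)"),
    "instrument":   re.compile(r"\bwith\s+(?:the\s+)?(?P<val>[A-Za-z]+\s+camera)", re.IGNORECASE),
    "cycle":        re.compile(r"\bcycle\s+(\d+)", re.IGNORECASE),
}

def parse_query(sentence: str) -> dict:
    """Extract known keyword phrases from the sentence, in any order."""
    slots = {}
    for field, pattern in PATTERNS.items():
        match = pattern.search(sentence)
        if match:
            slots[field] = match.groupdict().get("val") or match.group(1)
    return slots

# The same slots are recovered regardless of phrase order.
print(parse_query("list proposals by Miller with the planetary camera for cycle 2"))
print(parse_query("for cycle 2, list proposals with the planetary camera by Miller"))
```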

  3. Natural Language Processing Systems Evaluation Workshop Held in Berkely, California on 18 June 1991

    Science.gov (United States)

    1991-12-01

    re~arded as -a fairly complete dictionary contains about 18,000 itemsw at soluition to the domain-restricted task at tzanlating present, and will be... dictionary access and so on, with an article. Unfortunately, the Weidner system did but as time goes on, one might imagine functionality not know that...superfast type. looped tht it A31l be built with taste by peo. writer ought to be possible in the monolingual case pie who understand languages and

  4. FMS: A Format Manipulation System for Automatic Production of Natural Language Documents, Second Edition. Final Report.

    Science.gov (United States)

    Silver, Steven S.

    FMS/3 is a system for producing hard copy documentation at high speed from free format text and command input. The system was originally written in assembler language for a 12K IBM 360 model 20 using a high speed 1403 printer with the UCS-TN chain option (upper and lower case). Input was from an IBM 2560 Multi-function Card Machine. The model 20…

  5. Descriptive Metaphysics, Natural Language Metaphysics, Sapir-Whorf, and All That Stuff: Evidence from the Mass-Count Distinction

    Directory of Open Access Journals (Sweden)

    Francis Jeffry Pelletier

    2010-12-01

    Strawson (1959) described ‘descriptive metaphysics’, Bach (1986a) described ‘natural language metaphysics’, Sapir (1929) and Whorf (1940a,b, 1941) describe, well, Sapir-Whorfianism. And there are other views concerning the relation between correct semantic analysis of linguistic phenomena and the “reality” that is supposed to be thereby described. I think some considerations from the analyses of the mass-count distinction can shed some light on that very dark topic. References: Bach, Emmon. 1986a. ‘Natural Language Metaphysics’. In Ruth Barcan Marcus, G.J.W. Dorn & Paul Weingartner (eds.), Logic, Methodology, and Philosophy of Science, VII, 573–595. Amsterdam: North Holland. Bach, Emmon. 1986b. ‘The Algebra of Events’. Linguistics and Philosophy 9: 5–16. Berger, Peter & Luckmann, Thomas. 1966. The Social Construction of Reality: A Treatise in the Sociology of Knowledge. New York: Doubleday. Boroditsky, Lera, Schmidt, Lauren & Phillips, Webb. 2003. ‘Sex, Syntax, and Semantics’. In Dedre Gentner & Susan Goldin-Meadow (eds.), Language in Mind: Advances in the Study of Language and Cognition, 59–80. Cambridge, MA: MIT Press. Cheng, L. & Sybesma, R. 1999. ‘Bare and Not-So-Bare Nouns and the Structure of NP’. Linguistic Inquiry 30: 509–542. http://dx.doi.org/10.1162/002438999554192 Chierchia, Gennaro. 1998a. ‘Reference to Kinds across Languages’. Natural Language Semantics 6: 339–405. http://dx.doi.org/10.1023/A:1008324218506 Chierchia, Gennaro. 1998b. ‘Plurality of Mass Nouns and the Notion of “Semantic Parameter”’. In S. Rothstein (ed.), Events and Grammar, 53–103. Dordrecht: Kluwer. Chierchia, Gennaro. 2010. ‘Mass Nouns, Vagueness and Semantic Variation’. Synthèse 174: 99–149. http://dx.doi.org/10.1007/s11229-009-9686-6 Doetjes, Jenny. 1997. Quantifiers and Selection: On the Distribution of Quantifying Expressions in French, Dutch and English. Ph.D. thesis, University of Leiden, Holland

  6. Treating conduct disorder: An effectiveness and natural language analysis study of a new family-centred intervention program.

    Science.gov (United States)

    Stevens, Kimberly A; Ronan, Prof Kevin; Davies, Gene

    2017-05-01

    This paper reports on a new family-centred, feedback-informed intervention focused on evaluating therapeutic outcomes and language changes across treatment for conduct disorder (CD). The study included 26 youth and families from a larger randomised, controlled trial (Ronan et al., in preparation). Outcome measures reflected family functioning/youth compliance, delinquency, and family goal attainment. First- and last-treatment session audio files were transcribed into more than 286,000 words and evaluated through the Linguistic Inquiry and Word Count analysis program (Pennebaker et al., 2007). Significant outcomes across family functioning/youth compliance, delinquency, goal attainment and word usage reflected moderate to strong effect sizes. Benchmarking findings also revealed reduced time of treatment delivery compared to a gold-standard approach. Linguistic analysis revealed specific language changes across treatment. For caregivers, increased first-person, action-oriented, present-tense, and assent-type words and decreased sadness words were found; for youth, a significant reduction in the use of leisure words. This study is the first to use lexical analyses of natural language to assess change across treatment for conduct-disordered youth and families. Some findings provided strong support for program tenets; others, more speculative support. Copyright © 2016. Published by Elsevier B.V.

  7. The dynamic nature of motivation in language learning: A classroom perspective

    Directory of Open Access Journals (Sweden)

    Mirosław Pawlak

    2012-10-01

    When we examine the empirical investigations of motivation in second and foreign language learning, even those drawing upon the latest theoretical paradigms, such as the L2 motivational self system (Dörnyei, 2009), it becomes clear that many of them still fail to take account of its dynamic character and temporal variation. This may be surprising in view of the fact that the need to adopt such a process-oriented approach has been emphasized by a number of theorists and researchers (e.g., Dörnyei, 2000, 2001, 2009; Ushioda, 1996; Williams & Burden, 1997), and it lies at the heart of the model of second language motivation proposed by Dörnyei and Ottó (1998). It is also unfortunate that few research projects have addressed the question of how motivation changes during a language lesson as well as over a series of lessons, and what factors might be responsible for fluctuations of this kind. The present paper aims to rectify this problem by reporting the findings of a classroom-based study which investigated the changes in the motivation of 28 senior high school students, both in terms of their goals and intentions, and their interest and engagement in classroom activities and tasks, over a period of four weeks. The analysis of the data, collected by means of questionnaires, observations and interviews, showed that although the reasons for learning remain relatively stable, the intensity of motivation is indeed subject to variation on a minute-to-minute basis, and this fact has to be recognized even in large-scale, cross-sectional research in this area.

  8. im4Things: An Ontology-Based Natural Language Interface for Controlling Devices in the Internet of Things

    KAUST Repository

    Noguera-Arnaldos, José Ángel

    2017-03-14

    The Internet of Things (IoT) offers opportunities for new applications and services that enable users to access and control their working and home environment from local and remote locations, aiming to perform daily life activities in an easy way. However, the IoT also introduces new challenges, some of which arise from the large range of devices currently available and the heterogeneous interfaces provided for their control. The control and management of this variety of devices and interfaces represent a new challenge for non-expert users, instead of making their life easier. Based on this understanding, in this work we present a natural language interface for the IoT, which takes advantage of Semantic Web technologies to allow non-expert users to control their home environment through an instant messaging application in an easy and intuitive way. We conducted several experiments with a group of end users aiming to evaluate the effectiveness of our approach to control home appliances by means of natural language instructions. The evaluation results proved that without the need for technicalities, the user was able to control the home appliances in an efficient way.

  9. Dual Sticky Hierarchical Dirichlet Process Hidden Markov Model and Its Application to Natural Language Description of Motions.

    Science.gov (United States)

    Hu, Weiming; Tian, Guodong; Kang, Yongxin; Yuan, Chunfeng; Maybank, Stephen

    2017-09-25

    In this paper, a new nonparametric Bayesian model called the dual sticky hierarchical Dirichlet process hidden Markov model (HDP-HMM) is proposed for mining activities from a collection of time series data such as trajectories. All the time series data are clustered. Each cluster of time series data, corresponding to a motion pattern, is modeled by an HMM. Our model postulates a set of HMMs that share a common set of states (topics in an analogy with topic models for document processing), but have unique transition distributions. For the application to motion trajectory modeling, topics correspond to motion activities. The learnt topics are clustered into atomic activities which are assigned predicates. We propose a Bayesian inference method to decompose a given trajectory into a sequence of atomic activities. On combining the learnt sources and sinks, semantic motion regions, and the learnt sequence of atomic activities, the action represented by the trajectory can be described in natural language in as automatic a way as possible. The effectiveness of our dual sticky HDP-HMM is validated on several trajectory datasets. The effectiveness of the natural language descriptions for motions is demonstrated on the vehicle trajectories extracted from a traffic scene.
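
    The full dual sticky HDP-HMM is a nonparametric Bayesian model that cannot be reproduced in a short snippet, but the basic idea in the record above (model each motion pattern with an HMM and assign a new trajectory to the pattern whose model explains it best) can be illustrated with a deliberately simplified sketch. The synthetic trajectories, the fixed number of patterns and the hyperparameters below are assumptions for illustration only.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(42)

def synthetic_trajectory(direction: np.ndarray, length: int = 50) -> np.ndarray:
    """A noisy 2-D trajectory drifting in a given direction (stand-in for a tracked object)."""
    steps = direction + 0.1 * rng.standard_normal((length, 2))
    return np.cumsum(steps, axis=0)

# Two hypothetical motion patterns: roughly "eastbound" and "northbound" traffic.
patterns = {"eastbound": np.array([1.0, 0.0]), "northbound": np.array([0.0, 1.0])}
models = {}
for name, direction in patterns.items():
    train = [synthetic_trajectory(direction) for _ in range(20)]
    X, lengths = np.vstack(train), [len(t) for t in train]
    models[name] = GaussianHMM(n_components=3, covariance_type="diag", n_iter=50).fit(X, lengths)

# Assign a new trajectory to the best-scoring pattern (higher log-likelihood wins).
new_traj = synthetic_trajectory(patterns["northbound"])
best = max(models, key=lambda name: models[name].score(new_traj))
print("assigned pattern:", best)
```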

  10. A natural language-based presentation of cognitive stimulation to people with dementia in assistive technology: A pilot study.

    Science.gov (United States)

    Dethlefs, Nina; Milders, Maarten; Cuayáhuitl, Heriberto; Al-Salkini, Turkey; Douglas, Lorraine

    2017-12-01

    Currently, an estimated 36 million people worldwide are affected by Alzheimer's disease or related dementias. In the absence of a cure, non-pharmacological interventions, such as cognitive stimulation, which slow down the rate of deterioration can benefit people with dementia and their caregivers. Such interventions have shown to improve well-being and slow down the rate of cognitive decline. It has further been shown that cognitive stimulation in interaction with a computer is as effective as with a human. However, the need to operate a computer often represents a difficulty for the elderly and stands in the way of widespread adoption. A possible solution to this obstacle is to provide a spoken natural language interface that allows people with dementia to interact with the cognitive stimulation software in the same way as they would interact with a human caregiver. This makes the assistive technology accessible to users regardless of their technical skills and provides a fully intuitive user experience. This article describes a pilot study that evaluated the feasibility of computer-based cognitive stimulation through a spoken natural language interface. Prototype software was evaluated with 23 users, including healthy elderly people and people with dementia. Feedback was overwhelmingly positive.

  11. On the nature and evolution of the neural bases of human language

    Science.gov (United States)

    Lieberman, Philip

    2002-01-01

    The traditional theory equating the brain bases of language with Broca's and Wernicke's neocortical areas is wrong. Neural circuits linking activity in anatomically segregated populations of neurons in subcortical structures and the neocortex throughout the human brain regulate complex behaviors such as walking, talking, and comprehending the meaning of sentences. When we hear or read a word, neural structures involved in the perception or real-world associations of the word are activated as well as posterior cortical regions adjacent to Wernicke's area. Many areas of the neocortex and subcortical structures support the cortical-striatal-cortical circuits that confer complex syntactic ability, speech production, and a large vocabulary. However, many of these structures also form part of the neural circuits regulating other aspects of behavior. For example, the basal ganglia, which regulate motor control, are also crucial elements in the circuits that confer human linguistic ability and abstract reasoning. The cerebellum, traditionally associated with motor control, is active in motor learning. The basal ganglia are also key elements in reward-based learning. Data from studies of Broca's aphasia, Parkinson's disease, hypoxia, focal brain damage, and a genetically transmitted brain anomaly (the putative "language gene," family KE), and from comparative studies of the brains and behavior of other species, demonstrate that the basal ganglia sequence the discrete elements that constitute a complete motor act, syntactic process, or thought process. Imaging studies of intact human subjects and electrophysiologic and tracer studies of the brains and behavior of other species confirm these findings. As Dobzansky put it, "Nothing in biology makes sense except in the light of evolution" (cited in Mayr, 1982). That applies with as much force to the human brain and the neural bases of language as it does to the human foot or jaw. The converse follows: the mark of evolution on

  12. Knowledge-Based Natural Language Understanding: A AAAI-87 Survey Talk

    Science.gov (United States)

    1991-01-01

    easily transformed into a regrettable mistake (don’t cry over spilt milk ) if G is not characterized as a fleeting goal and a recovery plan therefore...technical literature is characterized by very dry and literal language. If there is one place where metaphors might not intrude, it must be when people...from the point of view of both evidential support and falsification ? I ask it because you didn’t say anything about it. A: Well, I think there’s a lot

  13. Natural Language Processing Based Instrument for Classification of Free Text Medical Records

    Directory of Open Access Journals (Sweden)

    Manana Khachidze

    2016-01-01

    According to the Ministry of Labor, Health and Social Affairs of Georgia, a new health management system has to be introduced in the near future. In this context arises the problem of structuring and classifying documents containing the full history of medical services provided. The present work introduces an instrument for the classification of medical records in the Georgian language. It is the first attempt at such classification of Georgian-language medical records. In total, 24,855 examination records were studied. The documents were classified into three main groups (ultrasonography, endoscopy, and X-ray) and 13 subgroups using two well-known methods: Support Vector Machine (SVM) and K-Nearest Neighbor (KNN). The results obtained demonstrated that both machine learning methods performed successfully, with SVM performing slightly better. In the process of classification a "shrink" method, based on feature selection, was introduced and applied. At the first stage of classification the results of the "shrink" case were better; however, at the second stage of classification into subclasses 23% of all documents could not be linked to only one definite individual subclass (liver or biliary system) due to common features characterizing these subclasses. The overall results of the study were successful.
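
    The two classifiers compared in the record above (SVM and KNN) over a TF-IDF representation of free text can be sketched with scikit-learn. The toy documents and labels below are illustrative stand-ins, not the Georgian corpus used in the study.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

docs = [
    "liver parenchyma homogeneous, no focal lesions seen on ultrasound",
    "gastroscopy shows mild antral gastritis, no ulcer",
    "chest x-ray: no infiltrates, normal cardiac silhouette",
    "ultrasound of the gallbladder reveals small calculi",
    "colonoscopy reveals sigmoid diverticula without inflammation",
    "x-ray of the left wrist shows no fracture",
] * 20                                     # repeated so the toy split has something to learn from
labels = ["ultrasonography", "endoscopy", "x-ray"] * 40

X_train, X_test, y_train, y_test = train_test_split(docs, labels, test_size=0.25, random_state=0)

for name, clf in {"SVM": LinearSVC(), "KNN": KNeighborsClassifier(n_neighbors=5)}.items():
    pipeline = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), clf)
    pipeline.fit(X_train, y_train)
    print(name, "accuracy:", round(pipeline.score(X_test, y_test), 3))
```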

  14. A Requirements-Based Exploration of Open-Source Software Development Projects--Towards a Natural Language Processing Software Analysis Framework

    Science.gov (United States)

    Vlas, Radu Eduard

    2012-01-01

    Open source projects do have requirements; they are, however, mostly informal, text descriptions found in requests, forums, and other correspondence. Understanding such requirements provides insight into the nature of open source projects. Unfortunately, manual analysis of natural language requirements is time-consuming, and for large projects,…

  15. PREDICATE OF ‘MANGAN’ IN SASAK LANGUAGE: A STUDY OF NATURAL SEMANTIC METALANGUAGE

    Directory of Open Access Journals (Sweden)

    Sarwadi

    2016-11-01

    The aim of this study was to determine the semantic meaning of the predicates Ngajengan, Daharan, Ngelor, Mangan, Ngrodok, Kaken, Suap, Bejijit, Bekeruak, Ngerasak and Nyangklok (all meaning 'eating'), as well as the lexical meaning of each word and the function of the words in sentences, particularly words meaning 'eating' in the Sasak language. The lexical meaning of all these predicates is 'doing something to eat', but the words differ in their usage in sentences. In addition, word choice depends on the subject and the object, and there are predicates that require a tool when expressing the eating of meals or food.

  16. The development of a natural language interface to a geographical information system

    Science.gov (United States)

    Toledo, Sue Walker; Davis, Bruce

    1993-01-01

    This paper will discuss a two and a half year long project undertaken to develop an English-language interface for the geographical information system GRASS. The work was carried out for NASA by a small business, Netrologic, based in San Diego, California, under Phase 1 and 2 Small Business Innovative Research contracts. We consider here the potential value of this system whose current functionality addresses numerical, categorical and boolean raster layers and includes the display of point sets defined by constraints on one or more layers, answers yes/no and numerical questions, and creates statistical reports. It also handles complex queries and lexical ambiguities, and allows temporarily switching to UNIX or GRASS.

  17. Computer simulation as an important approach to explore language universal. Comment on "Dependency distance: a new perspective on syntactic patterns in natural languages" by Haitao Liu et al.

    Science.gov (United States)

    Lu, Qian

    2017-07-01

    Exploring language universals is one of the major goals of linguistic research, which is largely devoted to answering the "Platonic questions" in linguistics, that is, what the knowledge of language is, and how this knowledge is acquired and used. However, if solely guided by linguistic intuition, it is very difficult for syntactic studies to answer these questions, or to achieve abstractions in the scientific sense. This suggests that linguistic analyses based on probability theory may provide effective ways to investigate language universals in terms of biological motivations or cognitive psychological mechanisms. With the view that "language is a human-driven system", Liu, Xu & Liang's review [1] pointed out that dependency distance minimization (DDM), which has been corroborated by big data analysis of corpora, may be a language universal shaped in language evolution, a universal that has a profound effect on syntactic patterns.

  18. Automatic Determination of the Need for Intravenous Contrast in Musculoskeletal MRI Examinations Using IBM Watson's Natural Language Processing Algorithm.

    Science.gov (United States)

    Trivedi, Hari; Mesterhazy, Joseph; Laguna, Benjamin; Vu, Thienkhai; Sohn, Jae Ho

    2018-04-01

    Magnetic resonance imaging (MRI) protocoling can be time- and resource-intensive, and protocols can often be suboptimal dependent upon the expertise or preferences of the protocoling radiologist. Providing a best-practice recommendation for an MRI protocol has the potential to improve efficiency and decrease the likelihood of a suboptimal or erroneous study. The goal of this study was to develop and validate a machine learning-based natural language classifier that can automatically assign the use of intravenous contrast for musculoskeletal MRI protocols based upon the free-text clinical indication of the study, thereby improving efficiency of the protocoling radiologist and potentially decreasing errors. We utilized a deep learning-based natural language classification system from IBM Watson, a question-answering supercomputer that gained fame after challenging the best human players on Jeopardy! in 2011. We compared this solution to a series of traditional machine learning-based natural language processing techniques that utilize a term-document frequency matrix. Each classifier was trained with 1240 MRI protocols plus their respective clinical indications and validated with a test set of 280. Ground truth of contrast assignment was obtained from the clinical record. For evaluation of inter-reader agreement, a blinded second reader radiologist analyzed all cases and determined contrast assignment based on only the free-text clinical indication. In the test set, Watson demonstrated overall accuracy of 83.2% when compared to the original protocol. This was similar to the overall accuracy of 80.2% achieved by an ensemble of eight traditional machine learning algorithms based on a term-document matrix. When compared to the second reader's contrast assignment, Watson achieved 88.6% agreement. When evaluating only the subset of cases where the original protocol and second reader were concordant (n = 251), agreement climbed further to 90.0%. The classifier was

  19. Performance analysis of CRF-based learning for processing WoT application requests expressed in natural language.

    Science.gov (United States)

    Yoon, Young

    2016-01-01

    In this paper, we investigate the effectiveness of a CRF-based learning method for identifying necessary Web of Things (WoT) application components that would satisfy the users' requests issued in natural language. For instance, a user request such as "archive all sports breaking news" can be satisfied by composing a WoT application that consists of ESPN breaking news service and Dropbox as a storage service. We built an engine that can identify the necessary application components by recognizing a main act (MA) or named entities (NEs) from a given request. We trained this engine with the descriptions of WoT applications (called recipes) that were collected from the IFTTT WoT platform. IFTTT hosts over 300 WoT entities that offer thousands of functions referred to as triggers and actions. There are more than 270,000 publicly-available recipes composed with those functions by real users. Therefore, the set of these recipes is well-qualified for the training of our MA and NE recognition engine. We share our unique experience of generating the training and test set from these recipe descriptions and assess the performance of the CRF-based learning method. Based on the performance evaluation, we introduce further research directions.
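
    Tagging each token of a request as a main act (MA), a named entity (NE) or other (O) with a linear-chain CRF can be sketched with sklearn-crfsuite. The tiny hand-labelled training set, the tag scheme and the features below are illustrative assumptions, not the IFTTT-derived training data described in the record.

```python
import sklearn_crfsuite

def token_features(tokens, i):
    """Simple per-token features: the word itself, its neighbours, and a shape cue."""
    word = tokens[i]
    return {
        "word.lower": word.lower(),
        "word.istitle": word.istitle(),
        "prev": tokens[i - 1].lower() if i > 0 else "<BOS>",
        "next": tokens[i + 1].lower() if i < len(tokens) - 1 else "<EOS>",
    }

train = [
    (["archive", "all", "sports", "breaking", "news"], ["MA", "O", "NE", "NE", "NE"]),
    (["save", "new", "photos", "to", "Dropbox"],        ["MA", "O", "NE", "O", "NE"]),
    (["mute", "the", "living", "room", "speaker"],      ["MA", "O", "NE", "NE", "NE"]),
]

X_train = [[token_features(toks, i) for i in range(len(toks))] for toks, _ in train]
y_train = [tags for _, tags in train]

crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1, max_iterations=100)
crf.fit(X_train, y_train)

test = ["archive", "breaking", "news", "to", "Dropbox"]
print(list(zip(test, crf.predict([[token_features(test, i) for i in range(len(test))]])[0])))
```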

  20. Building a Natural Language Processing Tool to Identify Patients With High Clinical Suspicion for Kawasaki Disease from Emergency Department Notes.

    Science.gov (United States)

    Doan, Son; Maehara, Cleo K; Chaparro, Juan D; Lu, Sisi; Liu, Ruiling; Graham, Amanda; Berry, Erika; Hsu, Chun-Nan; Kanegaye, John T; Lloyd, David D; Ohno-Machado, Lucila; Burns, Jane C; Tremoulet, Adriana H

    2016-05-01

    Delayed diagnosis of Kawasaki disease (KD) may lead to serious cardiac complications. We sought to create and test the performance of a natural language processing (NLP) tool, the KD-NLP, in the identification of emergency department (ED) patients for whom the diagnosis of KD should be considered. We developed an NLP tool that recognizes the KD diagnostic criteria based on standard clinical terms and medical word usage using 22 pediatric ED notes augmented by Unified Medical Language System vocabulary. With high suspicion for KD defined as fever and three or more KD clinical signs, KD-NLP was applied to 253 ED notes from children ultimately diagnosed with either KD or another febrile illness. We evaluated KD-NLP performance against ED notes manually reviewed by clinicians and compared the results to a simple keyword search. KD-NLP identified high-suspicion patients with a sensitivity of 93.6% and specificity of 77.5% compared to notes manually reviewed by clinicians. The tool outperformed a simple keyword search (sensitivity = 41.0%; specificity = 76.3%). KD-NLP showed comparable performance to clinician manual chart review for identification of pediatric ED patients with a high suspicion for KD. This tool could be incorporated into the ED electronic health record system to alert providers to consider the diagnosis of KD. KD-NLP could serve as a model for decision support for other conditions in the ED. © 2016 by the Society for Academic Emergency Medicine.
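
    The screening rule described above (fever plus three or more KD clinical signs) can be approximated, very crudely, with keyword matching. The sketch below is only a rule-based stand-in with an assumed vocabulary; the actual KD-NLP tool relies on standard clinical terms augmented with UMLS vocabulary rather than these hand-picked phrases.

```python
import re

KD_SIGNS = {
    "conjunctivitis":  ["conjunctival injection", "red eyes", "conjunctivitis"],
    "oral_changes":    ["strawberry tongue", "cracked lips", "oral mucositis"],
    "rash":            ["polymorphous rash", "rash"],
    "extremity":       ["swollen hands", "swollen feet", "palmar erythema", "desquamation"],
    "lymphadenopathy": ["cervical lymphadenopathy", "enlarged cervical node"],
}
FEVER = ["fever", "febrile"]

def mentions_any(text: str, phrases) -> bool:
    return any(re.search(r"\b" + re.escape(p) + r"\b", text) for p in phrases)

def high_suspicion_kd(note: str) -> bool:
    """Fever plus three or more distinct KD clinical signs."""
    text = note.lower()
    if not mentions_any(text, FEVER):
        return False
    n_signs = sum(mentions_any(text, phrases) for phrases in KD_SIGNS.values())
    return n_signs >= 3

note = ("5 day history of fever, polymorphous rash on the trunk, "
        "bilateral conjunctival injection and cracked lips.")
print(high_suspicion_kd(note))   # True: fever + rash + conjunctivitis + oral changes
```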

  1. Procedures of amino acid sequencing of peptides in natural proteins collection of knowledge and intelligence for construction of reliable chemical inference system

    OpenAIRE

    Kudo, Yoshihiro; Kanaya, Shigehiko

    1994-01-01

    In order to establish a reliable chemical inference system for amino acid sequencing of natural peptides, as many kinds of relevant knowledge and intelligence as possible are collected. Topics are on didemnins, dolastatin 3, TL-119 and/or A-3302-B, mycosubtilin, patellamide A, duramycin (and cinnamycin), bottromycin A2, A19009, galantin I, vancomycin, stenothricin, calf spleen profilin, neocarzinostatin, pancreatic spasmolytic polypeptide, cerebratulus toxin B-IV, RNase U2, ferredoxin ...

  2. Redefining reliability

    International Nuclear Information System (INIS)

    Paulson, S.L.

    1995-01-01

    Want to buy some reliability? The question would have been unthinkable in some markets served by the natural gas business even a few years ago, but in the new gas marketplace, industrial, commercial and even some residential customers have the opportunity to choose from among an array of options about the kind of natural gas service they need--and are willing to pay for. The complexities of this brave new world of restructuring and competition have sent the industry scrambling to find ways to educate and inform its customers about the increased responsibility they will have in determining the level of gas reliability they choose. This article discusses the new options and the new responsibilities of customers, the need for continuous education, and MidAmerican Energy Company's experiment in direct marketing of natural gas.

  3. Reliability engineering

    International Nuclear Information System (INIS)

    Lee, Chi Woo; Kim, Sun Jin; Lee, Seung Woo; Jeong, Sang Yeong

    1993-08-01

    This book starts with the question of what reliability is, covering the origin of reliability problems, the definition of reliability, and the use of reliability. It also deals with probability and the calculation of reliability, the reliability function and failure rate, probability distributions of reliability, the assumption of MTBF, processes of probability distributions, downtime, maintainability and availability, breakdown maintenance and preventive maintenance, design of reliability, design of reliability for prediction and statistics, reliability testing, reliability data, and the design and management of reliability.
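
    As a worked illustration of two of the quantities listed above, under the common constant-failure-rate (exponential) assumption the reliability function is R(t) = exp(-λt) and MTBF = 1/λ. The failure rate and mission time in the sketch below are made-up values, not figures from the book.

```python
import math

failure_rate = 2e-5          # failures per hour (hypothetical constant hazard rate)
mission_time = 10_000.0      # hours

reliability = math.exp(-failure_rate * mission_time)   # probability of surviving the mission
mtbf = 1.0 / failure_rate                              # mean time between failures

print(f"R({mission_time:.0f} h) = {reliability:.3f}")  # ~0.819
print(f"MTBF = {mtbf:.0f} h")                          # 50000 h
```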

  4. Analyzing discourse and text complexity for learning and collaborating a cognitive approach based on natural language processing

    CERN Document Server

    Dascălu, Mihai

    2014-01-01

    With the advent and increasing popularity of Computer Supported Collaborative Learning (CSCL) and e-learning technologies, the need of automatic assessment and of teacher/tutor support for the two tightly intertwined activities of comprehension of reading materials and of collaboration among peers has grown significantly. In this context, a polyphonic model of discourse derived from Bakhtin’s work as a paradigm is used for analyzing both general texts and CSCL conversations in a unique framework focused on different facets of textual cohesion. As specificity of our analysis, the individual learning perspective is focused on the identification of reading strategies and on providing a multi-dimensional textual complexity model, whereas the collaborative learning dimension is centered on the evaluation of participants’ involvement, as well as on collaboration assessment. Our approach based on advanced Natural Language Processing techniques provides a qualitative estimation of the learning process and enhance...

  5. Automated Assessment of Patients' Self-Narratives for Posttraumatic Stress Disorder Screening Using Natural Language Processing and Text Mining.

    Science.gov (United States)

    He, Qiwei; Veldkamp, Bernard P; Glas, Cees A W; de Vries, Theo

    2017-03-01

    Patients' narratives about traumatic experiences and symptoms are useful in clinical screening and diagnostic procedures. In this study, we presented an automated assessment system to screen patients for posttraumatic stress disorder via a natural language processing and text-mining approach. Four machine-learning algorithms-including decision tree, naive Bayes, support vector machine, and an alternative classification approach called the product score model-were used in combination with n-gram representation models to identify patterns between verbal features in self-narratives and psychiatric diagnoses. With our sample, the product score model with unigrams attained the highest prediction accuracy when compared with practitioners' diagnoses. The addition of multigrams contributed most to balancing the metrics of sensitivity and specificity. This article also demonstrates that text mining is a promising approach for analyzing patients' self-expression behavior, thus helping clinicians identify potential patients from an early stage.

  6. Computer based extraction of phenoptypic features of human congenital anomalies from the digital literature with natural language processing techniques.

    Science.gov (United States)

    Karakülah, Gökhan; Dicle, Oğuz; Koşaner, Ozgün; Suner, Aslı; Birant, Çağdaş Can; Berber, Tolga; Canbek, Sezin

    2014-01-01

    The lack of laboratory tests for the diagnosis of most congenital anomalies renders the physical examination of the case crucial for the diagnosis of the anomaly, and cases in the diagnostic phase are mostly evaluated in the light of the literature. In this respect, for accurate diagnosis, it is of great importance to provide the decision maker with decision support by presenting the literature knowledge about a particular case. Here, we demonstrated a methodology for the automated scanning and determination of phenotypic features from case reports related to congenital anomalies in the literature using text and natural language processing methods, and we created the framework of an information source for a potential diagnostic decision support system for congenital anomalies.

  7. Reproducibility in Natural Language Processing: A Case Study of Two R Libraries for Mining PubMed/MEDLINE

    Science.gov (United States)

    Cohen, K. Bretonnel; Xia, Jingbo; Roeder, Christophe; Hunter, Lawrence E.

    2018-01-01

    There is currently a crisis in science related to highly publicized failures to reproduce large numbers of published studies. The current work proposes, by way of case studies, a methodology for moving the study of reproducibility in computational work to a full stage beyond that of earlier work. Specifically, it presents a case study in attempting to reproduce the reports of two R libraries for doing text mining of the PubMed/MEDLINE repository of scientific publications. The main findings are that a rational paradigm for reproduction of natural language processing papers can be established; the advertised functionality was difficult, but not impossible, to reproduce; and reproducibility studies can produce additional insights into the functioning of the published system. Additionally, the work on reproducibility led to the production of novel user-centered documentation that has been accessed 260 times since its publication—an average of once a day per library. PMID:29568821

  8. Reliability and validity of the Spanish Language Wechsler Adult Intelligence Scale (3rd Edition) in a sample of American, urban, Spanish-speaking Hispanics.

    Science.gov (United States)

    Renteria, Laura; Li, Susan Tinsley; Pliskin, Neil H

    2008-05-01

    The utility of the Spanish WAIS-III was investigated by examining its reliability and validity among 100 Spanish-speaking participants. Results indicated that the internal consistency of the subtests was satisfactory, but inadequate for Letter Number Sequencing. Criterion validity was adequate. Convergent and discriminant validity results were generally similar to the North American normative sample. Paired sample t-tests suggested that the WAIS-III may underestimate ability when compared to the criterion measures that were utilized to assess validity. This study provides support for the use of the Spanish WAIS-III in urban Hispanic populations, but also suggests that caution be used when administering specific subtests, due to the nature of the Latin American alphabet and potential test bias.

  9. Validation of Case Finding Algorithms for Hepatocellular Cancer From Administrative Data and Electronic Health Records Using Natural Language Processing.

    Science.gov (United States)

    Sada, Yvonne; Hou, Jason; Richardson, Peter; El-Serag, Hashem; Davila, Jessica

    2016-02-01

    Accurate identification of hepatocellular cancer (HCC) cases from automated data is needed for efficient and valid quality improvement initiatives and research. We validated HCC International Classification of Diseases, 9th Revision (ICD-9) codes, and evaluated whether natural language processing by the Automated Retrieval Console (ARC) for document classification improves HCC identification. We identified a cohort of patients with ICD-9 codes for HCC during 2005-2010 from Veterans Affairs administrative data. Pathology and radiology reports were reviewed to confirm HCC. The positive predictive value (PPV), sensitivity, and specificity of ICD-9 codes were calculated. A split validation study of pathology and radiology reports was performed to develop and validate ARC algorithms. Reports were manually classified as diagnostic of HCC or not. ARC generated document classification algorithms using the Clinical Text Analysis and Knowledge Extraction System. ARC performance was compared with manual classification. PPV, sensitivity, and specificity of ARC were calculated. A total of 1138 patients with HCC were identified by ICD-9 codes. On the basis of manual review, 773 had HCC. The HCC ICD-9 code algorithm had a PPV of 0.67, sensitivity of 0.95, and specificity of 0.93. For a random subset of 619 patients, we identified 471 pathology reports for 323 patients and 943 radiology reports for 557 patients. The pathology ARC algorithm had PPV of 0.96, sensitivity of 0.96, and specificity of 0.97. The radiology ARC algorithm had PPV of 0.75, sensitivity of 0.94, and specificity of 0.68. A combined approach of ICD-9 codes and natural language processing of pathology and radiology reports improves HCC case identification in automated data.
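
    For readers less familiar with the validation metrics reported in this record, the short sketch below shows how positive predictive value, sensitivity, and specificity follow from confusion-matrix counts. The true-positive and false-positive counts echo the ICD-9 figures quoted above, while the false-negative and true-negative counts are invented for illustration.

```python
def ppv_sensitivity_specificity(tp: int, fp: int, fn: int, tn: int):
    """Standard case-finding metrics from confusion-matrix counts."""
    ppv = tp / (tp + fp)           # of flagged cases, fraction truly positive
    sensitivity = tp / (tp + fn)   # of true cases, fraction flagged
    specificity = tn / (tn + fp)   # of non-cases, fraction correctly not flagged
    return ppv, sensitivity, specificity

# 1138 patients flagged by ICD-9 codes, 773 confirmed on manual review (from the
# record above); the false-negative and true-negative counts here are hypothetical.
print(ppv_sensitivity_specificity(tp=773, fp=1138 - 773, fn=40, tn=5000))
```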

  10. What you say is not what you get: arguing for artificial languages instead of natural languages in human robot speech interaction

    NARCIS (Netherlands)

    Mubin, O.; Bartneck, C.; Feijs, L.M.G.

    2009-01-01

    The project described hereunder focuses on the design and implementation of an "Artificial Robotic Interaction Language", where the research goal is to find a balance between the effort necessary from the user to learn a new language and the resulting benefit of optimized automatic speech recognition.

  11. From telegraphic to natural language: an expansion system in a pictogram-based AAC application

    OpenAIRE

    Pahisa Solé, Joan

    2017-01-01

    In this doctoral thesis, we present a compansion system that transforms telegraphic language (sentences formed by uninflected content words), as produced in pictogram-based augmentative and alternative communication (AAC), into natural language in Catalan and Spanish. The system was designed to improve the communication of AAC users, who often have severe speech impairments as well as motor impairments and who use communication methods based...

  12. Natural Language-based Machine Learning Models for the Annotation of Clinical Radiology Reports.

    Science.gov (United States)

    Zech, John; Pain, Margaret; Titano, Joseph; Badgeley, Marcus; Schefflein, Javin; Su, Andres; Costa, Anthony; Bederson, Joshua; Lehar, Joseph; Oermann, Eric Karl

    2018-05-01

    critical finding of 0.951 for unigram BOW versus 0.966 for the best-performing model. The Yule I of the head CT corpus was 34, markedly lower than that of the Reuters corpus (at 103) or I2B2 discharge summaries (at 271), indicating lower linguistic complexity. Conclusion Automated methods can be used to identify findings in radiology reports. The success of this approach benefits from the standardized language of these reports. With this method, a large labeled corpus can be generated for applications such as deep learning. © RSNA, 2018 Online supplemental material is available for this article.

  13. Dependency distance in language evolution. Comment on "Dependency distance: A new perspective on syntactic patterns in natural languages" by Haitao Liu et al.

    Science.gov (United States)

    Liu, Bingli; Chen, Xinying

    2017-07-01

    In the target article [1], Liu et al. provide an informative introduction to dependency distance studies and argue that syntactic patterns in language that relate to dependency distance are associated with human cognitive mechanisms, such as limited working memory and syntax processing. Therefore, such syntactic patterns are probably 'human-driven' language universals. Sufficient evidence based on big data analysis is also given in the article to support this idea. The hypotheses generally seem very convincing yet still need further tests from various perspectives. Diachronic linguistic study based on authentic language data, in our opinion, can be one of those 'further tests'.

  14. Language and human nature: Kurt Goldstein's neurolinguistic foundation of a holistic philosophy.

    Science.gov (United States)

    Ludwig, David

    2012-01-01

    Holism in interwar Germany provides an excellent example for social and political influences on scientific developments. Deeply impressed by the ubiquitous invocation of a cultural crisis, biologists, physicians, and psychologists presented holistic accounts as an alternative to the "mechanistic worldview" of the nineteenth century. Although the ideological background of these accounts is often blatantly obvious, many holistic scientists did not content themselves with a general opposition to a mechanistic worldview but aimed at a rational foundation of their holistic projects. This article will discuss the work of Kurt Goldstein, who is known for both his groundbreaking contributions to neuropsychology and his holistic philosophy of human nature. By focusing on Goldstein's neurolinguistic research, I want to reconstruct the empirical foundations of his holistic program without ignoring its cultural background. In this sense, Goldstein's work provides a case study for the formation of a scientific theory through the complex interplay between specific empirical evidence and the general cultural developments of the Weimar Republic. © 2012 Wiley Periodicals, Inc.

  15. Evaluation of natural language processing from emergency department computerized medical records for intra-hospital syndromic surveillance

    Directory of Open Access Journals (Sweden)

    Pagliaroli Véronique

    2011-07-01

    Background: The identification of patients who pose an epidemic hazard when they are admitted to a health facility plays a role in preventing the risk of hospital acquired infection. An automated clinical decision support system to detect suspected cases, based on the principle of syndromic surveillance, is being developed at the University of Lyon's Hôpital de la Croix-Rousse. This tool will analyse structured data and narrative reports from computerized emergency department (ED) medical records. The first step consists of developing an application (UrgIndex) which automatically extracts and encodes information found in narrative reports. The purpose of the present article is to describe and evaluate this natural language processing system. Methods: Narrative reports have to be pre-processed before utilizing the French-language medical multi-terminology indexer (ECMT) for standardized encoding. UrgIndex identifies and excludes syntagmas containing a negation and replaces non-standard terms (abbreviations, acronyms, spelling errors...). Then, the phrases are sent to the ECMT through an Internet connection. The indexer's reply, based on Extensible Markup Language, returns codes and literals corresponding to the concepts found in phrases. UrgIndex filters codes corresponding to suspected infections. Recall is defined as the number of relevant processed medical concepts divided by the number of concepts evaluated (coded manually by the medical epidemiologist). Precision is defined as the number of relevant processed concepts divided by the number of concepts proposed by UrgIndex. Recall and precision were assessed for respiratory and cutaneous syndromes. Results: Evaluation of 1,674 processed medical concepts contained in 100 ED medical records (50 for respiratory syndromes and 50 for cutaneous syndromes) showed an overall recall of 85.8% (95% CI: 84.1-87.3). Recall varied from 84.5% for respiratory syndromes to 87.0% for cutaneous syndromes. The

  16. BIBLIOGRAPHY ON LANGUAGE DEVELOPMENT.

    Science.gov (United States)

    Harvard Univ., Cambridge, MA. Graduate School of Education.

    This bibliography lists material on various aspects of language development. Approximately 65 unannotated references are provided to documents dating from 1958 to 1966. Journals, books, and report materials are listed. Subject areas included are the nature of language, linguistics, language learning, language skills, language patterns, and…

  17. Linguistics in Language Education

    Science.gov (United States)

    Kumar, Rajesh; Yunus, Reva

    2014-01-01

    This article looks at the contribution of insights from theoretical linguistics to an understanding of language acquisition and the nature of language in terms of their potential benefit to language education. We examine the ideas of innateness and universal language faculty, as well as multilingualism and the language-society relationship. Modern…

  18. Natural language query system design for interactive information storage and retrieval systems. Presentation visuals. M.S. Thesis Final Report, 1 Jul. 1985 - 31 Dec. 1987

    Science.gov (United States)

    Dominick, Wayne D. (Editor); Liu, I-Hsiung

    1985-01-01

    This Working Paper Series entry represents a collection of presentation visuals associated with the companion report entitled Natural Language Query System Design for Interactive Information Storage and Retrieval Systems, USL/DBMS NASA/RECON Working Paper Series report number DBMS.NASA/RECON-17.

  19. Development of a user-friendly interface for the searching of a data base in natural language while using concepts and means of artificial intelligence

    International Nuclear Information System (INIS)

    Pujo, Pascal

    1989-01-01

    This research thesis aimed at the development of a natural-language-based user-friendly interface for the searching of relational data bases. The author first addresses how to store data which will be accessible through an interface in natural language: this organisation must result in as few constraints as possible in query formulation. He briefly presents techniques related to the automatic processing of natural language, and highlights the need for a more user-friendly interface. Then, he presents the developed interface and outlines the user-friendliness and ergonomics of implemented procedures. He shows how the interface has been designed to deliver information and explanations on its processing. This allows the user to control the relevance of the answer. He also indicates the classification of mistakes and errors which may be present in queries in natural language. He finally gives an overview of possible evolutions of the interface, briefly presents deductive functionalities which could expand data management. The handling of complex objects is also addressed [fr

  20. Planned experiments and corpus based research play a complementary role. Comment on "Dependency distance: A new perspective on syntactic patterns in natural languages" by Haitao Liu et al.

    Science.gov (United States)

    Vasishth, Shravan

    2017-07-01

    This interesting and informative review by Liu and colleagues [17] in this issue covers the full spectrum of research on the idea that in natural language, dependency distance tends to be small. The authors discuss two distinct research threads: experimental work from psycholinguistics on online processes in comprehension and production, and text-corpus studies of dependency length distributions.

  1. Does it really matter whether students' contributions are spoken versus typed in an intelligent tutoring system with natural language?

    Science.gov (United States)

    D'Mello, Sidney K; Dowell, Nia; Graesser, Arthur

    2011-03-01

    There is the question of whether learning differs when students speak versus type their responses when interacting with intelligent tutoring systems with natural language dialogues. Theoretical bases exist for three contrasting hypotheses. The speech facilitation hypothesis predicts that spoken input will increase learning, whereas the text facilitation hypothesis predicts typed input will be superior. The modality equivalence hypothesis claims that learning gains will be equivalent. Previous experiments that tested these hypotheses were confounded by automated speech recognition systems with substantial error rates that were detected by learners. We addressed this concern in two experiments via a Wizard of Oz procedure, where a human intercepted the learner's speech and transcribed the utterances before submitting them to the tutor. The overall pattern of the results supported the following conclusions: (1) learning gains associated with spoken and typed input were on par and quantitatively higher than a no-intervention control, (2) participants' evaluations of the session were not influenced by modality, and (3) there were no modality effects associated with differences in prior knowledge and typing proficiency. Although the results generally support the modality equivalence hypothesis, highly motivated learners reported lower cognitive load and demonstrated increased learning when typing compared with speaking. We discuss the implications of our findings for intelligent tutoring systems that can support typed and spoken input.

  2. Combining natural language processing and network analysis to examine how advocacy organizations stimulate conversation on social media.

    Science.gov (United States)

    Bail, Christopher Andrew

    2016-10-18

    Social media sites are rapidly becoming one of the most important forums for public deliberation about advocacy issues. However, social scientists have not explained why some advocacy organizations produce social media messages that inspire far-ranging conversation among social media users, whereas the vast majority of them receive little or no attention. I argue that advocacy organizations are more likely to inspire comments from new social media audiences if they create "cultural bridges," or produce messages that combine conversational themes within an advocacy field that are seldom discussed together. I use natural language processing, network analysis, and a social media application to analyze how cultural bridges shaped public discourse about autism spectrum disorders on Facebook over the course of 1.5 years, controlling for various characteristics of advocacy organizations, their social media audiences, and the broader social context in which they interact. I show that organizations that create substantial cultural bridges provoke 2.52 times more comments about their messages from new social media users than those that do not, controlling for these factors. This study thus offers a theory of cultural messaging and public deliberation and computational techniques for text analysis and application-based survey research.

  3. Integrating natural language processing expertise with patient safety event review committees to improve the analysis of medication events.

    Science.gov (United States)

    Fong, Allan; Harriott, Nicole; Walters, Donna M; Foley, Hanan; Morrissey, Richard; Ratwani, Raj R

    2017-08-01

    Many healthcare providers have implemented patient safety event reporting systems to better understand and improve patient safety. Reviewing and analyzing these reports is often time consuming and resource intensive because of both the quantity of reports and length of free-text descriptions in the reports. Natural language processing (NLP) experts collaborated with clinical experts on a patient safety committee to assist in the identification and analysis of medication related patient safety events. Different NLP algorithmic approaches were developed to identify four types of medication related patient safety events and the models were compared. Well performing NLP models were generated to categorize medication related events into pharmacy delivery delays, dispensing errors, Pyxis discrepancies, and prescriber errors with receiver operating characteristic areas under the curve of 0.96, 0.87, 0.96, and 0.81 respectively. We also found that modeling the brief without the resolution text generally improved model performance. These models were integrated into a dashboard visualization to support the patient safety committee review process. We demonstrate the capabilities of various NLP models and the use of two text inclusion strategies at categorizing medication related patient safety events. The NLP models and visualization could be used to improve the efficiency of patient safety event data review and analysis. Copyright © 2017 Elsevier B.V. All rights reserved.

  4. Informatics in radiology: RADTF: a semantic search-enabled, natural language processor-generated radiology teaching file.

    Science.gov (United States)

    Do, Bao H; Wu, Andrew; Biswal, Sandip; Kamaya, Aya; Rubin, Daniel L

    2010-11-01

    Storing and retrieving radiology cases is an important activity for education and clinical research, but this process can be time-consuming. In the process of structuring reports and images into organized teaching files, incidental pathologic conditions not pertinent to the primary teaching point can be omitted, as when a user saves images of an aortic dissection case but disregards the incidental osteoid osteoma. An alternate strategy for identifying teaching cases is text search of reports in radiology information systems (RIS), but retrieved reports are unstructured, teaching-related content is not highlighted, and patient identifying information is not removed. Furthermore, searching unstructured reports requires sophisticated retrieval methods to achieve useful results. An open-source, RadLex(®)-compatible teaching file solution called RADTF, which uses natural language processing (NLP) methods to process radiology reports, was developed to create a searchable teaching resource from the RIS and the picture archiving and communication system (PACS). The NLP system extracts and de-identifies teaching-relevant statements from full reports to generate a stand-alone database, thus converting existing RIS archives into an on-demand source of teaching material. Using RADTF, the authors generated a semantic search-enabled, Web-based radiology archive containing over 700,000 cases with millions of images. RADTF combines a compact representation of the teaching-relevant content in radiology reports and a versatile search engine with the scale of the entire RIS-PACS collection of case material. ©RSNA, 2010

  5. Per-service supervised learning for identifying desired WoT apps from user requests in natural language.

    Directory of Open Access Journals (Sweden)

    Young Yoon

    Web of Things (WoT) platforms are growing fast, as are the needs for composing WoT apps more easily and efficiently. We have recently commenced the campaign to develop an interface where users can issue requests for WoT apps entirely in natural language. This requires an effort to build a system that can learn to identify the relevant WoT functions that fulfill users' requests. In our preceding work, we trained a supervised learning system with thousands of publicly-available IFTTT app recipes based on conditional random fields (CRF). However, the sub-par accuracy and excessive training time motivated us to devise a better approach. In this paper, we present a novel solution that creates a separate learning engine for each trigger service. With this approach, parallel and incremental learning becomes possible. For inference, our system first identifies the most relevant trigger service for a given user request by using an information retrieval technique. Then, the learning engine associated with the trigger service predicts the most likely pair of trigger and action functions. We expect that such a two-phase inference method given parallel learning engines would improve the accuracy of identifying related WoT functions. We verify our new solution through the empirical evaluation with training and test sets sampled from a pool of refined IFTTT app recipes. We also meticulously analyze the characteristics of the recipes to find future research directions.

  6. An Introduction to Natural Language Processing: How You Can Get More From Those Electronic Notes You Are Generating.

    Science.gov (United States)

    Kimia, Amir A; Savova, Guergana; Landschaft, Assaf; Harper, Marvin B

    2015-07-01

    Electronically stored clinical documents may contain both structured data and unstructured data. The use of structured clinical data varies by facility, but clinicians are familiar with coded data such as International Classification of Diseases, Ninth Revision codes, Systematized Nomenclature of Medicine-Clinical Terms codes, and commonly other data including patient chief complaints or laboratory results. Most electronic health records hold much more clinical information as unstructured data: clinical narrative such as the history of present illness, procedure notes, and clinical decision making. Despite the importance of this information, electronic capture or retrieval of unstructured clinical data has been challenging. The field of natural language processing (NLP) is undergoing rapid development, and existing tools can be successfully used for quality improvement, research, healthcare coding, and even billing compliance. In this brief review, we provide examples of successful uses of NLP using emergency medicine physician visit notes for various projects, describe the challenges of retrieving specific data, and finally present practical methods that can run on a standard personal computer as well as high-end state-of-the-art funded processes run by leading NLP informatics researchers.

  7. Experiments with a First Prototype of a Spatial Model of Cultural Meaning through Natural-Language Human-Robot Interaction

    Directory of Open Access Journals (Sweden)

    Oliver Schürer

    2018-01-01

    When using assistive systems, the consideration of individual and cultural meaning is crucial for the utility and acceptance of technology. Orientation, communication and interaction are rooted in perception and therefore always happen in material space. We understand that a major problem lies in the difference between human and technical perception of space. Cultural policies are based on meanings including their spatial situation and their rich relationships. Therefore, we have developed an approach where the different perception systems share a hybrid spatial model that is generated by artificial intelligence—a joint effort by humans and assistive systems. The aim of our project is to create a spatial model of cultural meaning based on interaction between humans and robots. We define the role of humanoid robots as becoming our companions. This calls for technical systems to include still inconceivable human and cultural agendas for the perception of space. In two experiments, we tested a first prototype of the communication module that allows a humanoid to learn cultural meanings through a machine learning system. Interaction is achieved by non-verbal and natural-language communication between humanoids and test persons. This helps us to better understand how a spatial model of cultural meaning can be developed.

  8. Per-service supervised learning for identifying desired WoT apps from user requests in natural language.

    Science.gov (United States)

    Yoon, Young

    2017-01-01

    Web of Things (WoT) platforms are growing fast, as are the needs for composing WoT apps more easily and efficiently. We have recently commenced the campaign to develop an interface where users can issue requests for WoT apps entirely in natural language. This requires an effort to build a system that can learn to identify the relevant WoT functions that fulfill users' requests. In our preceding work, we trained a supervised learning system with thousands of publicly-available IFTTT app recipes based on conditional random fields (CRF). However, the sub-par accuracy and excessive training time motivated us to devise a better approach. In this paper, we present a novel solution that creates a separate learning engine for each trigger service. With this approach, parallel and incremental learning becomes possible. For inference, our system first identifies the most relevant trigger service for a given user request by using an information retrieval technique. Then, the learning engine associated with the trigger service predicts the most likely pair of trigger and action functions. We expect that such a two-phase inference method given parallel learning engines would improve the accuracy of identifying related WoT functions. We verify our new solution through the empirical evaluation with training and test sets sampled from a pool of refined IFTTT app recipes. We also meticulously analyze the characteristics of the recipes to find future research directions.
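
    A minimal sketch of the two-phase inference idea summarized above, under invented data: a TF-IDF retriever first picks the most relevant trigger service for a request, and a small per-service classifier (plain logistic regression standing in for the paper's per-service learning engines) then predicts a trigger-action label. All recipes, service names, and labels are hypothetical.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical training recipes grouped by trigger service.
recipes = {
    "weather": [("turn on heater when it gets cold", "low_temperature->heater_on"),
                ("close windows if it rains", "rain->window_close")],
    "email":   [("flash the lamp when mail arrives", "new_mail->lamp_blink"),
                ("save attachments to my drive", "new_mail->save_attachment")],
}

# Phase 1: retrieve the most relevant trigger service for a request.
service_names = list(recipes)
service_docs = [" ".join(text for text, _ in recipes[s]) for s in service_names]
retriever = TfidfVectorizer().fit(service_docs)
service_matrix = retriever.transform(service_docs)

# Phase 2: one classifier per service, trained only on that service's recipes.
per_service = {}
for s in service_names:
    texts, labels = zip(*recipes[s])
    vec = TfidfVectorizer().fit(texts)
    per_service[s] = (vec, LogisticRegression().fit(vec.transform(texts), labels))

def predict(request: str) -> str:
    sims = cosine_similarity(retriever.transform([request]), service_matrix)[0]
    service = service_names[sims.argmax()]
    vec, clf = per_service[service]
    return f"{service}: {clf.predict(vec.transform([request]))[0]}"

print(predict("blink the light when I get an email"))
```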

  9. Understanding Language in Education and Grade 4 Reading Performance Using a "Natural Experiment" of Botswana and South Africa

    Science.gov (United States)

    Shepherd, Debra Lynne

    2018-01-01

    The regional and cultural closeness of Botswana and South Africa, as well as differences in their political histories and language policy stances, offers a unique opportunity to evaluate the role of language in reading outcomes. This study aims to empirically test the effect of exposure to mother tongue and English instruction on the reading…

  10. The Importance of Natural Change in Planning School-Based Intervention for Children with Developmental Language Impairment (DLI)

    Science.gov (United States)

    Botting, Nicola; Gaynor, Marguerite; Tucker, Katie; Orchard-Lisle, Ginnie

    2016-01-01

    Some reports suggest that there is an increase in the number of children identified as having developmental language impairment (Bercow, 2008), yet resource issues have meant that many speech and language therapy services have compromised provision in some way. Thus, efficient ways of identifying need and prioritizing intervention are required.…

  11. Reliability Engineering

    International Nuclear Information System (INIS)

    Lee, Sang Yong

    1992-07-01

    This book covers reliability engineering: the definition and importance of reliability; the development of reliability engineering; failure rates and failure probability density functions and their types; the CFR case and the exponential distribution; the IFR case and the normal and Weibull distributions; maintainability and availability; reliability testing and reliability estimation for the exponential, normal, and Weibull distribution types; reliability sampling tests; system reliability; design for reliability; and functional failure analysis using FTA (fault tree analysis).

  12. Common data model for natural language processing based on two existing standard information models: CDA+GrAF.

    Science.gov (United States)

    Meystre, Stéphane M; Lee, Sanghoon; Jung, Chai Young; Chevrier, Raphaël D

    2012-08-01

    An increasing need for collaboration and resources sharing in the Natural Language Processing (NLP) research and development community motivates efforts to create and share a common data model and a common terminology for all information annotated and extracted from clinical text. We have combined two existing standards: the HL7 Clinical Document Architecture (CDA), and the ISO Graph Annotation Format (GrAF; in development), to develop such a data model entitled "CDA+GrAF". We experimented with several methods to combine these existing standards, and eventually selected a method wrapping separate CDA and GrAF parts in a common standoff annotation (i.e., separate from the annotated text) XML document. Two use cases, clinical document sections, and the 2010 i2b2/VA NLP Challenge (i.e., problems, tests, and treatments, with their assertions and relations), were used to create examples of such standoff annotation documents, and were successfully validated with the XML schemata provided with both standards. We developed a tool to automatically translate annotation documents from the 2010 i2b2/VA NLP Challenge format to GrAF, and automatically generated 50 annotation documents using this tool, all successfully validated. Finally, we adapted the XSL stylesheet provided with HL7 CDA to allow viewing annotation XML documents in a web browser, and plan to adapt existing tools for translating annotation documents between CDA+GrAF and the UIMA and GATE frameworks. This common data model may ease directly comparing NLP tools and applications, combining their output, transforming and "translating" annotations between different NLP applications, and eventually "plug-and-play" of different modules in NLP applications. Copyright © 2011 Elsevier Inc. All rights reserved.

  13. Identification of Long Bone Fractures in Radiology Reports Using Natural Language Processing to support Healthcare Quality Improvement.

    Science.gov (United States)

    Grundmeier, Robert W; Masino, Aaron J; Casper, T Charles; Dean, Jonathan M; Bell, Jamie; Enriquez, Rene; Deakyne, Sara; Chamberlain, James M; Alpern, Elizabeth R

    2016-11-09

    Important information to support healthcare quality improvement is often recorded in free text documents such as radiology reports. Natural language processing (NLP) methods may help extract this information, but these methods have rarely been applied outside the research laboratories where they were developed. Our objective was to implement and validate NLP tools to identify long bone fractures for pediatric emergency medicine quality improvement. Using freely available statistical software packages, we implemented NLP methods to identify long bone fractures from radiology reports. A sample of 1,000 radiology reports was used to construct three candidate classification models. A test set of 500 reports was used to validate the model performance. Blinded manual review of radiology reports by two independent physicians provided the reference standard. Each radiology report was segmented and word stem and bigram features were constructed. Common English "stop words" and rare features were excluded. We used 10-fold cross-validation to select optimal configuration parameters for each model. Accuracy, recall, precision and the F1 score were calculated. The final model was compared to the use of diagnosis codes for the identification of patients with long bone fractures. There were 329 unique word stems and 344 bigrams in the training documents. A support vector machine classifier with Gaussian kernel performed best on the test set with accuracy=0.958, recall=0.969, precision=0.940, and F1 score=0.954. Optimal parameters for this model were cost=4 and gamma=0.005. The three classification models that we tested all performed better than diagnosis codes in terms of accuracy, precision, and F1 score (diagnosis code accuracy=0.932, recall=0.960, precision=0.896, and F1 score=0.927). NLP methods using a corpus of 1,000 training documents accurately identified acute long bone fractures from radiology reports. Strategic use of straightforward NLP methods, implemented with freely available
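
    The general recipe described above (stemmed unigram and bigram features, stop-word removal, an RBF-kernel support vector machine, and cross-validated selection of cost and gamma) can be sketched as follows on an invented toy corpus; this is not the study's code, and the report snippets and labels are hypothetical.

```python
# Requires scikit-learn and nltk (the Porter stemmer needs no corpus downloads).
from nltk.stem import PorterStemmer
from sklearn.feature_extraction.text import CountVectorizer, ENGLISH_STOP_WORDS
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC

stemmer = PorterStemmer()

def stem_tokens(text):
    """Lowercase, drop common English stop words, and stem the remaining tokens."""
    return [stemmer.stem(t) for t in text.lower().split() if t not in ENGLISH_STOP_WORDS]

# Hypothetical radiology report snippets: 1 = long bone fracture, 0 = no fracture.
reports = ["acute fracture of the femoral shaft", "no acute fracture identified",
           "transverse fracture of the tibia", "normal radiographs of the forearm",
           "displaced fracture of the humerus", "no osseous abnormality"] * 4
labels = [1, 0, 1, 0, 1, 0] * 4

pipeline = Pipeline([
    ("features", CountVectorizer(tokenizer=stem_tokens, token_pattern=None,
                                 ngram_range=(1, 2), min_df=2)),
    ("svm", SVC(kernel="rbf")),
])

# 10-fold cross-validated search over cost (C) and gamma.
grid = GridSearchCV(pipeline, {"svm__C": [1, 4, 16], "svm__gamma": [0.005, 0.05]}, cv=10)
grid.fit(reports, labels)
print(grid.best_params_, grid.best_score_)
```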

  14. Development of a Natural Language Processing Engine to Generate Bladder Cancer Pathology Data for Health Services Research.

    Science.gov (United States)

    Schroeck, Florian R; Patterson, Olga V; Alba, Patrick R; Pattison, Erik A; Seigne, John D; DuVall, Scott L; Robertson, Douglas J; Sirovich, Brenda; Goodney, Philip P

    2017-12-01

    To take the first step toward assembling population-based cohorts of patients with bladder cancer with longitudinal pathology data, we developed and validated a natural language processing (NLP) engine that abstracts pathology data from full-text pathology reports. Using 600 bladder pathology reports randomly selected from the Department of Veterans Affairs, we developed and validated an NLP engine to abstract data on histology, invasion (presence vs absence and depth), grade, the presence of muscularis propria, and the presence of carcinoma in situ. Our gold standard was based on an independent review of reports by 2 urologists, followed by adjudication. We assessed the NLP performance by calculating the accuracy, the positive predictive value, and the sensitivity. We subsequently applied the NLP engine to pathology reports from 10,725 patients with bladder cancer. When comparing the NLP output to the gold standard, NLP achieved the highest accuracy (0.98) for the presence vs the absence of carcinoma in situ. Accuracy for histology, invasion (presence vs absence), grade, and the presence of muscularis propria ranged from 0.83 to 0.96. The most challenging variable was depth of invasion (accuracy 0.68), with an acceptable positive predictive value for lamina propria (0.82) and for muscularis propria (0.87) invasion. The validated engine was capable of abstracting pathologic characteristics for 99% of the patients with bladder cancer. NLP had high accuracy for 5 of 6 variables and abstracted data for the vast majority of the patients. This now allows for the assembly of population-based cohorts with longitudinal pathology data. Published by Elsevier Inc.
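
    The validated engine itself is not described in enough detail here to reproduce. Purely as an illustrative stand-in, the sketch below shows a naive regular-expression baseline for two of the variables mentioned (presence of muscularis propria and of carcinoma in situ); the patterns and report text are invented, and a real system would need far richer negation and context handling.

```python
import re

# Naive keyword/negation patterns: an illustrative baseline only, not the
# validated NLP engine described in the record above.
CIS_PATTERN = re.compile(r"carcinoma in situ", re.IGNORECASE)
MP_PATTERN = re.compile(r"muscularis propria", re.IGNORECASE)
NEGATION = re.compile(r"\b(no|without|negative for|absent)\b", re.IGNORECASE)

def flag(report: str, pattern: re.Pattern) -> bool:
    """True if the pattern appears outside an obviously negated clause."""
    for match in pattern.finditer(report):
        clause_start = report.rfind(".", 0, match.start()) + 1
        clause = report[clause_start:match.start()]
        if not NEGATION.search(clause):
            return True
    return False

report = ("Urothelial carcinoma, high grade. Muscularis propria is present "
          "and uninvolved. No carcinoma in situ identified.")
print("muscularis propria:", flag(report, MP_PATTERN))   # True
print("carcinoma in situ:", flag(report, CIS_PATTERN))   # False (negated)
```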

  15. Improving performance of natural language processing part-of-speech tagging on clinical narratives through domain adaptation.

    Science.gov (United States)

    Ferraro, Jeffrey P; Daumé, Hal; Duvall, Scott L; Chapman, Wendy W; Harkema, Henk; Haug, Peter J

    2013-01-01

    Natural language processing (NLP) tasks are commonly decomposed into subtasks, chained together to form processing pipelines. The residual error produced in these subtasks propagates, adversely affecting the end objectives. Limited availability of annotated clinical data remains a barrier to reaching state-of-the-art operating characteristics using statistically based NLP tools in the clinical domain. Here we explore the unique linguistic constructions of clinical texts and demonstrate the loss in operating characteristics when out-of-the-box part-of-speech (POS) tagging tools are applied to the clinical domain. We test a domain adaptation approach integrating a novel lexical-generation probability rule used in a transformation-based learner to boost POS performance on clinical narratives. Two target corpora from independent healthcare institutions were constructed from high frequency clinical narratives. Four leading POS taggers with their out-of-the-box models trained from general English and biomedical abstracts were evaluated against these clinical corpora. A high performing domain adaptation method, Easy Adapt, was compared to our newly proposed method ClinAdapt. The evaluated POS taggers drop in accuracy by 8.5-15% when tested on clinical narratives. The highest performing tagger reports an accuracy of 88.6%. Domain adaptation with Easy Adapt reports accuracies of 88.3-91.0% on clinical texts. ClinAdapt reports 93.2-93.9%. ClinAdapt successfully boosts POS tagging performance through domain adaptation requiring a modest amount of annotated clinical data. Improving the performance of critical NLP subtasks is expected to reduce pipeline error propagation leading to better overall results on complex processing tasks.
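
    Easy Adapt (Daumé's "frustratingly easy" feature augmentation) works by copying every feature into a shared version and a domain-specific version, so that a single tagger can weight general-English and clinical evidence separately. The toy sketch below shows only that augmentation step, with invented features; ClinAdapt's lexical-generation rule is specific to the paper and is not reproduced here.

```python
def easy_adapt(features: dict, domain: str) -> dict:
    """Frustratingly-easy domain adaptation: duplicate each feature into a
    'shared' copy and a domain-specific copy."""
    augmented = {}
    for name, value in features.items():
        augmented[f"shared::{name}"] = value
        augmented[f"{domain}::{name}"] = value
    return augmented

# Hypothetical token features for POS tagging in two domains.
general_features = {"word=discharge": 1.0, "suffix=rge": 1.0, "prev_tag=DT": 1.0}
clinical_features = {"word=discharge": 1.0, "suffix=rge": 1.0, "prev_tag=NN": 1.0}

print(easy_adapt(general_features, domain="general"))
print(easy_adapt(clinical_features, domain="clinical"))
```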

  16. The Common Alerting Protocol (CAP) and Emergency Data Exchange Language (EDXL) - Application in Early Warning Systems for Natural Hazard

    Science.gov (United States)

    Lendholt, Matthias; Hammitzsch, Martin; Wächter, Joachim

    2010-05-01

    The Common Alerting Protocol (CAP) [1] is an XML-based data format for exchanging public warnings and emergencies between alerting technologies. In conjunction with the Emergency Data Exchange Language (EDXL) Distribution Element (-DE) [2] these data formats can be used for warning message dissemination in early warning systems for natural hazards. Application took place in the DEWS (Distance Early Warning System) [3] project where CAP serves as central message format containing both human readable warnings and structured data for automatic processing by message receivers. In particular the spatial reference capabilities are of paramount importance both in CAP and EDXL. Affected areas are addressable via geo codes like HASC (Hierarchical Administrative Subdivision Codes) [4] or UN/LOCODE [5] but also with arbitrary polygons that can be directly generated out of GML [6]. For each affected area standardized criticality values (urgency, severity and certainty) have to be set but also application specific key-value-pairs like estimated time of arrival or maximum inundation height can be specified. This enables - together with multilingualism, message aggregation and message conversion for different dissemination channels - the generation of user-specific tailored warning messages. [1] CAP, http://www.oasis-emergency.org/cap [2] EDXL-DE, http://docs.oasis-open.org/emergency/edxl-de/v1.0/EDXL-DE_Spec_v1.0.pdf [3] DEWS, http://www.dews-online.org [4] HASC, "Administrative Subdivisions of Countries: A Comprehensive World Reference, 1900 Through 1998" ISBN 0-7864-0729-8 [5] UN/LOCODE, http://www.unece.org/cefact/codesfortrade/codes_index.htm [6] GML, http://www.opengeospatial.org/standards/gml
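
    To make the message structure concrete, the sketch below assembles a minimal CAP-style alert carrying the standardized criticality values and a polygon for the affected area. Element names follow the CAP 1.2 specification as commonly documented, but the snippet is illustrative only, is not validated against the official schema, and all identifiers, addresses, and coordinates are invented.

```python
import xml.etree.ElementTree as ET

CAP_NS = "urn:oasis:names:tc:emergency:cap:1.2"
ET.register_namespace("", CAP_NS)

def el(parent, tag, text=None):
    """Append a namespaced child element with optional text."""
    node = ET.SubElement(parent, f"{{{CAP_NS}}}{tag}")
    if text is not None:
        node.text = text
    return node

alert = ET.Element(f"{{{CAP_NS}}}alert")
el(alert, "identifier", "DEWS-2010-0001")           # invented identifier
el(alert, "sender", "tsunami-warning@example.org")  # invented sender address
el(alert, "sent", "2010-05-01T12:00:00+00:00")
el(alert, "status", "Exercise")
el(alert, "msgType", "Alert")
el(alert, "scope", "Public")

info = el(alert, "info")
el(info, "category", "Geo")
el(info, "event", "Tsunami")
el(info, "urgency", "Immediate")     # standardized criticality values
el(info, "severity", "Extreme")
el(info, "certainty", "Observed")

area = el(info, "area")
el(area, "areaDesc", "Example coastal district")
el(area, "polygon", "5.0,95.0 5.0,96.0 6.0,96.0 6.0,95.0 5.0,95.0")  # lat,lon pairs

print(ET.tostring(alert, encoding="unicode"))
```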

  17. Automated identification of wound information in clinical notes of patients with heart diseases: Developing and validating a natural language processing application.

    Science.gov (United States)

    Topaz, Maxim; Lai, Kenneth; Dowding, Dawn; Lei, Victor J; Zisberg, Anna; Bowles, Kathryn H; Zhou, Li

    2016-12-01

    Electronic health records are being increasingly used by nurses with up to 80% of the health data recorded as free text. However, only a few studies have developed nursing-relevant tools that help busy clinicians to identify information they need at the point of care. This study developed and validated one of the first automated natural language processing applications to extract wound information (wound type, pressure ulcer stage, wound size, anatomic location, and wound treatment) from free text clinical notes. First, two human annotators manually reviewed a purposeful training sample (n=360) and random test sample (n=1100) of clinical notes (including 50% discharge summaries and 50% outpatient notes), identified wound cases, and created a gold standard dataset. We then trained and tested our natural language processing system (known as MTERMS) to process the wound information. Finally, we assessed our automated approach by comparing system-generated findings against the gold standard. We also compared the prevalence of wound cases identified from free-text data with coded diagnoses in the structured data. The testing dataset included 101 notes (9.2%) with wound information. The overall system performance was good (the F-measure, a composite measure of the system's accuracy, was 92.7%), with best results for wound treatment (F-measure=95.7%) and poorest results for wound size (F-measure=81.9%). Only 46.5% of wound notes had a structured code for a wound diagnosis. The natural language processing system achieved good performance on a subset of randomly selected discharge summaries and outpatient notes. In more than half of the wound notes, there were no coded wound diagnoses, which highlights the significance of using natural language processing to enrich clinical decision making. Our future steps will include expansion of the application's information coverage to other relevant wound factors and validation of the model with external data. Copyright © 2016 Elsevier Ltd. All

  18. Development of a user friendly interface for database querying in natural language by using concepts and means related to artificial intelligence

    International Nuclear Information System (INIS)

    Pujo, Pascal

    1989-01-01

    This research thesis reports the development of a user-friendly interface in natural language for querying a relational database. The developed system differs from usual approaches in its integrated architecture, as the management of the relational model is entirely controlled by the interface. The author first addresses how to store data in order to make them accessible through an interface in natural language, and more precisely how to organise the data so that as few constraints as possible are imposed on query formulation. The author then briefly presents techniques related to automatic processing of natural language, and discusses the implications for user-friendliness and for error processing. The next part reports the study of the developed interface: selection of data processing tools, interface development, data management at the interface level, and information input by the user. The last chapter proposes an overview of possible evolutions for the interface: use of deductive functionalities, use of an extensional base and an intentional base to deduce facts from knowledge stored in the extensional base, and handling of complex objects [fr

  19. A corpus of full-text journal articles is a robust evaluation tool for revealing differences in performance of biomedical natural language processing tools.

    Science.gov (United States)

    Verspoor, Karin; Cohen, Kevin Bretonnel; Lanfranchi, Arrick; Warner, Colin; Johnson, Helen L; Roeder, Christophe; Choi, Jinho D; Funk, Christopher; Malenkiy, Yuriy; Eckert, Miriam; Xue, Nianwen; Baumgartner, William A; Bada, Michael; Palmer, Martha; Hunter, Lawrence E

    2012-08-17

    We introduce the linguistic annotation of a corpus of 97 full-text biomedical publications, known as the Colorado Richly Annotated Full Text (CRAFT) corpus. We further assess the performance of existing tools for performing sentence splitting, tokenization, syntactic parsing, and named entity recognition on this corpus. Many biomedical natural language processing systems demonstrated large differences between their previously published results and their performance on the CRAFT corpus when tested with the publicly available models or rule sets. Trainable systems differed widely with respect to their ability to build high-performing models based on this data. The finding that some systems were able to train high-performing models based on this corpus is additional evidence, beyond high inter-annotator agreement, that the quality of the CRAFT corpus is high. The overall poor performance of various systems indicates that considerable work needs to be done to enable natural language processing systems to work well when the input is full-text journal articles. The CRAFT corpus provides a valuable resource to the biomedical natural language processing community for evaluation and training of new models for biomedical full text publications.

  20. A case of "order insensitivity"? Natural and artificial language processing in a man with primary progressive aphasia.

    OpenAIRE

    Zimmerer, V. C.; Varley, R. A.

    2015-01-01

    Processing of linear word order (linear configuration) is important for virtually all languages and essential to languages such as English which have little functional morphology. Damage to systems underpinning configurational processing may specifically affect word-order reliant sentence structures. We explore order processing in WR, a man with primary progressive aphasia (PPA). In a previous report, we showed how WR showed impaired processing of actives, which rely strongly on word order, b...

  1. Web 2.0-based crowdsourcing for high-quality gold standard development in clinical natural language processing.

    Science.gov (United States)

    Zhai, Haijun; Lingren, Todd; Deleger, Louise; Li, Qi; Kaiser, Megan; Stoutenborough, Laura; Solti, Imre

    2013-04-02

    A high-quality gold standard is vital for supervised, machine learning-based, clinical natural language processing (NLP) systems. In clinical NLP projects, expert annotators traditionally create the gold standard. However, traditional annotation is expensive and time-consuming. To reduce the cost of annotation, general NLP projects have turned to crowdsourcing based on Web 2.0 technology, which involves submitting smaller subtasks to a coordinated marketplace of workers on the Internet. Many studies have been conducted in the area of crowdsourcing, but only a few have focused on tasks in the general NLP field and only a handful in the biomedical domain, usually based upon very small pilot sample sizes. In addition, the quality of the crowdsourced biomedical NLP corpora was never exceptional when compared to traditionally-developed gold standards. The previously reported results on a medical named entity annotation task showed a 0.68 F-measure-based agreement between crowdsourced and traditionally-developed corpora. Building upon previous work from the general crowdsourcing research, this study investigated the usability of crowdsourcing in the clinical NLP domain with special emphasis on achieving high agreement between crowdsourced and traditionally-developed corpora. To build the gold standard for evaluating the crowdsourcing workers' performance, 1042 clinical trial announcements (CTAs) from the ClinicalTrials.gov website were randomly selected and double annotated for medication names, medication types, and linked attributes. For the experiments, we used CrowdFlower, an Amazon Mechanical Turk-based crowdsourcing platform. We calculated sensitivity, precision, and F-measure to evaluate the quality of the crowd's work and tested the statistical significance (P<.05) of the difference between crowdsourced and traditionally-developed annotations. The agreement between the crowd's annotations and the traditionally-generated corpora was high for: (1) annotations (0.87, F-measure for medication names
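
    Agreement between a crowdsourced annotation set and a traditionally-developed gold standard is often summarized with an F-measure over annotated spans. The toy sketch below computes exact-span agreement for invented medication-name offsets; it is not the study's evaluation code.

```python
def span_f_measure(gold: set, crowd: set) -> float:
    """Exact-match F-measure between two sets of (start, end, label) spans."""
    if not gold or not crowd:
        return 0.0
    true_pos = len(gold & crowd)
    if true_pos == 0:
        return 0.0
    precision = true_pos / len(crowd)
    recall = true_pos / len(gold)
    return 2 * precision * recall / (precision + recall)

# Hypothetical character-offset annotations: (start, end, label).
gold = {(10, 19, "medication"), (42, 50, "medication"), (77, 90, "medication")}
crowd = {(10, 19, "medication"), (42, 50, "medication"), (60, 66, "medication")}
print(round(span_f_measure(gold, crowd), 2))   # 0.67
```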

  2. An Early Years Toolbox for Assessing Early Executive Function, Language, Self-Regulation, and Social Development: Validity, Reliability, and Preliminary Norms

    Science.gov (United States)

    Howard, Steven J.; Melhuish, Edward

    2017-01-01

    Several methods of assessing executive function (EF), self-regulation, language development, and social development in young children have been developed over previous decades. Yet new technologies make available methods of assessment not previously considered. In resolving conceptual and pragmatic limitations of existing tools, the Early Years…

  3. A new, rapid and reliable method for the determination of reduced sulphur (S²⁻) species in natural water discharges

    Energy Technology Data Exchange (ETDEWEB)

    Montegrossi, Giordano [C.N.R. - Institute of Geosciences and Earth Resources, Via G. La Pira 4, 50121 Florence (Italy)]. E-mail: giordano@geo.unifi.it; Tassi, Franco [Department of Earth Sciences, University of Florence, Via G. La Pira 4, 50121 Florence (Italy); Vaselli, Orlando [C.N.R. - Institute of Geosciences and Earth Resources, Via G. La Pira 4, 50121 Florence (Italy); Department of Earth Sciences, University of Florence, Via G. La Pira 4, 50121 Florence (Italy); Bidini, Eva [Department of Earth Sciences, University of Florence, Via G. La Pira 4, 50121 Florence (Italy); Minissale, Angelo [C.N.R. - Institute of Geosciences and Earth Resources, Via G. La Pira 4, 50121 Florence (Italy)

    2006-05-15

    The determination of reduced S species in natural waters is particularly difficult due to their high instability and chemical and physical interferences in the current analytical methods. In this paper a new, rapid and reliable analytical procedure is presented, named the Cd-IC method, for their determination as ΣS²⁻ via oxidation to SO₄²⁻ after chemical trapping with an ammonia-cadmium solution that allows precipitation of all the reduced S species as CdS. The S²⁻-SO₄ is analysed by ion chromatography. The main advantages of this method are: low cost, high stability of the CdS precipitate, absence of interferences, low detection limit (0.01 mg/L as SO₄ for 10 mL of water) and low analytical error (about 5%). The proposed method has been applied to more than 100 water samples from different natural systems (water discharges and cold wells from volcanic and geothermal areas, crater lakes) in central-southern Italy.

  4. Introducing a gender-neutral pronoun in a natural gender language: the influence of time on attitudes and behavior.

    Science.gov (United States)

    Gustafsson Sendén, Marie; Bäck, Emma A; Lindqvist, Anna

    2015-01-01

    The implementation of gender fair language is often associated with negative reactions and hostile attacks on people who propose a change. This was also the case in Sweden in 2012 when a third gender-neutral pronoun hen was proposed as an addition to the already existing Swedish pronouns for she (hon) and he (han). The pronoun hen can be used both generically, when gender is unknown or irrelevant, and as a transgender pronoun for people who categorize themselves outside the gender dichotomy. In this article we review the process from 2012 to 2015. No other language has so far added a third gender-neutral pronoun, existing parallel with two gendered pronouns, that actually have reached the broader population of language users. This makes the situation in Sweden unique. We present data on attitudes toward hen during the past 4 years and analyze how time is associated with the attitudes in the process of introducing hen to the Swedish language. In 2012 the majority of the Swedish population was negative to the word, but already in 2014 there was a significant shift to more positive attitudes. Time was one of the strongest predictors for attitudes also when other relevant factors were controlled for. The actual use of the word also increased, although to a lesser extent than the attitudes shifted. We conclude that new words challenging the binary gender system evoke hostile and negative reactions, but also that attitudes can normalize rather quickly. We see this finding very positive and hope it could motivate language amendments and initiatives for gender-fair language, although the first responses may be negative.

  5. Comparison Between Manual Auditing and a Natural Language Process With Machine Learning Algorithm to Evaluate Faculty Use of Standardized Reports in Radiology.

    Science.gov (United States)

    Guimaraes, Carolina V; Grzeszczuk, Robert; Bisset, George S; Donnelly, Lane F

    2018-03-01

    When implementing or monitoring department-sanctioned standardized radiology reports, feedback about individual faculty performance has been shown to be a useful driver of faculty compliance. Most commonly, these data are derived from manual audit, which can be both time-consuming and subject to sampling error. The purpose of this study was to evaluate whether a software program using natural language processing and machine learning could accurately audit radiologist compliance with the use of standardized reports compared with performed manual audits. Radiology reports from a 1-month period were loaded into such a software program, and faculty compliance with use of standardized reports was calculated. For that same period, manual audits were performed (25 reports audited for each of 42 faculty members). The mean compliance rates calculated by automated auditing were then compared with the confidence interval of the mean rate by manual audit. The mean compliance rate for use of standardized reports as determined by manual audit was 91.2% with a confidence interval between 89.3% and 92.8%. The mean compliance rate calculated by automated auditing was 92.0%, within that confidence interval. This study shows that by use of natural language processing and machine learning algorithms, an automated analysis can accurately define whether reports are compliant with use of standardized report templates and language, compared with manual audits. This may avoid significant labor costs related to conducting the manual auditing process. Copyright © 2017 American College of Radiology. Published by Elsevier Inc. All rights reserved.
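
    As a worked illustration of the comparison described above, the sketch below computes a manual-audit compliance rate with a normal-approximation confidence interval and checks whether an automated estimate falls inside it. The counts and the automated rate are hypothetical stand-ins, not the study's data.

```python
import math

def proportion_ci(successes: int, n: int, z: float = 1.96):
    """Normal-approximation (Wald) 95% confidence interval for a proportion."""
    p = successes / n
    half_width = z * math.sqrt(p * (1 - p) / n)
    return p, p - half_width, p + half_width

# Hypothetical manual audit: 42 faculty x 25 reports, ~91% judged compliant.
audited, compliant = 42 * 25, 957
rate, low, high = proportion_ci(compliant, audited)
automated_rate = 0.92   # hypothetical rate from the automated analysis of all reports

print(f"manual audit: {rate:.3f} (95% CI {low:.3f}-{high:.3f})")
print("automated estimate inside CI:", low <= automated_rate <= high)
```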

  6. Sample Length Affects the Reliability of Language Sample Measures in 3-Year-Olds: Evidence from Parent-Elicited Conversational Samples

    Science.gov (United States)

    Guo, Ling-Yu; Eisenberg, Sarita

    2015-01-01

    Purpose: The goal of this study was to investigate the extent to which sample length affected the reliability of total number of words (TNW), number of different words (NDW), and mean length of C-units in morphemes (MLCUm) in parent-elicited conversational samples for 3-year-olds. Method: Participants were sixty 3-year-olds. A 22-min language…

  7. Paying Attention to Attention Allocation in Second-Language Learning: Some Insights into the Nature of Linguistic Thresholds.

    Science.gov (United States)

    Hawson, Anne

    1997-01-01

    Three threshold hypotheses proposed by Cummins (1976) and Diaz (1985) as explanations of data on the cognitive consequences of bilingualism are examined in depth and compared to one another. A neuroscientifically updated information-processing perspective on the interaction of second-language comprehension and visual-processing ability is…

  8. Introducing a gender-neutral pronoun in a natural gender language: The influence of time on attitudes and behavior

    Directory of Open Access Journals (Sweden)

    Marie eGustafsson Sendén

    2015-07-01

    The implementation of gender fair language is often associated with negative reactions and hostile attacks on people who propose a change. This was also the case in Sweden in 2012 when a third gender-neutral pronoun hen was proposed as an addition to the already existing Swedish pronouns for she and he. The pronoun hen can be used both generically, when gender is unknown or irrelevant, and as a transgender pronoun for people who categorize themselves outside the gender dichotomy. In this article we review the process from 2012 to 2015, when hen was introduced into the Swedish Dictionary. No other language has so far added a third gender-neutral pronoun that has actually reached the broader population of language users, which makes the situation in Sweden unique. We present data on attitudes toward hen over the past four years and study how time is associated with the attitudes. In 2012 the majority of the Swedish population was negative to the word, but already in 2014 there was a significant shift to more positive attitudes. Time was one of the strongest predictors for attitudes also when other relevant factors were controlled for. Even though to a lesser extent than the attitudes, the actual use of the word has also increased. We conclude that new words challenging the binary gender system evoke hostile and negative reactions, but also that attitudes can normalize rather quickly. This is very positive because it should motivate language amendments and initiatives for gender-fair language, although the first responses may be negative.

  9. First Language Acquisition and Teaching

    Science.gov (United States)

    Cruz-Ferreira, Madalena

    2011-01-01

    "First language acquisition" commonly means the acquisition of a single language in childhood, regardless of the number of languages in a child's natural environment. Language acquisition is variously viewed as predetermined, wondrous, a source of concern, and as developing through formal processes. "First language teaching" concerns schooling in…

  10. natural

    Directory of Open Access Journals (Sweden)

    Elías Gómez Macías

    2006-01-01

    Full Text Available Starting from commercial magnesium oxide, an aqueous suspension was prepared, then dried and calcined to give it thermal stability. The material, both fresh and used, was characterized by XRD, BET surface area and SEM-EPMA. The catalyst showed a periclase-type MgO matrix with CaO on the surface. Catalytic activity tests were carried out in a fixed bed packed with particles obtained by pressing, crushing and sieving the material. The reactant flow consisted of natural gas-air mixtures below the lower flammability limit. For different flows and inlet temperatures of the reactive mixture, the concentrations of CH4, CO2 and CO in the combustion gases were measured with a non-dispersive infrared (NDIR) gas analyzer. Reaching complete methane conversion required raising the bed inlet temperature as the flow of reacting gases increased. The results make it possible to develop a low-cost catalytic combustion system based on a thermally stable material that promotes high efficiency in natural gas combustion and eliminates the stability, safety and negative environmental impact problems inherent to conventional thermal combustion processes.

  11. Proceedings of the Strategic Computing Natural Language Workshop Held in Marina del Rey, California on 1-2 May 1986.

    Science.gov (United States)

    1986-05-01

    … language interface to these new capabilities as well as to the existing data bases and graphic display facilities. BBN is developing a series of … Action. Artificial Intelligence, 1986. Forthcoming. [Hinrichs 81] Hinrichs, E. Temporale Anaphora im Englischen. 1981. Unpublished ms., University of … the grammar organized by NIKL has been demonstrated for a wide variety of sentence types. Table 3 shows a series of independent sentences that Penman is now able …

  12. Contralog: a Prolog conform forward-chaining environment and its application for dynamic programming and natural language parsing

    Directory of Open Access Journals (Sweden)

    Kilián Imre

    2016-06-01

    Full Text Available The backward-chaining inference strategy of Prolog is inefficient for a number of problems. The article proposes Contralog: a Prolog-conform, forward-chaining language and an inference engine that is implemented as a preprocessor-compiler to Prolog. The target model is Prolog, which ensures mutual switching from Contralog to Prolog and back. The Contralog compiler is implemented using Prolog's de facto standardized macro expansion capability. The article goes into detail regarding the target model.
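    As a rough illustration of the forward-chaining strategy that the record contrasts with Prolog's backward chaining, here is a naive propositional forward-chaining loop sketched in Python; it is not Contralog's compiler or its Prolog target model, and the facts and rules are invented for the example.

```python
# A minimal sketch of naive forward chaining over propositional Horn
# clauses, illustrating the strategy only (not Contralog's compiler or
# its Prolog target model). Facts and rules are invented for the example.
def forward_chain(facts, rules):
    """facts: set of atoms; rules: list of (premises, conclusion) pairs."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in derived and all(p in derived for p in premises):
                derived.add(conclusion)   # fire the rule, add its conclusion
                changed = True
    return derived

rules = [
    ({"rain", "no_umbrella"}, "wet"),
    ({"wet"}, "cold"),
]
print(forward_chain({"rain", "no_umbrella"}, rules))  # derives 'wet' and then 'cold'
```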

  13. Reliability of electronic systems

    International Nuclear Information System (INIS)

    Roca, Jose L.

    2001-01-01

    Reliability techniques were developed in response to the needs of diverse engineering disciplines, although many would argue that much work on reliability was done before the word itself was used in its current sense. The military, space and nuclear industries were the first to become involved in the topic, but this quiet revolution in improving the reliability figures of products has not remained confined to those environments; it has spread to industry as a whole. Mass production, characteristic of modern industry, led four decades ago to a fall in the reliability of products, partly because of mass production itself and partly because of recently introduced and not yet stabilized industrial techniques. Industry had to change in response to these two new requirements, creating products of medium complexity while assuring a level of reliability appropriate to production costs and controls. Reliability became an integral part of the manufactured product. Following this philosophy, the book describes reliability techniques applied to electronic systems and provides a coherent and rigorous framework for these diverse activities, giving a unifying scientific basis for the entire subject. It consists of eight chapters plus numerous statistical tables and an extensive annotated bibliography. The chapters cover the following topics: 1- Introduction to Reliability; 2- Basic Mathematical Concepts; 3- Catastrophic Failure Models; 4- Parametric Failure Models; 5- Systems Reliability; 6- Reliability in Design and Project; 7- Reliability Tests; 8- Software Reliability. The book is in Spanish and has a potentially diverse audience, as a textbook for both academic and industrial courses. (author)

  14. How reliable are gray matter disruptions in specific reading disability across multiple countries and languages? Insights from a large-scale voxel-based morphometry study.

    Science.gov (United States)

    Jednoróg, Katarzyna; Marchewka, Artur; Altarelli, Irene; Monzalvo Lopez, Ana Karla; van Ermingen-Marbach, Muna; Grande, Marion; Grabowska, Anna; Heim, Stefan; Ramus, Franck

    2015-05-01

    The neural basis of specific reading disability (SRD) remains only partly understood. A dozen studies have used voxel-based morphometry (VBM) to investigate gray matter volume (GMV) differences between SRD and control children; however, recent meta-analyses suggest that few regions are consistent across studies. We used data collected across three countries (France, Poland, and Germany) with the aim of both increasing sample size (236 SRD and controls) to obtain a clearer picture of group differences, and of further assessing the consistency of the findings across languages. VBM analysis reveals a significant group difference in a single cluster in the left thalamus. Furthermore, we observe correlations between reading accuracy and GMV in the left supramarginal gyrus and in the left cerebellum, in controls only. Most strikingly, we fail to replicate all the group differences in GMV reported in previous studies, despite the superior statistical power. The main limitation of this study is the heterogeneity of the sample drawn from different countries (i.e., speaking languages with varying orthographic transparencies) and selected based on different assessment batteries. Nevertheless, analyses within each country support the conclusions of the cross-linguistic analysis. Explanations for the discrepancy between the present and previous studies may include: (1) the limited suitability of VBM to reveal the subtle brain disruptions underlying SRD; (2) insufficient correction for multiple statistical tests and flexibility in data analysis; and (3) publication bias in favor of positive results. Thus the study echoes widespread concerns about the risk of false-positive results inherent to small-scale VBM studies. © 2015 Wiley Periodicals, Inc.

  15. Simultaneous natural speech and AAC interventions for children with childhood apraxia of speech: lessons from a speech-language pathologist focus group.

    Science.gov (United States)

    Oommen, Elizabeth R; McCarthy, John W

    2015-03-01

    In childhood apraxia of speech (CAS), children exhibit varying levels of speech intelligibility depending on the nature of errors in articulation and prosody. Augmentative and alternative communication (AAC) strategies are beneficial, and commonly adopted with children with CAS. This study focused on the decision-making process and strategies adopted by speech-language pathologists (SLPs) when simultaneously implementing interventions that focused on natural speech and AAC. Eight SLPs, with significant clinical experience in CAS and AAC interventions, participated in an online focus group. Thematic analysis revealed eight themes: key decision-making factors; treatment history and rationale; benefits; challenges; therapy strategies and activities; collaboration with team members; recommendations; and other comments. Results are discussed along with clinical implications and directions for future research.

  16. Reliability and Validity of the Persian Language Version of the International Consultation on Incontinence Questionnaire - Male Lower Urinary Tract Symptoms (ICIQ-MLUTS).

    Science.gov (United States)

    Pourmomeny, Abbas Ali; Ghanei, Behnaz; Alizadeh, Farshid

    2018-05-01

    Assessment instruments are essential for research, allowing diagnosis and evaluation of treatment outcomes in subjects with lower urinary tract disorders of both genders. The purpose of this study was to translate the Male Lower Urinary Tract Symptoms (MLUTS) questionnaire and determine its psychometric properties in Persian subjects. After obtaining permission from the International Consultation on Incontinence Modular Questionnaire (ICIQ) web site, the forward and backward translation of the MLUTS questionnaire was carried out by the research team. Content/face validity, construct validity and reliability were assessed in a sample of Iranian patients with MLUTS, with internal consistency measured by Cronbach's alpha. In total, 121 male patients were included in the study. The mean age of the patients was 60.5 years. The Cronbach's alpha value was 0.757, confirming the internal consistency of the form (r > 0.7). The internal consistency of each question was examined separately and found to be over 0.7. To evaluate test-retest reliability, the questionnaire was administered to 20% of the patients a second time after an interval of 1-2 weeks. The intraclass correlation coefficient (ICC) score was 0.901. The correlation coefficient between the MLUTS and the International Prostate Symptom Score (IPSS) was 0.879. The ICIQ-MLUTS is a robust instrument, which can be used for evaluating male LUTS in Persian patients. We believe that the Persian version of the MLUTS is an important tool for research and clinical settings. © 2017 John Wiley & Sons Australia, Ltd.
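    As a reference for the internal-consistency statistic reported above, here is a short sketch of Cronbach's alpha computed from a respondents-by-items score matrix; the responses are hypothetical, not the study's data.

```python
# A minimal sketch (illustrative data, not the study's): Cronbach's alpha
# for the internal consistency of a multi-item questionnaire.
import numpy as np

def cronbach_alpha(scores):
    """scores: 2-D array, rows = respondents, columns = items."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)          # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)      # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical responses: 6 patients x 4 items scored 0-4.
responses = [
    [1, 2, 1, 2],
    [3, 3, 4, 3],
    [0, 1, 0, 1],
    [2, 2, 3, 2],
    [4, 3, 4, 4],
    [1, 1, 2, 1],
]
print(f"Cronbach's alpha = {cronbach_alpha(responses):.3f}")
```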

  17. Next Generation Systems Languages

    National Research Council Canada - National Science Library

    Morrisett, Greg

    2006-01-01

    The goal of this work is to explore techniques for making today's software, which is largely written in type-unsafe, low-level languages such as C, as reliable and trustworthy as code written in type...

  18. Automatic classification of written descriptions by healthy adults: An overview of the application of natural language processing and machine learning techniques to clinical discourse analysis.

    Science.gov (United States)

    Toledo, Cíntia Matsuda; Cunha, Andre; Scarton, Carolina; Aluísio, Sandra

    2014-01-01

    Discourse production is an important aspect in the evaluation of brain-injured individuals. We believe that studies comparing the performance of brain-injured subjects with that of healthy controls must use groups with compatible education. A pioneering application of machine learning methods using Brazilian Portuguese for clinical purposes is described, highlighting education as an important variable in the Brazilian scenario. The aims were to describe how to: (i) develop machine learning classifiers using features generated by natural language processing tools to distinguish descriptions produced by healthy individuals into classes based on their years of education; and (ii) automatically identify the features that best distinguish the groups. The approach proposed here extracts linguistic features automatically from the written descriptions with the aid of two Natural Language Processing tools: Coh-Metrix-Port and AIC. It also includes nine task-specific features (three new ones, two extracted manually, besides description time; type of scene described - simple or complex; presentation order - which type of picture was described first; and age). In this study, the descriptions by 144 of the subjects studied in Toledo 18 were used, which included 200 healthy Brazilians of both genders. A Support Vector Machine (SVM) with a radial basis function (RBF) kernel is the most recommended approach for the binary classification of our data, classifying three of the four initial classes. CfsSubsetEval (CFS) is a strong candidate to replace manual feature selection methods.
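    For readers unfamiliar with the classifier named above, the following sketch trains an SVM with an RBF kernel on synthetic numeric features as a stand-in for Coh-Metrix-Port/AIC output (which is not reproduced here) in a binary classification task.

```python
# A minimal sketch (synthetic features, not Coh-Metrix-Port/AIC output):
# an SVM with an RBF kernel for binary classification of groups from
# numeric linguistic features.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for feature vectors extracted from written descriptions.
X, y = make_classification(n_samples=144, n_features=20, n_informative=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# Scale features, then fit the RBF-kernel SVM.
model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
model.fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")
```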

  19. Automatic classification of written descriptions by healthy adults: An overview of the application of natural language processing and machine learning techniques to clinical discourse analysis

    Directory of Open Access Journals (Sweden)

    Cíntia Matsuda Toledo

    Full Text Available Discourse production is an important aspect in the evaluation of brain-injured individuals. We believe that studies comparing the performance of brain-injured subjects with that of healthy controls must use groups with compatible education. A pioneering application of machine learning methods using Brazilian Portuguese for clinical purposes is described, highlighting education as an important variable in the Brazilian scenario. OBJECTIVE: The aims were to describe how to: (i) develop machine learning classifiers using features generated by natural language processing tools to distinguish descriptions produced by healthy individuals into classes based on their years of education; and (ii) automatically identify the features that best distinguish the groups. METHODS: The approach proposed here extracts linguistic features automatically from the written descriptions with the aid of two Natural Language Processing tools: Coh-Metrix-Port and AIC. It also includes nine task-specific features (three new ones, two extracted manually, besides description time; type of scene described - simple or complex; presentation order - which type of picture was described first; and age). In this study, the descriptions by 144 of the subjects studied in Toledo 18 were used, which included 200 healthy Brazilians of both genders. RESULTS AND CONCLUSION: A Support Vector Machine (SVM) with a radial basis function (RBF) kernel is the most recommended approach for the binary classification of our data, classifying three of the four initial classes. CfsSubsetEval (CFS) is a strong candidate to replace manual feature selection methods.

  20. Accommodating Grief on Twitter: An Analysis of Expressions of Grief Among Gang Involved Youth on Twitter Using Qualitative Analysis and Natural Language Processing

    Science.gov (United States)

    Patton, Desmond Upton; MacBeth, Jamie; Schoenebeck, Sarita; Shear, Katherine; McKeown, Kathleen

    2018-01-01

    There is a dearth of research investigating youths’ experience of grief and mourning after the death of close friends or family. Even less research has explored the question of how youth use social media sites to engage in the grieving process. This study employs qualitative analysis and natural language processing to examine tweets that follow 2 deaths. First, we conducted a close textual read on a sample of tweets by Gakirah Barnes, a gang-involved teenaged girl in Chicago, and members of her Twitter network, over a 19-day period in 2014 during which 2 significant deaths occurred: that of Raason “Lil B” Shaw and Gakirah’s own death. We leverage the grief literature to understand the way Gakirah and her peers express thoughts, feelings, and behaviors at the time of these deaths. We also present and explain the rich and complex style of online communication among gang-involved youth, one that has been overlooked in prior research. Next, we overview the natural language processing output for expressions of loss and grief in our data set based on qualitative findings and present an error analysis on its output for grief. We conclude with a call for interdisciplinary research that analyzes online and offline behaviors to help understand physical and emotional violence and other problematic behaviors prevalent among marginalized communities. PMID:29636619

  1. Software reliability

    CERN Document Server

    Bendell, A

    1986-01-01

    Software Reliability reviews some fundamental issues of software reliability as well as the techniques, models, and metrics used to predict the reliability of software. Topics covered include fault avoidance, fault removal, and fault tolerance, along with statistical methods for the objective assessment of predictive accuracy. Development cost models and life-cycle cost models are also discussed. This book is divided into eight sections and begins with a chapter on adaptive modeling used to predict software reliability, followed by a discussion on failure rate in software reliability growth mo

  2. Language Models With Meta-information

    NARCIS (Netherlands)

    Shi, Y.

    2014-01-01

    Language modeling plays a critical role in natural language processing and understanding. Starting from a general structure, language models are able to learn natural language patterns from rich input data. However, the state-of-the-art language models only take advantage of words themselves, which

  3. Towards Compatible and Interderivable Semantic Specifications for the Scheme Programming Language, Part I: Denotational Semantics, Natural Semantics, and Abstract Machines

    DEFF Research Database (Denmark)

    Danvy, Olivier

    2008-01-01

    We derive two big-step abstract machines, a natural semantics, and the valuation function of a denotational semantics based on the small-step abstract machine for Core Scheme presented by Clinger at PLDI'98. Starting from a functional implementation of this small-step abstract machine, (1) we fuse its transition function with its driver loop, obtaining the functional implementation of a big-step abstract machine; (2) we adjust this big-step abstract machine so that it is in defunctionalized form, obtaining the functional implementation of a second big-step abstract machine; (3) we refunctionalize this adjusted abstract machine, obtaining the functional implementation of a natural semantics in continuation style; and (4) we closure-unconvert this natural semantics, obtaining a compositional continuation-passing evaluation function which we identify as the functional implementation...
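    To make step (1) of the derivation concrete, here is a toy sketch, for arithmetic expressions rather than Clinger's Core Scheme machine, of a small-step transition function, its driver loop, and the big-step evaluator obtained by fusing the two.

```python
# A toy sketch (not Clinger's machine): a small-step abstract machine for
# arithmetic expressions, its driver loop, and the big-step evaluator
# obtained by fusing the transition function with the driver loop.
from dataclasses import dataclass

@dataclass
class Lit:
    value: int

@dataclass
class Add:
    left: object
    right: object

def step(expr):
    """One small-step transition: reduce the leftmost redex."""
    if isinstance(expr, Add):
        if isinstance(expr.left, Lit) and isinstance(expr.right, Lit):
            return Lit(expr.left.value + expr.right.value)
        if isinstance(expr.left, Lit):
            return Add(expr.left, step(expr.right))
        return Add(step(expr.left), expr.right)
    raise ValueError("already a value")

def drive(expr):
    """Driver loop: iterate the transition function until a value is reached."""
    while not isinstance(expr, Lit):
        expr = step(expr)
    return expr.value

def evaluate(expr):
    """Big-step evaluator obtained by fusing `step` with `drive`."""
    if isinstance(expr, Lit):
        return expr.value
    if isinstance(expr, Add):
        return evaluate(expr.left) + evaluate(expr.right)
    raise ValueError("unknown expression")

if __name__ == "__main__":
    e = Add(Lit(1), Add(Lit(2), Lit(3)))
    assert drive(e) == evaluate(e) == 6
```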

  4. Reliability Calculations

    DEFF Research Database (Denmark)

    Petersen, Kurt Erling

    1986-01-01

    Risk and reliability analysis is increasingly being used in evaluations of plant safety and plant reliability. The analysis can be performed either during the design process or during the operation time, with the purpose to improve the safety or the reliability. Due to plant complexity and safety and availability requirements, sophisticated tools, which are flexible and efficient, are needed. Such tools have been developed in the last 20 years and they have to be continuously refined to meet the growing requirements. Two different areas of application were analysed. In structural reliability probabilistic approaches have been introduced in some cases for the calculation of the reliability of structures or components. A new computer program has been developed based upon numerical integration in several variables. In systems reliability Monte Carlo simulation programs are used especially in analysis of very...

  5. Towards Compatible and Interderivable Semantic Specifications for the Scheme Programming Language, Part I: Denotational Semantics, Natural Semantics, and Abstract Machines

    DEFF Research Database (Denmark)

    Danvy, Olivier

    2009-01-01

    We derive two big-step abstract machines, a natural semantics, and the valuation function of a denotational semantics based on the small-step abstract machine for Core Scheme presented by Clinger at PLDI'98. Starting from a functional implementation of this small-step abstract machine, (1) we fus...

  6. The Linguistic Interpretation for Language Union – Language Family

    Directory of Open Access Journals (Sweden)

    E.A. Balalykina

    2016-10-01

    Full Text Available The paper is dedicated to the problem of determining the essence of the language union and the language family in modern linguistics, which is considered important because these terms are often used as absolute synonyms. The research is relevant due to the need to distinguish the features that languages inherit while functioning within either a language union or a language family when these languages are compared. The research has been carried out in order to present the historical background of the problem and to justify the need for differentiation of the language facts that allow relating languages to a particular language union or language family. In order to fulfill the goal of this work, descriptive, comparative, and historical methods have been used. A range of examples is provided to prove that some languages, mainly Slavonic and Baltic languages, form a language family rather than a language union, because a number of features in their systems are the heritage of their common Indo-European past. Firstly, it is necessary to take into account changes of either a common or a different nature in the system of particular languages; secondly, one must have a precise idea of which features in the phonetic and morphological systems of the compared languages allow relating them to a language union or a language family; thirdly, it must be determined whether the changes in the compared languages are regular or of some other type. On the basis of the obtained results, the following conclusions have been drawn: language union and language family are two different types of relations between modern languages; they allow identifying both the degree of similarity of these languages and the causes of differences between them. It is most important to distinguish and describe the specific features of the two basic groups of languages forming a language family or a language union. The results obtained during the analysis are very important for linguistics

  7. Natural language processing: state of the art and prospects for significant progress, a workshop sponsored by the National Library of Medicine.

    Science.gov (United States)

    Friedman, Carol; Rindflesch, Thomas C; Corn, Milton

    2013-10-01

    Natural language processing (NLP) is crucial for advancing healthcare because it is needed to transform relevant information locked in text into structured data that can be used by computer processes aimed at improving patient care and advancing medicine. In light of the importance of NLP to health, the National Library of Medicine (NLM) recently sponsored a workshop to review the state of the art in NLP focusing on text in English, both in biomedicine and in the general language domain. Specific goals of the NLM-sponsored workshop were to identify the current state of the art, grand challenges and specific roadblocks, and to identify effective use and best practices. This paper reports on the main outcomes of the workshop, including an overview of the state of the art, strategies for advancing the field, and obstacles that need to be addressed, resulting in recommendations for a research agenda intended to advance the field. Copyright © 2013 The Authors. Published by Elsevier Inc. All rights reserved.

  8. Complex analyses on clinical information systems using restricted natural language querying to resolve time-event dependencies.

    Science.gov (United States)

    Safari, Leila; Patrick, Jon D

    2018-06-01

    This paper reports on a generic framework to provide clinicians with the ability to conduct complex analyses on elaborate research topics using cascaded queries to resolve internal time-event dependencies in the research questions, as an extension to the proposed Clinical Data Analytics Language (CliniDAL). A cascaded query model is proposed to resolve internal time-event dependencies in the queries, which can have up to five levels of criteria, starting with a query to define subjects to be admitted into a study, followed by a query to define the time span of the experiment. Three more cascaded queries can be required to define control groups, control variables and output variables, which all together simulate a real scientific experiment. According to the complexity of the research questions, the cascaded query model has the flexibility of merging some lower-level queries for simple research questions or adding a nested query to each level to compose more complex queries. Three different scenarios (one of which contains two studies) are described and used for evaluation of the proposed solution. CliniDAL's complex-analysis solution enables answering complex queries with time-event dependencies in at most a few hours, whereas doing so manually would take many days. An evaluation of the results of the research studies based on a comparison between the CliniDAL and SQL solutions reveals the high usability and efficiency of CliniDAL's solution. Copyright © 2018 Elsevier Inc. All rights reserved.
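    A rough sketch of how such a cascade might be represented as a data structure, with each level evaluated against the cohort produced by the level above, is given below. The level names follow the description in the abstract, while the criteria strings and field names are invented for illustration and are not CliniDAL syntax.

```python
# A minimal sketch (not CliniDAL's syntax): a five-level cascaded query
# whose levels mirror the description above: subjects, time span,
# control groups, control variables, and output variables.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class QueryLevel:
    name: str
    criteria: str                        # restricted-natural-language criterion (free text here)
    parent: Optional["QueryLevel"] = None

def build_cascade(criteria: List[str]) -> QueryLevel:
    """Chain each level to the one before it, so lower levels are evaluated
    against the cohort produced by the level above (the time-event dependency)."""
    names = ["subjects", "time_span", "control_groups", "control_variables", "output_variables"]
    level = None
    for name, criterion in zip(names, criteria):
        level = QueryLevel(name, criterion, parent=level)
    return level

# Hypothetical study definition, purely illustrative.
cascade = build_cascade([
    "patients admitted to ICU with sepsis",
    "first 48 hours after admission",
    "patients matched on age and sex without sepsis",
    "vasopressor use",
    "mean arterial pressure",
])
node = cascade
while node:
    print(node.name, "<-", node.criteria)
    node = node.parent
```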

  9. Language Contact.

    Science.gov (United States)

    Nelde, Peter Hans

    1995-01-01

    Examines the phenomenon of language contact and recent trends in linguistic contact research, which focuses on language use, language users, and language spheres. Also discusses the role of linguistic and cultural conflicts in language contact situations. (13 references) (MDM)

  10. Propaedeutics of Mathematical Language of Schemes and Structures in School Teaching of the Natural Sciences Profile

    Directory of Open Access Journals (Sweden)

    V. P. Kotchnev

    2012-01-01

    Full Text Available The paper looks at the teaching process in schools with a natural sciences profile. The research examines the correlations between students' progress and the degree of their involvement in creative problem-solving activities in the natural sciences context. The research aims to demonstrate how teaching mathematical schemes and structures reinforces students' creative learning. The comparative characteristics of the task, problem and model approaches to mathematical problem solving are given; the experimental data on the efficiency of mathematical training based on the above approaches are discussed, as well as the specifics of modeling the tasks for problem solving. The author examines ways of stimulating students' creative activity, motivating knowledge acquisition, and searching for new mathematical regularities related to the natural science content. The significance of the Olympiad and other non-standard tasks, which broaden students' horizons and stimulate creative thinking and abilities, is emphasized. The proposed method confirms the appropriateness of introducing Olympiad and non-standard problem solving into the preparatory training curricula for the Unified State Examinations.

  11. Gendered Language in Interactive Discourse

    Science.gov (United States)

    Hussey, Karen A.; Katz, Albert N.; Leith, Scott A.

    2015-01-01

    Over two studies, we examined the nature of gendered language in interactive discourse. In the first study, we analyzed gendered language from a chat corpus to see whether tokens of gendered language proposed in the gender-as-culture hypothesis (Maltz and Borker in "Language and social identity." Cambridge University Press, Cambridge, pp…

  12. The Tao of Whole Language.

    Science.gov (United States)

    Zola, Meguido

    1989-01-01

    Uses the philosophy of Taoism as a metaphor in describing the whole language approach to language arts instruction. The discussion covers the key principles that inform the whole language approach, the resulting holistic nature of language programs, and the role of the teacher in this approach. (16 references) (CLB)

  13. Simplexity, languages and human languaging

    DEFF Research Database (Denmark)

    Cowley, Stephen; Gahrn-Andersen, Rasmus

    2018-01-01

    Building on a distributed perspective, the Special Issue develops Alain Berthoz's concept of simplexity. By so doing, neurophysiology is used to reach beyond observable and, specifically, 1st-order languaging. While simplexity clarifies how language uses perception/action, a community's ‘lexicon’ (a linguistic 2nd order) also shapes human powers. People use global constraints to make and construe wordings and bring a social/individual duality to human living. Within a field of perception-action-language, the phenomenology of ‘words’ and ‘things’ drives people to sustain their own experience. Simplex tricks used in building bodies co-function with action that grants humans access to en-natured culture where, together, they build human knowing…

  14. Structural Reliability Methods

    DEFF Research Database (Denmark)

    Ditlevsen, Ove Dalager; Madsen, H. O.

    The structural reliability methods quantitatively treat the uncertainty of predicting the behaviour and properties of a structure given the uncertain properties of its geometry, materials, and the actions it is supposed to withstand. This book addresses the probabilistic methods for evaluation of structural reliability, including the theoretical basis for these methods. Partial safety factor codes under current practice are briefly introduced and discussed. A probabilistic code format for obtaining a formal reliability evaluation system that catches the most essential features of the nature of the uncertainties and their interplay is then developed, step by step. The concepts presented are illustrated by numerous examples throughout the text.

  15. The rating reliability calculator

    Directory of Open Access Journals (Sweden)

    Solomon David J

    2004-04-01

    Full Text Available Abstract Background Rating scales form an important means of gathering evaluation data. Since important decisions are often based on these evaluations, determining the reliability of rating data can be critical. Most commonly used methods of estimating reliability require a complete set of ratings, i.e. every subject being rated must be rated by each judge. Over fifty years ago Ebel described an algorithm for estimating the reliability of ratings based on incomplete data. While his article has been widely cited over the years, software based on the algorithm is not readily available. This paper describes an easy-to-use Web-based utility for estimating the reliability of ratings based on incomplete data using Ebel's algorithm. Methods The program is available for public use on our server and the source code is freely available under the GNU General Public License. The utility is written in PHP, a common open-source embedded scripting language. The rating data can be entered in a convenient format on the user's personal computer, and the program uploads them to the server to calculate the reliability and other statistics describing the ratings. Results When the program is run it displays the reliability, the number of subjects rated, the harmonic mean number of judges rating each subject, and the mean and standard deviation of the averaged ratings per subject. The program also displays the mean, standard deviation and number of ratings for each subject rated. Additionally, the program will estimate the reliability of an average of a number of ratings for each subject via the Spearman-Brown prophecy formula. Conclusion This simple web-based program provides a convenient means of estimating the reliability of rating data without the need to conduct special studies in order to provide complete rating data. I would welcome other researchers revising and enhancing the program.
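    The Spearman-Brown prophecy formula mentioned above projects the reliability of an average of k ratings from the single-rating reliability r. A short sketch follows; the numbers are illustrative, and this is not the utility's source code.

```python
# A minimal sketch: the Spearman-Brown prophecy formula, which projects
# the reliability of the mean of k ratings from single-rating reliability r.
def spearman_brown(r: float, k: float) -> float:
    """Reliability of the mean of k ratings given single-rating reliability r."""
    return (k * r) / (1 + (k - 1) * r)

# e.g. a single-judge reliability of 0.45 projected to the mean of 4 judges:
print(f"{spearman_brown(0.45, 4):.3f}")   # about 0.766
```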

  16. Reliability Engineering

    CERN Document Server

    Lazzaroni, Massimo

    2012-01-01

    This book gives a practical guide for designers and users in the Information and Communication Technology context. In particular, in the first section, definitions of the fundamental terms according to the international standards are given. Then, some theoretical concepts and reliability models are presented in Chapters 2 and 3: the aim is to evaluate performance for components and systems and reliability growth. Chapter 4, by introducing laboratory tests, highlights the reliability concept from the experimental point of view. In the ICT context, the failure rate for a given system can be

  17. Reliability training

    Science.gov (United States)

    Lalli, Vincent R. (Editor); Malec, Henry A. (Editor); Dillard, Richard B.; Wong, Kam L.; Barber, Frank J.; Barina, Frank J.

    1992-01-01

    Discussed here is failure physics, the study of how products, hardware, software, and systems fail and what can be done about it. The intent is to impart useful information, to extend the limits of production capability, and to assist in achieving low cost reliable products. A review of reliability for the years 1940 to 2000 is given. Next, a review of mathematics is given as well as a description of what elements contribute to product failures. Basic reliability theory and the disciplines that allow us to control and eliminate failures are elucidated.

  18. A Reliability Assessment of the Hydrostatic Test of Pipeline with 0.8 Design Factor in the West–East China Natural Gas Pipeline III

    Directory of Open Access Journals (Sweden)

    Kai Wen

    2018-05-01

    Full Text Available The use of a 0.8 design factor in the Chinese pipeline industry is a breakthrough, marked by the success of the test pipe section in the west–east China gas pipeline III. For such a design factor, the traditional P-V (pressure-volume) curve based pressure test control cannot describe the details of the process, and a 0/1 type failure indicator is not an efficient index of the safety level of the pipeline. In this paper, a reliability-based assessment method is proposed to monitor the real-time failure probability of the pipeline during the hydrostatic test process. The reliability index can be used as a measure of the degree of risk. Following the actual hydrostatic testing of a test pipe section with a 0.8 design factor in the west–east China gas pipeline III, reliability analysis was performed using the Monte Carlo technique. The basic values of the input parameters of the limit state equations are based on data collected from the tested section or on values recommended in the codes. The analysis of the limit states, i.e., yielding deformation and excessive plastic deformation of the pipeline, proceeded based on these distributions. Finally, it is found that the failure probability increases as the water pressure is gradually raised. A reliability assessment method was thus proposed and illustrated with a practical pressure test process.
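    A minimal sketch of the kind of limit-state Monte Carlo described above is shown below, using thin-wall (Barlow) hoop stress against a normally distributed yield strength; all parameter values are assumed for illustration and are not the paper's data.

```python
# A minimal sketch (illustrative parameters, not the paper's data): Monte
# Carlo estimate of the probability of exceeding a yield limit state during
# a hydrostatic test, using thin-wall (Barlow) hoop stress sigma = P*D/(2*t)
# with normally distributed yield strength and wall thickness.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

D = 1.219                               # pipe outer diameter, m (assumed 1219 mm line pipe)
t = rng.normal(18.4e-3, 0.3e-3, n)      # wall thickness, m (assumed scatter)
sigma_y = rng.normal(555e6, 20e6, n)    # yield strength, Pa (assumed X80-like values)

for p_test in (15e6, 16e6, 17e6):       # increasing test pressures, Pa
    hoop = p_test * D / (2.0 * t)       # Barlow hoop stress
    g = sigma_y - hoop                  # limit state function: failure when g <= 0
    pf = np.mean(g <= 0)
    print(f"P = {p_test/1e6:4.0f} MPa  ->  estimated failure probability {pf:.2e}")
```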

  19. Reliability calculations

    International Nuclear Information System (INIS)

    Petersen, K.E.

    1986-03-01

    Risk and reliability analysis is increasingly being used in evaluations of plant safety and plant reliability. The analysis can be performed either during the design process or during the operation time, with the purpose to improve the safety or the reliability. Due to plant complexity and safety and availability requirements, sophisticated tools, which are flexible and efficient, are needed. Such tools have been developed in the last 20 years and they have to be continuously refined to meet the growing requirements. Two different areas of application were analysed. In structural reliability probabilistic approaches have been introduced in some cases for the calculation of the reliability of structures or components. A new computer program has been developed based upon numerical integration in several variables. In systems reliability Monte Carlo simulation programs are used especially in analysis of very complex systems. In order to increase the applicability of the programs, variance reduction techniques can be applied to speed up the calculation process. Variance reduction techniques have been studied and procedures for implementation of importance sampling are suggested. (author)
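    To illustrate the variance-reduction point, the sketch below compares crude Monte Carlo with a simple importance-sampling estimate of a rare failure probability for a normally distributed safety margin. The numbers are illustrative, and the shifted-proposal choice is one common option, not the report's procedure.

```python
# A minimal sketch (illustrative numbers only): crude Monte Carlo versus
# importance sampling for a rare failure probability pf = P(M <= 0), where
# M = R - S is a normally distributed safety margin. The proposal shifts
# the margin's mean to the failure boundary and reweights the samples.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 200_000

mu_m, sigma_m = 4.5, 1.0                        # safety margin M ~ N(4.5, 1), assumed
exact = stats.norm.cdf(0, loc=mu_m, scale=sigma_m)

# Crude Monte Carlo: very few samples fall in the failure region.
m = rng.normal(mu_m, sigma_m, n)
pf_crude = np.mean(m <= 0)

# Importance sampling: draw from a proposal centred at the limit state (0)
# and reweight by the ratio of target to proposal densities.
m_is = rng.normal(0.0, sigma_m, n)
weights = stats.norm.pdf(m_is, mu_m, sigma_m) / stats.norm.pdf(m_is, 0.0, sigma_m)
pf_is = np.mean((m_is <= 0) * weights)

print(f"exact               : {exact:.3e}")
print(f"crude Monte Carlo   : {pf_crude:.3e}")
print(f"importance sampling : {pf_is:.3e}")
```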

  20. Systems reliability/structural reliability

    International Nuclear Information System (INIS)

    Green, A.E.

    1980-01-01

    The question of reliability technology using quantified techniques is considered for systems and structures. Systems reliability analysis has progressed to a viable and proven methodology whereas this has yet to be fully achieved for large scale structures. Structural loading variants over the half-time of the plant are considered to be more difficult to analyse than for systems, even though a relatively crude model may be a necessary starting point. Various reliability characteristics and environmental conditions are considered which enter this problem. The rare event situation is briefly mentioned together with aspects of proof testing and normal and upset loading conditions. (orig.)