WorldWideScience

Sample records for semantic spoken language

  1. Spoken Language Understanding Systems for Extracting Semantic Information from Speech

    CERN Document Server

    Tur, Gokhan

    2011-01-01

    Spoken language understanding (SLU) is an emerging field at the intersection of speech and language processing, investigating human/machine and human/human communication by leveraging technologies from signal processing, pattern recognition, machine learning and artificial intelligence. SLU systems are designed to extract the meaning from speech utterances, and their applications are vast, from voice search in mobile devices to meeting summarization, attracting interest from both commercial and academic sectors. Both human/machine and human/human communications can benefit from the application of SLU, using…

  2. Semantic Richness and Word Learning in Children with Hearing Loss Who Are Developing Spoken Language: A Single Case Design Study

    Science.gov (United States)

    Lund, Emily; Douglas, W. Michael; Schuele, C. Melanie

    2015-01-01

    Children with hearing loss who are developing spoken language tend to lag behind children with normal hearing in vocabulary knowledge. Thus, researchers must validate instructional practices that lead to improved vocabulary outcomes for children with hearing loss. The purpose of this study was to investigate how semantic richness of instruction…

  3. Semantic Fluency in Deaf Children Who Use Spoken and Signed Language in Comparison with Hearing Peers

    Science.gov (United States)

    Marshall, C. R.; Jones, A.; Fastelli, A.; Atkinson, J.; Botting, N.; Morgan, G.

    2018-01-01

    Background: Deafness has an adverse impact on children's ability to acquire spoken languages. Signed languages offer a more accessible input for deaf children, but because the vast majority are born to hearing parents who do not sign, their early exposure to sign language is limited. Deaf children as a whole are therefore at high risk of language…

  4. How and When Accentuation Influences Temporally Selective Attention and Subsequent Semantic Processing during On-Line Spoken Language Comprehension: An ERP Study

    Science.gov (United States)

    Li, Xiao-qing; Ren, Gui-qin

    2012-01-01

    An event-related brain potentials (ERP) experiment was carried out to investigate how and when accentuation influences temporally selective attention and subsequent semantic processing during on-line spoken language comprehension, and how the effect of accentuation on attention allocation and semantic processing changed with the degree of…

  5. Development of lexical-semantic language system: N400 priming effect for spoken words in 18- and 24-month-old children.

    Science.gov (United States)

    Rämä, Pia; Sirri, Louah; Serres, Josette

    2013-04-01

    Our aim was to investigate whether the developing language system, as measured by a priming task for spoken words, is organized by semantic categories. Event-related potentials (ERPs) were recorded during a priming task for spoken words in 18- and 24-month-old monolingual French-learning children. Spoken word pairs were either semantically related (e.g., train-bike) or unrelated (e.g., chicken-bike). The results showed that the N400-like priming effect occurred in 24-month-olds over the right parietal-occipital recording sites. In 18-month-olds, a similar effect was observed only in those children with higher word-production ability. The results suggest that words are categorically organized in the mental lexicon of children at the age of 2 years, and even earlier in children with a high vocabulary. Copyright © 2013 Elsevier Inc. All rights reserved.

  6. Professionals' Guidance about Spoken Language Multilingualism and Spoken Language Choice for Children with Hearing Loss

    Science.gov (United States)

    Crowe, Kathryn; McLeod, Sharynne

    2016-01-01

    The purpose of this research was to investigate factors that influence professionals' guidance of parents of children with hearing loss regarding spoken language multilingualism and spoken language choice. Sixteen professionals who provide services to children and young people with hearing loss completed an online survey, rating the importance of…

  7. The Functional Organisation of the Fronto-Temporal Language System: Evidence from Syntactic and Semantic Ambiguity

    Science.gov (United States)

    Rodd, Jennifer M.; Longe, Olivia A.; Randall, Billi; Tyler, Lorraine K.

    2010-01-01

    Spoken language comprehension is known to involve a large left-dominant network of fronto-temporal brain regions, but there is still little consensus about how the syntactic and semantic aspects of language are processed within this network. In an fMRI study, volunteers heard spoken sentences that contained either syntactic or semantic ambiguities…

  8. Automated Metadata Extraction for Semantic Access to Spoken Word Archives

    NARCIS (Netherlands)

    de Jong, Franciska M.G.; Heeren, W.F.L.; van Hessen, Adrianus J.; Ordelman, Roeland J.F.; Nijholt, Antinus; Ruiz Miyares, L.; Alvarez Silva, M.R.

    2011-01-01

    Archival practice is shifting from the analogue to the digital world. A specific subset of heritage collections that imposes interesting challenges for the field of language and speech technology is spoken word archives. Given the enormous backlog at audiovisual archives of unannotated materials and…

  9. Spoken Grammar and Its Role in the English Language Classroom

    Science.gov (United States)

    Hilliard, Amanda

    2014-01-01

    This article addresses key issues and considerations for teachers wanting to incorporate spoken grammar activities into their own teaching and also focuses on six common features of spoken grammar, with practical activities and suggestions for teaching them in the language classroom. The hope is that this discussion of spoken grammar and its place…

  10. Speech-Language Pathologists: Vital Listening and Spoken Language Professionals

    Science.gov (United States)

    Houston, K. Todd; Perigoe, Christina B.

    2010-01-01

    Determining the most effective methods and techniques to facilitate the spoken language development of individuals with hearing loss has been a focus of practitioners for centuries. Due to modern advances in hearing technology, earlier identification of hearing loss, and immediate enrollment in early intervention, children with hearing loss are…

  11. Deep bottleneck features for spoken language identification.

    Directory of Open Access Journals (Sweden)

    Bing Jiang

    A key problem in spoken language identification (LID) is to design effective representations which are specific to language information. For example, in recent years, representations based on both phonotactic and acoustic features have proven their effectiveness for LID. Although advances in machine learning have led to significant improvements, LID performance is still lacking, especially for short-duration speech utterances. With the hypothesis that language information is weak and represented only latently in speech, and is largely dependent on the statistical properties of the speech content, existing representations may be insufficient. Furthermore, they may be susceptible to the variations caused by different speakers, specific content of the speech segments, and background noise. To address this, we propose using Deep Bottleneck Features (DBF) for spoken LID, motivated by the success of Deep Neural Networks (DNN) in speech recognition. We show that DBFs can form a low-dimensional compact representation of the original inputs with a powerful descriptive and discriminative capability. To evaluate the effectiveness of this, we design two acoustic models, termed DBF-TV and parallel DBF-TV (PDBF-TV), using a DBF-based i-vector representation for each speech utterance. Results on the NIST language recognition evaluation 2009 (LRE09) show significant improvements over state-of-the-art systems. By fusing the output of phonotactic and acoustic approaches, we achieve an EER of 1.08%, 1.89% and 7.01% for 30 s, 10 s and 3 s test utterances, respectively. Furthermore, various DBF configurations have been extensively evaluated, and an optimal system is proposed.
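    The fusion of phonotactic and acoustic subsystem outputs described in this abstract can be illustrated with a minimal score-level fusion sketch. The fusion weight, the language set, and the per-language scores below are invented for illustration; the paper's actual fusion and calibration details are not specified here.

```python
# Hedged sketch: score-level fusion of two spoken-LID subsystems
# (phonotactic and acoustic). All numbers are illustrative.

def fuse_scores(phonotactic, acoustic, alpha=0.5):
    """Linearly combine per-language scores from two subsystems."""
    assert phonotactic.keys() == acoustic.keys()
    return {lang: alpha * phonotactic[lang] + (1 - alpha) * acoustic[lang]
            for lang in phonotactic}

def identify(fused):
    """Pick the language with the highest fused score."""
    return max(fused, key=fused.get)

# Illustrative log-likelihood-style scores for a 3-language task
phono = {"en": -1.2, "es": -0.4, "zh": -2.0}
acous = {"en": -0.9, "es": -0.7, "zh": -1.5}
print(identify(fuse_scores(phono, acous)))  # es
```

    In practice the fusion weight would be tuned on held-out data, and scores would be calibrated before combination.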

  12. A real-time spoken-language system for interactive problem-solving, combining linguistic and statistical technology for improved spoken language understanding

    Science.gov (United States)

    Moore, Robert C.; Cohen, Michael H.

    1993-09-01

    Under this effort, SRI has developed spoken-language technology for interactive problem solving, featuring real-time performance for up to several thousand word vocabularies, high semantic accuracy, habitability within the domain, and robustness to many sources of variability. Although the technology is suitable for many applications, efforts to date have focused on developing an Air Travel Information System (ATIS) prototype application. SRI's ATIS system has been evaluated in four ARPA benchmark evaluations, and has consistently been at or near the top in performance. These achievements are the result of SRI's technical progress in speech recognition, natural-language processing, and speech and natural-language integration.

  13. Semantic and phonological schema influence spoken word learning and overnight consolidation.

    Science.gov (United States)

    Havas, Viktória; Taylor, Jsh; Vaquero, Lucía; de Diego-Balaguer, Ruth; Rodríguez-Fornells, Antoni; Davis, Matthew H

    2018-06-01

    We studied the initial acquisition and overnight consolidation of new spoken words that resemble words in the native language (L1) or in an unfamiliar, non-native language (L2). Spanish-speaking participants learned the spoken forms of novel words in their native language (Spanish) or in a different language (Hungarian), which were paired with pictures of familiar or unfamiliar objects, or no picture. We thereby assessed, in a factorial way, the impact of existing knowledge (schema) on word learning by manipulating both semantic (familiar vs unfamiliar objects) and phonological (L1- vs L2-like novel words) familiarity. Participants were trained and tested with a 12-hr intervening period that included overnight sleep or daytime wakefulness. Our results showed (1) benefits of sleep to recognition memory that were greater for words with L2-like phonology and (2) that learned associations with familiar but not unfamiliar pictures enhanced recognition memory for novel words. Implications for complementary systems accounts of word learning are discussed.

  14. Spoken Indian language identification: a review of features and ...

    Indian Academy of Sciences (India)

    BAKSHI AARTI

    2018-04-12

    Apr 12, 2018 … languages and can be used for the purposes of spoken language identification. Keywords: SLID … branch of linguistics to study the sound structure of human language … work in the area of Indian language identification has not … English, and speech database has been collected over tele…

  15. ELSIE: The Quick Reaction Spoken Language Translation (QRSLT)

    National Research Council Canada - National Science Library

    Montgomery, Christine

    2000-01-01

    The objective of this effort was to develop a prototype, hand-held or body-mounted spoken language translator to assist military and law enforcement personnel in interacting with non-English-speaking people...

  16. Using Spoken Language to Facilitate Military Transportation Planning

    National Research Council Canada - National Science Library

    Bates, Madeleine; Ellard, Dan; Peterson, Pat; Shaked, Varda

    1991-01-01

    …In an effort to demonstrate the relevance of SIS technology to real-world military applications, BBN has undertaken the task of providing a spoken language interface to DART, a system for military…

  17. Factors Influencing Verbal Intelligence and Spoken Language in Children with Phenylketonuria.

    Science.gov (United States)

    Soleymani, Zahra; Keramati, Nasrin; Rohani, Farzaneh; Jalaei, Shohre

    2015-05-01

    To determine verbal intelligence and spoken language of children with phenylketonuria and to study the effect of age at diagnosis and phenylalanine plasma level on these abilities. Cross-sectional. Children with phenylketonuria were recruited from pediatric hospitals in 2012. Normal control subjects were recruited from kindergartens in Tehran. 30 phenylketonuria and 42 control subjects aged 4-6.5 years. Skills were compared between 3 phenylketonuria groups categorized by age at diagnosis/treatment, and between the phenylketonuria and control groups. Scores on Wechsler Preschool and Primary Scale of Intelligence for verbal and total intelligence, and Test of Language Development-Primary, third edition for spoken language, listening, speaking, semantics, syntax, and organization. The performance of control subjects was significantly better than that of early-treated subjects for all composite quotients from the Test of Language Development and verbal intelligence (P…) … phenylketonuria subjects.

  18. CROATIAN ADULT SPOKEN LANGUAGE CORPUS (HrAL)

    Directory of Open Access Journals (Sweden)

    Jelena Kuvač Kraljević

    2016-01-01

    Interest in spoken-language corpora has increased over the past two decades, leading to the development of new corpora and the discovery of new facets of spoken language. These types of corpora represent the most comprehensive data source about the language of ordinary speakers. Such corpora are based on spontaneous, unscripted speech defined by a variety of styles, registers and dialects. The aim of this paper is to present the Croatian Adult Spoken Language Corpus (HrAL), its structure and its possible applications in different linguistic subfields. HrAL was built by sampling spontaneous conversations among 617 speakers from all Croatian counties, and it comprises more than 250,000 tokens and more than 100,000 types. Data were collected during three time slots: from 2010 to 2012, from 2014 to 2015 and during 2016. HrAL is today available within TalkBank, a large database of spoken-language corpora covering different languages (https://talkbank.org), in the Conversational Analyses corpora within the subsection titled Conversational Banks. Data were transcribed, coded and segmented using the transcription format Codes for Human Analysis of Transcripts (CHAT) and the Computerised Language Analysis (CLAN) suite of programmes within the TalkBank toolkit. Speech streams were segmented into communication units (C-units) based on syntactic criteria. Most transcripts were linked to their source audios. TalkBank is publicly and freely accessible; all data stored in it can be shared by the wider community in accordance with its basic rules. HrAL provides information about spoken grammar and lexicon, discourse skills, error production and productivity in general. It may be useful for sociolinguistic research and studies of synchronic language changes in Croatian.
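    The token and type counts used to describe the size of HrAL (more than 250,000 tokens, more than 100,000 types) can be illustrated with a minimal sketch. The tokenizer and the sample utterances below are invented for illustration; real HrAL data are stored as CHAT transcripts in TalkBank.

```python
# Hedged sketch: tokens are running words, types are distinct word forms.
import re

def token_type_counts(utterances):
    """Count running words (tokens) and distinct word forms (types)."""
    tokens = []
    for utt in utterances:
        # Simple word tokenizer; a real CHAT pipeline would use CLAN tools
        tokens.extend(re.findall(r"\w+", utt.lower()))
    return len(tokens), len(set(tokens))

# Two invented Croatian-like utterances
n_tokens, n_types = token_type_counts(["Dobar dan svima", "dan za dan"])
print(n_tokens, n_types)  # 6 4
```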

  19. How sensory-motor systems impact the neural organization for language: direct contrasts between spoken and signed language

    Science.gov (United States)

    Emmorey, Karen; McCullough, Stephen; Mehta, Sonya; Grabowski, Thomas J.

    2014-01-01

    To investigate the impact of sensory-motor systems on the neural organization for language, we conducted an H₂¹⁵O-PET study of sign and spoken word production (picture-naming) and an fMRI study of sign and audio-visual spoken language comprehension (detection of a semantically anomalous sentence) with hearing bilinguals who are native users of American Sign Language (ASL) and English. Directly contrasting speech and sign production revealed greater activation in bilateral parietal cortex for signing, while speaking resulted in greater activation in bilateral superior temporal cortex (STC) and right frontal cortex, likely reflecting auditory feedback control. Surprisingly, the language production contrast revealed a relative increase in activation in bilateral occipital cortex for speaking. We speculate that greater activation in visual cortex for speaking may actually reflect cortical attenuation when signing, which functions to distinguish self-produced from externally generated visual input. Directly contrasting speech and sign comprehension revealed greater activation in bilateral STC for speech and greater activation in bilateral occipital-temporal cortex for sign. Sign comprehension, like sign production, engaged bilateral parietal cortex to a greater extent than spoken language. We hypothesize that posterior parietal activation in part reflects processing related to spatial classifier constructions in ASL and that anterior parietal activation may reflect covert imitation that functions as a predictive model during sign comprehension. The conjunction analysis for comprehension revealed that both speech and sign bilaterally engaged the inferior frontal gyrus (with more extensive activation on the left) and the superior temporal sulcus, suggesting an invariant bilateral perisylvian language system. We conclude that surface-level differences between sign and spoken languages should not be dismissed and are critical for understanding the neurobiology of language.

  20. Three-dimensional grammar in the brain: Dissociating the neural correlates of natural sign language and manually coded spoken language.

    Science.gov (United States)

    Jednoróg, Katarzyna; Bola, Łukasz; Mostowski, Piotr; Szwed, Marcin; Boguszewski, Paweł M; Marchewka, Artur; Rutkowski, Paweł

    2015-05-01

    In several countries, natural sign languages were considered inadequate for education. Instead, new sign-supported systems were created, based on the belief that spoken/written language is grammatically superior. One such system, called SJM (system językowo-migowy), preserves the grammatical and lexical structure of spoken Polish and since the 1960s has been extensively employed in schools and on TV. Nevertheless, the Deaf community avoids using SJM for everyday communication, its preferred language being PJM (polski język migowy), a natural sign language, structurally and grammatically independent of spoken Polish and featuring classifier constructions (CCs). Here, for the first time, we compare, using fMRI, the neural bases of natural vs. devised communication systems. Deaf signers were presented with three types of signed sentences (SJM and PJM with/without CCs). Consistent with previous findings, PJM with CCs, compared to either SJM or PJM without CCs, recruited the parietal lobes. The reverse comparison revealed activation in the anterior temporal lobes, suggesting increased semantic combinatory processes in lexical sign comprehension. Finally, PJM compared with SJM engaged left posterior superior temporal gyrus and anterior temporal lobe, areas crucial for sentence-level speech comprehension. We suggest that activity in these two areas reflects greater processing efficiency for naturally evolved sign language. Copyright © 2015 Elsevier Ltd. All rights reserved.

  1. Spoken language outcomes after hemispherectomy: factoring in etiology.

    Science.gov (United States)

    Curtiss, S; de Bode, S; Mathern, G W

    2001-12-01

    We analyzed postsurgery linguistic outcomes of 43 hemispherectomy patients operated on at UCLA. We rated spoken language (Spoken Language Rank, SLR) on a scale from 0 (no language) to 6 (mature grammar) and examined the effects of side of resection/damage, age at surgery/seizure onset, seizure control postsurgery, and etiology on language development. Etiology was defined as developmental (cortical dysplasia and prenatal stroke) and acquired pathology (Rasmussen's encephalitis and postnatal stroke). We found that clinical variables were predictive of language outcomes only when they were considered within distinct etiology groups. Specifically, children with developmental etiologies had lower SLRs than those with acquired pathologies (p = .0006); age factors correlated positively with higher SLRs only for children with acquired etiologies (p = .0006); right-sided resections led to higher SLRs only for the acquired group (p = .0008); and postsurgery seizure control correlated positively with SLR only for those with developmental etiologies (p = .0047). We argue that the variables considered are not independent predictors of spoken language outcome posthemispherectomy but should be viewed instead as characteristics of etiology. Copyright 2001 Elsevier Science.

  2. CASL The Common Algebraic Specification Language Semantics

    DEFF Research Database (Denmark)

    Haxthausen, Anne

    1998-01-01

    This is version 1.0 of the CASL Language Summary, annotated by the CoFI Semantics Task Group with the semantics of constructs. This is the first complete but possibly imperfect version of the semantics. It was compiled prior to the CoFI workshop at Cachan in November 1998.

  3. Prosodic Parallelism – comparing spoken and written language

    Directory of Open Access Journals (Sweden)

    Richard Wiese

    2016-10-01

    The Prosodic Parallelism hypothesis claims that adjacent prosodic categories prefer identical branching of internal adjacent constituents. According to Wiese and Speyer (2015), this preference implies that feet contained in the same phonological phrase display either binary or unary branching, but not different types of branching. The seemingly free schwa-zero alternations at the end of some words in German make it possible to test this hypothesis. The hypothesis was successfully tested in a corpus study that used large-scale bodies of written German. As some open questions remain, and as it is unclear whether Prosodic Parallelism is valid for the spoken modality as well, the present study extends this inquiry to spoken German. As in the previous study, the results of a corpus analysis recruiting a variety of linguistic constructions are presented. The Prosodic Parallelism hypothesis can be demonstrated to be valid for spoken German as well as for written German. The paper thus contributes to the question of whether prosodic preferences are similar between the spoken and written modes of a language. Some consequences of the results for the production of language are discussed.

  4. Inferring Speaker Affect in Spoken Natural Language Communication

    OpenAIRE

    Pon-Barry, Heather Roberta

    2012-01-01

    The field of spoken language processing is concerned with creating computer programs that can understand human speech and produce human-like speech. Regarding the problem of understanding human speech, there is currently growing interest in moving beyond speech recognition (the task of transcribing the words in an audio stream) and towards machine listening—interpreting the full spectrum of information in an audio stream. One part of machine listening, the problem that this thesis focuses on, ...

  5. Spoken Spanish Language Development at the High School Level: A Mixed-Methods Study

    Science.gov (United States)

    Moeller, Aleidine J.; Theiler, Janine

    2014-01-01

    Communicative approaches to teaching language have emphasized the centrality of oral proficiency in the language acquisition process, but research investigating oral proficiency has been surprisingly limited, yielding an incomplete understanding of spoken language development. This study investigated the development of spoken language at the high…

  6. Asian/Pacific Islander Languages Spoken by English Learners (ELs). Fast Facts

    Science.gov (United States)

    Office of English Language Acquisition, US Department of Education, 2015

    2015-01-01

    The Office of English Language Acquisition (OELA) has synthesized key data on English learners (ELs) into two-page PDF sheets, by topic, with graphics, plus key contacts. The topics for this report on Asian/Pacific Islander languages spoken by English Learners (ELs) include: (1) Top 10 Most Common Asian/Pacific Islander Languages Spoken Among ELs:…

  7. A dual contribution to the involuntary semantic processing of unexpected spoken words.

    Science.gov (United States)

    Parmentier, Fabrice B R; Turner, Jacqueline; Perez, Laura

    2014-02-01

    Sounds are a major cause of distraction. Unexpected to-be-ignored auditory stimuli presented in the context of an otherwise repetitive acoustic background ineluctably break through selective attention and distract people from an unrelated visual task (deviance distraction). This involuntary capture of attention by deviant sounds has been hypothesized to trigger their semantic appraisal and, in some circumstances, interfere with ongoing performance, but it remains unclear how such processing compares with the automatic processing of distractors in classic interference tasks (e.g., Stroop, flanker, Simon tasks). Using a cross-modal oddball task, we assessed the involuntary semantic processing of deviant sounds in the presence and absence of deviance distraction. The results revealed that some involuntary semantic analysis of spoken distractors occurs in the absence of deviance distraction but that this processing is significantly greater in its presence. We conclude that the automatic processing of spoken distractors reflects 2 contributions, one that is contingent upon deviance distraction and one that is independent from it.

  8. A Semantic Analysis of the Language of Advertising

    African Journals Online (AJOL)

    After a brief introduction to semantics and advertising language, the paper focuses on the linguistic realizations in English advertising from the semantic …

  9. Native Language Spoken as a Risk Marker for Tooth Decay.

    Science.gov (United States)

    Carson, J; Walker, L A; Sanders, B J; Jones, J E; Weddell, J A; Tomlin, A M

    2015-01-01

    The purpose of this study was to assess dmft, the number of decayed, missing (due to caries), and/or filled primary teeth, of English-speaking and non-English-speaking patients of a hospital-based pediatric dental clinic under the age of 72 months, to determine if native language is a risk marker for tooth decay. Records from an outpatient dental clinic that met the inclusion criteria were reviewed. Patient demographics and dmft score were recorded, and the patients were separated into three groups by the native language spoken by their parents: English, Spanish and all other languages. A total of 419 charts were assessed: 253 English-speaking, 126 Spanish-speaking, and 40 other native languages. After accounting for patient characteristics, dmft was significantly higher for the other language group than for the English-speaking group (p < 0.05). Those patients under 72 months of age whose parents' native language is not English or Spanish have the highest risk for increased dmft when compared to English- and Spanish-speaking patients. Providers should consider taking additional time to educate patients and their parents, in their native language, on the importance of routine dental care and oral hygiene.

  10. Pointing and Reference in Sign Language and Spoken Language: Anchoring vs. Identifying

    Science.gov (United States)

    Barberà, Gemma; Zwets, Martine

    2013-01-01

    In both signed and spoken languages, pointing serves to direct an addressee's attention to a particular entity. This entity may be either present or absent in the physical context of the conversation. In this article we focus on pointing directed to nonspeaker/nonaddressee referents in Sign Language of the Netherlands (Nederlandse Gebarentaal,…

  11. Introduction to Semantic Web Ontology Languages

    NARCIS (Netherlands)

    Antoniou, Grigoris; Franconi, Enrico; Van Harmelen, Frank

    2005-01-01

    The aim of this chapter is to give a general introduction to some of the ontology languages that play a prominent role on the Semantic Web, and to discuss the formal foundations of these languages. Web ontology languages will be the main carriers of the information that we will want to share and…

  12. The Listening and Spoken Language Data Repository: Design and Project Overview

    Science.gov (United States)

    Bradham, Tamala S.; Fonnesbeck, Christopher; Toll, Alice; Hecht, Barbara F.

    2018-01-01

    Purpose: The purpose of the Listening and Spoken Language Data Repository (LSL-DR) was to address a critical need for a systemwide outcome data-monitoring program for the development of listening and spoken language skills in highly specialized educational programs for children with hearing loss, as highlighted in Goal 3b of the 2007 Joint Committee…

  13. Semantic computing and language knowledge bases

    Science.gov (United States)

    Wang, Lei; Wang, Houfeng; Yu, Shiwen

    2017-09-01

    As the core proposition of the next-generation Web, the Semantic Web, semantic computing has been drawing more and more attention in both academia and industry. A lot of research has been conducted on the theory and methodology of the subject, and potential applications have also been investigated and proposed in many fields. The progress of semantic computing made so far cannot be detached from its supporting pivot, language resources, for instance, language knowledge bases. This paper proposes three perspectives of semantic computing from a macro view and describes the current state of affairs regarding the construction of language knowledge bases and the related research and applications that have been carried out on the basis of these resources, via a case study in the Institute of Computational Linguistics at Peking University.

  14. Development of Mandarin spoken language after pediatric cochlear implantation.

    Science.gov (United States)

    Li, Bei; Soli, Sigfrid D; Zheng, Yun; Li, Gang; Meng, Zhaoli

    2014-07-01

    The purpose of this study was to evaluate early spoken language development in young Mandarin-speaking children during the first 24 months after cochlear implantation, as measured by receptive and expressive vocabulary growth rates. Growth rates were compared with those of normally hearing children and with growth rates for English-speaking children with cochlear implants. Receptive and expressive vocabularies were measured with the simplified short form (SSF) version of the Mandarin Communicative Development Inventory (MCDI) in a sample of 112 pediatric implant recipients at baseline, 3, 6, 12, and 24 months after implantation. Implant ages ranged from 1 to 5 years. Scores were expressed in terms of normal equivalent ages, allowing normalized vocabulary growth rates to be determined. Scores for English-speaking children were re-expressed in these terms, allowing direct comparisons of Mandarin and English early spoken language development. Vocabulary growth rates during the first 12 months after implantation were similar to those for normally hearing children less than 16 months of age. Comparisons with growth rates for normally hearing children 16-30 months of age showed that the youngest implant age group (1-2 years) had an average growth rate 0.68 times that of normally hearing children, while the middle implant age group (2-3 years) had an average growth rate of 0.65, and the oldest implant age group (>3 years) had an average growth rate of 0.56, significantly less than the other two rates. Growth rates for English-speaking children with cochlear implants were 0.68 in the youngest group, 0.54 in the middle group, and 0.57 in the oldest group. Growth rates in the middle implant age groups for the two languages differed significantly. The SSF version of the MCDI is suitable for assessment of Mandarin language development during the first 24 months after cochlear implantation. Effects of implant age and duration of implantation can be compared directly across…
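    The normalized growth-rate measure described in this abstract can be sketched as simple arithmetic: vocabulary scores are converted to normal-equivalent ages, and the rate is the gain in equivalent age divided by elapsed chronological time, so 1.0 matches normally hearing peers. The numbers below are illustrative, not the study's data.

```python
# Hedged sketch of a normalized vocabulary growth rate.
# A rate of 1.0 matches normally hearing peers; <1.0 indicates slower growth.

def normalized_growth_rate(eq_age_start, eq_age_end, months_elapsed):
    """Gain in normal-equivalent age (months) per calendar month."""
    return (eq_age_end - eq_age_start) / months_elapsed

# e.g. equivalent age grows from 12 to 28 months over 24 calendar months
print(round(normalized_growth_rate(12, 28, 24), 2))  # 0.67
```

    A rate of about 0.67 on these invented numbers is in the range of the 0.56-0.68 rates reported for the implant age groups above.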

  15. Language and Culture in the Multiethnic Community: Spoken Language Assessment.

    Science.gov (United States)

    Matluck, Joseph H.; Mace-Matluck, Betty J.

    This paper discusses the sociolinguistic problems inherent in multilingual testing, and the accompanying dangers of cultural bias in either the visuals or the language used in a given test. The first section discusses English-speaking Americans' perception of foreign speakers in terms of: (1) physical features; (2) speech, specifically vocabulary,…

  16. Foreign Language Tutoring in Oral Conversations Using Spoken Dialog Systems

    Science.gov (United States)

    Lee, Sungjin; Noh, Hyungjong; Lee, Jonghoon; Lee, Kyusong; Lee, Gary Geunbae

    Although there have been enormous investments in English education around the world, the style of English instruction has changed little. Considering the shortcomings of current teaching-learning methodology, we have been investigating advanced computer-assisted language learning (CALL) systems. This paper summarizes a set of POSTECH approaches, including theories, technologies, systems, and field studies, and provides relevant pointers. On top of state-of-the-art spoken dialog system technologies, a variety of adaptations have been applied to overcome problems caused by the numerous errors and variations naturally produced by non-native speakers. Furthermore, a number of methods have been developed for generating educational feedback that helps learners become proficient. Integrating these efforts resulted in intelligent educational robots (Mero and Engkey) and a virtual 3D language learning game, Pomy. To verify the effects of our approaches on students' communicative abilities, we conducted a field study at an elementary school in Korea. The results showed that our CALL approaches can be enjoyable and fruitful activities for students. Although the results of this study bring us a step closer to understanding computer-based education, more studies are needed to consolidate the findings.

  17. Auditory and verbal memory predictors of spoken language skills in children with cochlear implants

    NARCIS (Netherlands)

    Hoog, B.E. de; Langereis, M.C.; Weerdenburg, M. van; Keuning, J.; Knoors, H.; Verhoeven, L.

    2016-01-01

    BACKGROUND: Large variability in individual spoken language outcomes remains a persistent finding in the group of children with cochlear implants (CIs), particularly in their grammatical development. AIMS: In the present study, we examined the extent of delay in lexical and morphosyntactic spoken

  19. Sign Language and Spoken Language for Children With Hearing Loss: A Systematic Review.

    Science.gov (United States)

    Fitzpatrick, Elizabeth M; Hamel, Candyce; Stevens, Adrienne; Pratt, Misty; Moher, David; Doucet, Suzanne P; Neuss, Deirdre; Bernstein, Anita; Na, Eunjung

    2016-01-01

    Permanent hearing loss affects 1 to 3 per 1000 children and interferes with typical communication development. Early detection through newborn hearing screening and hearing technology provide most children with the option of spoken language acquisition. However, no consensus exists on optimal interventions for spoken language development. The objective was to conduct a systematic review of the effectiveness of early sign and oral language intervention compared with oral language intervention only for children with permanent hearing loss. An a priori protocol was developed. Electronic databases (eg, Medline, Embase, CINAHL) from 1995 to June 2013 and gray literature sources were searched. Studies in English and French were included. Two reviewers screened potentially relevant articles. Outcomes of interest were measures of auditory, vocabulary, language, and speech production skills. All data collection and risk of bias assessments were completed and then verified by a second person. The Grading of Recommendations Assessment, Development and Evaluation (GRADE) approach was used to judge the strength of evidence. Eleven cohort studies met inclusion criteria, of which 8 included only children with severe to profound hearing loss with cochlear implants. Language development was the most frequently reported outcome. Other reported outcomes included speech and speech perception. Several measures and metrics were reported across studies, and descriptions of interventions were sometimes unclear. Very limited, and hence insufficient, high-quality evidence exists to determine whether sign language in combination with oral language is more effective than oral language therapy alone. More research is needed to supplement the evidence base. Copyright © 2016 by the American Academy of Pediatrics.

  20. Attentional Capture of Objects Referred to by Spoken Language

    Science.gov (United States)

    Salverda, Anne Pier; Altmann, Gerry T. M.

    2011-01-01

    Participants saw a small number of objects in a visual display and performed a visual detection or visual-discrimination task in the context of task-irrelevant spoken distractors. In each experiment, a visual cue was presented 400 ms after the onset of a spoken word. In experiments 1 and 2, the cue was an isoluminant color change and participants…

  1. Bayesian natural language semantics and pragmatics

    CERN Document Server

    Zeevat, Henk

    2015-01-01

    The contributions in this volume focus on the Bayesian interpretation of natural languages, which is widely used in areas of artificial intelligence, cognitive science, and computational linguistics. This is the first volume to take up topics in Bayesian Natural Language Interpretation and make proposals based on information theory, probability theory, and related fields. The methodologies offered here extend to the target semantic and pragmatic analyses of computational natural language interpretation. Bayesian approaches to natural language semantics and pragmatics are based on methods from signal processing and the causal Bayesian models pioneered especially by Pearl. In signal processing, the Bayesian method finds the most probable interpretation by finding the one that maximizes the product of the prior probability and the likelihood of the interpretation. It thus stresses the importance of a production model for interpretation as in Grice's contributions to pragmatics or in interpretation by abduction.
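The decision rule described in this abstract, choosing the interpretation that maximizes the product of prior probability and likelihood, can be illustrated with a toy word-sense example. All interpretations and probability values below are invented for illustration.

```python
def best_interpretation(signal, priors, likelihood):
    # argmax over interpretations i of P(i) * P(signal | i)
    return max(priors, key=lambda i: priors[i] * likelihood(signal, i))

# Invented prior probabilities over two senses of "bank"
priors = {"bank=institution": 0.7, "bank=riverside": 0.3}

def likelihood(signal, interpretation):
    # P(observed context | interpretation); invented values, with a small
    # floor for unseen (signal, interpretation) pairs
    table = {
        ("deposit money", "bank=institution"): 0.9,
        ("deposit money", "bank=riverside"): 0.05,
    }
    return table.get((signal, interpretation), 0.01)

print(best_interpretation("deposit money", priors, likelihood))
# -> bank=institution  (0.7 * 0.9 beats 0.3 * 0.05)
```

The prior here plays the role of the production model the abstract mentions: an interpretation is favored both for being plausible a priori and for making the observed utterance likely.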

  2. Grasp it loudly! Supporting actions with semantically congruent spoken action words.

    Directory of Open Access Journals (Sweden)

    Raphaël Fargier

    Evidence for cross-talk between motor and language brain structures has accumulated over the past several years. However, while a significant amount of research has focused on the interaction between language perception and action, little attention has been paid to the potential impact of language production on overt motor behaviour. The aim of the present study was to test whether verbalizing during a grasp-to-displace action would affect motor behaviour and, if so, whether this effect would depend on the semantic content of the pronounced word (Experiment I). Furthermore, we sought to test the stability of such effects in a different group of participants and investigate at which stage of the motor act language intervenes (Experiment II). For this, participants were asked to reach, grasp and displace an object while overtly pronouncing verbal descriptions of the action ("grasp" and "put down") or unrelated words (e.g. "butterfly" and "pigeon"). Fine-grained analyses of several kinematic parameters such as velocity peaks revealed that when participants produced action-related words their movements became faster compared to conditions in which they did not verbalize or in which they produced words that were not related to the action. These effects likely result from the functional interaction between semantic retrieval of the words and the planning and programming of the action. Therefore, links between (action) language and motor structures are significant to the point that language can refine overt motor behaviour.

  3. The employment of a spoken language computer applied to an air traffic control task.

    Science.gov (United States)

    Laveson, J. I.; Silver, C. A.

    1972-01-01

    Assessment of the merits of a limited spoken language (56 words) computer in a simulated air traffic control (ATC) task. An airport zone approximately 60 miles in diameter with a traffic flow simulation ranging from single-engine to commercial jet aircraft provided the workload for the controllers. This research determined that, under the circumstances of the experiments carried out, the use of a spoken-language computer would not improve the controller performance.

  4. Iconicity as a general property of language: evidence from spoken and signed languages

    Directory of Open Access Journals (Sweden)

    Pamela Perniss

    2010-12-01

    Current views about language are dominated by the idea of arbitrary connections between linguistic form and meaning. However, if we look beyond the more familiar Indo-European languages and also include both spoken and signed language modalities, we find that motivated, iconic form-meaning mappings are, in fact, pervasive in language. In this paper, we review the different types of iconic mappings that characterize languages in both modalities, including the predominantly visually iconic mappings in signed languages. Having shown that iconic mappings are present across languages, we then proceed to review evidence showing that language users (signers and speakers) exploit iconicity in language processing and language acquisition. While not discounting the presence and importance of arbitrariness in language, we put forward the idea that iconicity also needs to be recognized as a general property of language, which may serve the function of reducing the gap between linguistic form and conceptual representation to allow the language system to hook up to motor and perceptual experience.

  5. Spoken Word Recognition in Adolescents with Autism Spectrum Disorders and Specific Language Impairment

    Science.gov (United States)

    Loucas, Tom; Riches, Nick; Baird, Gillian; Pickles, Andrew; Simonoff, Emily; Chandler, Susie; Charman, Tony

    2013-01-01

    Spoken word recognition, during gating, appears intact in specific language impairment (SLI). This study used gating to investigate the process in adolescents with autism spectrum disorders plus language impairment (ALI). Adolescents with ALI, SLI, and typical language development (TLD), matched on nonverbal IQ listened to gated words that varied…

  6. Language and culture modulate online semantic processing.

    Science.gov (United States)

    Ellis, Ceri; Kuipers, Jan R; Thierry, Guillaume; Lovett, Victoria; Turnbull, Oliver; Jones, Manon W

    2015-10-01

    Language has been shown to influence non-linguistic cognitive operations such as colour perception, object categorization and motion event perception. Here, we show that language also modulates higher level processing, such as semantic knowledge. Using event-related brain potentials, we show that highly fluent Welsh-English bilinguals require significantly less processing effort when reading sentences in Welsh which contain factually correct information about Wales, than when reading sentences containing the same information presented in English. Crucially, culturally irrelevant information was processed similarly in both Welsh and English. Our findings show that even in highly proficient bilinguals, language interacts with factors associated with personal identity, such as culture, to modulate online semantic processing. © The Author (2015). Published by Oxford University Press.

  7. Improving Spoken Language Outcomes for Children With Hearing Loss: Data-driven Instruction.

    Science.gov (United States)

    Douglas, Michael

    2016-02-01

    To assess the effects of data-driven instruction (DDI) on spoken language outcomes of children with cochlear implants and hearing aids. Retrospective, matched-pairs comparison of post-treatment speech/language data of children who did and did not receive DDI. Private, spoken-language preschool for children with hearing loss. Eleven matched pairs of children with cochlear implants who attended the same spoken language preschool. Groups were matched for age of hearing device fitting, time in the program, degree of predevice fitting hearing loss, sex, and age at testing. Daily informal language samples were collected and analyzed over a 2-year period, per preschool protocol. Annual informal and formal spoken language assessments in articulation, vocabulary, and omnibus language were administered at the end of three time intervals: baseline, end of year one, and end of year two. The primary outcome measures were total raw score performance of spontaneous utterance sentence types and syntax element use as measured by the Teacher Assessment of Spoken Language (TASL). In addition, standardized assessments (the Clinical Evaluation of Language Fundamentals--Preschool Version 2 (CELF-P2), the Expressive One-Word Picture Vocabulary Test (EOWPVT), the Receptive One-Word Picture Vocabulary Test (ROWPVT), and the Goldman-Fristoe Test of Articulation 2 (GFTA2)) were also administered and compared with the control group. The DDI group demonstrated significantly higher raw scores on the TASL each year of the study. The DDI group also achieved statistically significant higher scores for total language on the CELF-P2 and expressive vocabulary on the EOWPVT, but not for articulation or receptive vocabulary. Post-hoc assessment revealed that 78% of the students in the DDI group achieved scores in the average range compared with 59% in the control group. The preliminary results of this study support further investigation regarding DDI to investigate whether this method can consistently

  8. Tonal Language Background and Detecting Pitch Contour in Spoken and Musical Items

    Science.gov (United States)

    Stevens, Catherine J.; Keller, Peter E.; Tyler, Michael D.

    2013-01-01

    An experiment investigated the effect of tonal language background on discrimination of pitch contour in short spoken and musical items. It was hypothesized that extensive exposure to a tonal language attunes perception of pitch contour. Accuracy and reaction times of adult participants from tonal (Thai) and non-tonal (Australian English) language…

  9. CASL - The CoFI Algebraic Specification Language - Semantics

    DEFF Research Database (Denmark)

    Haxthausen, Anne

    1999-01-01

    This is version 1.0 of the CASL Language Summary, annotated by the CoFI Semantics Task Group with the semantics of constructs. This is the second complete but possibly imperfect version of the semantics. It was compiled prior to the CoFI workshop in Amsterdam in March 1999.

  10. Orthographic effects in spoken word recognition: Evidence from Chinese.

    Science.gov (United States)

    Qu, Qingqing; Damian, Markus F

    2017-06-01

    Extensive evidence from alphabetic languages demonstrates a role of orthography in the processing of spoken words. Because alphabetic systems explicitly code speech sounds, such effects are perhaps not surprising. However, it is less clear whether orthographic codes are involuntarily accessed from spoken words in languages with non-alphabetic systems, in which the sound-spelling correspondence is largely arbitrary. We investigated the role of orthography via a semantic relatedness judgment task: native Mandarin speakers judged whether or not spoken word pairs were related in meaning. Word pairs were either semantically related, orthographically related, or unrelated. Results showed that relatedness judgments were made faster for word pairs that were semantically related than for unrelated word pairs. Critically, orthographic overlap on semantically unrelated word pairs induced a significant increase in response latencies. These findings indicate that orthographic information is involuntarily accessed in spoken-word recognition, even in a non-alphabetic language such as Chinese.

  11. Auditory and verbal memory predictors of spoken language skills in children with cochlear implants.

    Science.gov (United States)

    de Hoog, Brigitte E; Langereis, Margreet C; van Weerdenburg, Marjolijn; Keuning, Jos; Knoors, Harry; Verhoeven, Ludo

    2016-10-01

    Large variability in individual spoken language outcomes remains a persistent finding in the group of children with cochlear implants (CIs), particularly in their grammatical development. In the present study, we examined the extent of delay in lexical and morphosyntactic spoken language levels of children with CIs as compared to those of a normative sample of age-matched children with normal hearing. Furthermore, the predictive value of auditory and verbal memory factors in the spoken language performance of implanted children was analyzed. Thirty-nine profoundly deaf children with CIs were assessed using a test battery including measures of lexical, grammatical, auditory and verbal memory tests. Furthermore, child-related demographic characteristics were taken into account. The majority of the children with CIs did not reach age-equivalent lexical and morphosyntactic language skills. Multiple linear regression analyses revealed that lexical spoken language performance in children with CIs was best predicted by age at testing, phoneme perception, and auditory word closure. The morphosyntactic language outcomes of the CI group were best predicted by lexicon, auditory word closure, and auditory memory for words. Qualitatively good speech perception skills appear to be crucial for lexical and grammatical development in children with CIs. Furthermore, strongly developed vocabulary skills and verbal memory abilities predict morphosyntactic language skills. Copyright © 2016 Elsevier Ltd. All rights reserved.

  12. RECEPTION OF SPOKEN ENGLISH. MISHEARINGS IN THE LANGUAGE OF BUSINESS AND LAW

    Directory of Open Access Journals (Sweden)

    HOREA Ioana-Claudia

    2013-07-01

    Spoken English may sometimes pose a peculiar problem in the reception and decoding of auditory signals, which can lead to mishearings. Arising from erroneous perception, from a failure to understand the communication, and from an involuntary mental replacement of a certain element or structure by a more familiar one, these mistakes are most frequently encountered when listening to songs, where the melodic line, with its somewhat altered intonation, can facilitate confusion and produce the so-called mondegreens. Still, instances can be found in all domains of verbal communication, as shown by several examples noticed during classes of English as a foreign language (EFL) taught to non-philological students. Production and perception of language depend on a series of elements that influence the encoding and decoding of the message. These filters belong to both psychological and semantic categories and can interfere with the accuracy of emission and reception. Poor understanding of a notion or concept, combined with greater familiarity with a similar-sounding one, results in unconsciously picking the structure that is better known: 'hearing' something other than what was said, something closer to the receiver's preoccupations and knowledge than the original structure or word. Some mishearings are particularly relevant to teaching English for Specific Purposes (ESP), such as those encountered in classes of Business English or English for Law. Though not very likely to occur often, given an intuitively felt inaccuracy (the terms are known by their users to need to be more specialised), such examples are still not negligible. We therefore consider that they deserve closer attention, as they may become quite relevant in the global context of increasing workforce migration and the spread of multinational companies.

  13. Delayed Anticipatory Spoken Language Processing in Adults with Dyslexia—Evidence from Eye-tracking.

    Science.gov (United States)

    Huettig, Falk; Brouwer, Susanne

    2015-05-01

    It is now well established that anticipation of upcoming input is a key characteristic of spoken language comprehension. It has also frequently been observed that literacy influences spoken language processing. Here, we investigated whether anticipatory spoken language processing is related to individuals' word reading abilities. Dutch adults with dyslexia and a control group participated in two eye-tracking experiments. Experiment 1 was conducted to assess whether adults with dyslexia show the typical language-mediated eye gaze patterns. Eye movements of both adults with and without dyslexia closely replicated earlier research: spoken language is used to direct attention to relevant objects in the environment in a closely time-locked manner. In Experiment 2, participants received instructions (e.g., 'Kijk naar de(COM) afgebeelde piano(COM)', look at the displayed piano) while viewing four objects. Articles (Dutch 'het' or 'de') were gender marked such that the article agreed in gender only with the target, and thus, participants could use gender information from the article to predict the target object. The adults with dyslexia anticipated the target objects but much later than the controls. Moreover, participants' word reading scores correlated positively with their anticipatory eye movements. We conclude by discussing the mechanisms by which reading abilities may influence predictive language processing. Copyright © 2015 John Wiley & Sons, Ltd.

  14. SPOKEN-LANGUAGE FEATURES IN CASUAL CONVERSATION: A Case of EFL Learners' Casual Conversation

    Directory of Open Access Journals (Sweden)

    Aris Novi

    2017-12-01

    Spoken text differs from written text in its context dependency, turn-taking organization, and dynamic structure. EFL learners, however, sometimes find it difficult to produce the typical characteristics of spoken language, particularly in casual talk. When asked to conduct a conversation, some of them tend to be script-based, which is considered unnatural. Using the theory of Thornbury (2005), this paper aims to analyze characteristics of spoken language in casual conversation, covering spontaneity, interactivity, interpersonality, and coherence. The study used discourse analysis to reveal these four features in the turns and moves of three casual conversations. The findings indicate that not all sub-features were used in the conversations: the spontaneity features were used 132 times; the interactivity features 1,081 times; the interpersonality features 257 times; and the coherence (negotiation) features 526 times. The results also show that some participants produced certain sub-features naturally and dominantly, and vice versa. This finding is expected to provide a model of how spoken interaction should be carried out. More importantly, it could raise English teachers' and lecturers' awareness of teaching the features of spoken language, so that students can develop their communicative competence as native speakers of English do.

  15. A layered semantics for a parallel object-oriented language

    NARCIS (Netherlands)

    P.H.M. America (Pierre); J.J.M.M. Rutten (Jan)

    1990-01-01

    textabstractWe develop a denotational semantics for POOL, a parallel object-oriented programming language. The main contribution of this semantics is an accurate mathematical model of the most important concept in object-oriented programming: the object. This is achieved by structuring the semantics

  16. Intervention Effects on Spoken-Language Outcomes for Children with Autism: A Systematic Review and Meta-Analysis

    Science.gov (United States)

    Hampton, L. H.; Kaiser, A. P.

    2016-01-01

    Background: Although spoken-language deficits are not core to an autism spectrum disorder (ASD) diagnosis, many children with ASD do present with delays in this area. Previous meta-analyses have assessed the effects of intervention on reducing autism symptomatology, but have not determined if intervention improves spoken language. This analysis…

  17. Word reading skill predicts anticipation of upcoming spoken language input: a study of children developing proficiency in reading.

    Science.gov (United States)

    Mani, Nivedita; Huettig, Falk

    2014-10-01

    Despite the efficiency with which language users typically process spoken language, a growing body of research finds substantial individual differences in both the speed and accuracy of spoken language processing potentially attributable to participants' literacy skills. Against this background, the current study took a look at the role of word reading skill in listeners' anticipation of upcoming spoken language input in children at the cusp of learning to read; if reading skills affect predictive language processing, then children at this stage of literacy acquisition should be most susceptible to the effects of reading skills on spoken language processing. We tested 8-year-olds on their prediction of upcoming spoken language input in an eye-tracking task. Although children, like in previous studies to date, were successfully able to anticipate upcoming spoken language input, there was a strong positive correlation between children's word reading skills (but not their pseudo-word reading and meta-phonological awareness or their spoken word recognition skills) and their prediction skills. We suggest that these findings are most compatible with the notion that the process of learning orthographic representations during reading acquisition sharpens pre-existing lexical representations, which in turn also supports anticipation of upcoming spoken words. Copyright © 2014 Elsevier Inc. All rights reserved.

  18. Grammatical Deviations in the Spoken and Written Language of Hebrew-Speaking Children With Hearing Impairments.

    Science.gov (United States)

    Tur-Kaspa, Hana; Dromi, Esther

    2001-04-01

    The present study reports a detailed analysis of written and spoken language samples of Hebrew-speaking children aged 11-13 years who are deaf. It focuses on the description of various grammatical deviations in the two modalities. Participants were 13 students with hearing impairments (HI) attending special classrooms integrated into two elementary schools in Tel Aviv, Israel, and 9 students with normal hearing (NH) in regular classes in these same schools. Spoken and written language samples were collected from all participants using the same five preplanned elicitation probes. Students with HI were found to display significantly more grammatical deviations than their NH peers in both their spoken and written language samples. Most importantly, between-modality differences were noted. The participants with HI exhibited significantly more grammatical deviations in their written language samples than in their spoken samples. However, the distribution of grammatical deviations across categories was similar in the two modalities. The most common grammatical deviations in order of their frequency were failure to supply obligatory morphological markers, failure to mark grammatical agreement, and the omission of a major syntactic constituent in a sentence. Word order violations were rarely recorded in the Hebrew samples. Performance differences in the two modalities encourage clinicians and teachers to facilitate target linguistic forms in diverse communication contexts. Furthermore, the identification of linguistic targets for intervention must be based on the unique grammatical structure of the target language.

  19. What Comes First, What Comes Next: Information Packaging in Written and Spoken Language

    Directory of Open Access Journals (Sweden)

    Vladislav Smolka

    2017-07-01

    The paper explores similarities and differences in the strategies of structuring information at sentence level in spoken and written language, respectively. In particular, it is concerned with the position of the rheme in the sentence in the two modalities of language, and with the application and correlation of the end-focus and end-weight principles. The assumption is that while there is a general tendency in both written and spoken language to place the focus in or close to the final position, owing to the limitations imposed by short-term memory capacity (and possibly by other factors), for the sake of easy processability it may occasionally be more felicitous in spoken language to place the rhematic element in the initial position, or at least close to the beginning of the sentence. The paper aims to identify differences in the function of selected grammatical structures in written and spoken language, respectively, and to point out circumstances under which initial focus is a convenient alternative to the usual end-focus principle.

  20. Cognitive Predictors of Spoken Word Recognition in Children With and Without Developmental Language Disorders.

    Science.gov (United States)

    Evans, Julia L; Gillam, Ronald B; Montgomery, James W

    2018-05-10

    This study examined the influence of cognitive factors on spoken word recognition in children with developmental language disorder (DLD) and typically developing (TD) children. Participants included 234 children (aged 7;0-11;11 years;months), 117 with DLD and 117 TD children, propensity matched for age, gender, socioeconomic status, and maternal education. Children completed a series of standardized assessment measures, a forward gating task, a rapid automatic naming task, and a series of tasks designed to examine cognitive factors hypothesized to influence spoken word recognition, including phonological working memory, updating, attention shifting, and interference inhibition. Spoken word recognition for both initial and final accept gate points did not differ for children with DLD and TD controls after controlling for target word knowledge in both groups. The 2 groups also did not differ on measures of updating, attention switching, and interference inhibition. Despite the lack of difference on these measures, for children with DLD, attention shifting and interference inhibition were significant predictors of spoken word recognition, whereas updating and receptive vocabulary were significant predictors of speed of spoken word recognition for the children in the TD group. Contrary to expectations, after controlling for target word knowledge, spoken word recognition did not differ for children with DLD and TD controls; however, the cognitive processing factors that influenced children's ability to recognize the target word in a stream of speech differed qualitatively for children with and without DLD.

  1. Semantic facilitation in bilingual first language acquisition.

    Science.gov (United States)

    Bilson, Samuel; Yoshida, Hanako; Tran, Crystal D; Woods, Elizabeth A; Hills, Thomas T

    2015-07-01

    Bilingual first language learners face unique challenges that may influence the rate and order of early word learning relative to monolinguals. A comparison of the productive vocabularies of 435 children between the ages of 6 months and 7 years-181 of whom were bilingual English learners-found that monolinguals learned both English words and all-language concepts faster than bilinguals. However, bilinguals showed an enhancement of an effect previously found in monolinguals-the preference for learning words with more associative cues. Though both monolinguals and bilinguals were best fit by a similar model of word learning, semantic network structure and growth indicated that the two groups were learning English words in a different order. Further, in comparison with a model of two-monolinguals-in-one-mind, bilinguals overproduced translational equivalents. Our results support an emergent account of bilingual first language acquisition, where learning a word in one language facilitates its acquisition in a second language. Copyright © 2015 Elsevier B.V. All rights reserved.

  2. LEARNING SEMANTICS-ENHANCED LANGUAGE MODELS APPLIED TO UNSUPERVISED WSD

    Energy Technology Data Exchange (ETDEWEB)

    VERSPOOR, KARIN [Los Alamos National Laboratory; LIN, SHOU-DE [Los Alamos National Laboratory

    2007-01-29

    An N-gram language model aims at capturing statistical syntactic word order information from corpora. Although the concept of language models has been applied extensively to handle a variety of NLP problems with reasonable success, the standard model does not incorporate semantic information, and consequently limits its applicability to semantic problems such as word sense disambiguation. We propose a framework that integrates semantic information into the language model schema, allowing a system to exploit both syntactic and semantic information to address NLP problems. Furthermore, acknowledging the limited availability of semantically annotated data, we discuss how the proposed model can be learned without annotated training examples. Finally, we report on a case study showing how the semantics-enhanced language model can be applied to unsupervised word sense disambiguation with promising results.
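As a rough illustration of the idea, the sketch below builds a sense-level bigram model from a tiny hand-labeled corpus and uses it to pick the sense of an ambiguous word that best fits the semantic context. The corpus, sense inventory, and smoothing are all invented for the example, not taken from the paper.

```python
from collections import defaultdict

# Toy sense-tagged corpus: each token is a (word, sense) pair. In a real system
# the sense inventory would come from a resource such as WordNet and the counts
# from a large corpus; everything here is invented for illustration.
corpus = [
    [("deposit", "finance"), ("money", "finance"), ("bank", "finance")],
    [("river", "nature"), ("bank", "nature"), ("erosion", "nature")],
    [("withdraw", "finance"), ("money", "finance"), ("bank", "finance")],
]

bigram = defaultdict(int)   # counts of (previous sense, sense) transitions
unigram = defaultdict(int)  # counts of senses
for sent in corpus:
    prev = "<s>"
    for _, sense in sent:
        bigram[(prev, sense)] += 1
        unigram[sense] += 1
        prev = sense
unigram["<s>"] = len(corpus)

def p_sense(prev, sense, alpha=0.1):
    """Add-alpha smoothed P(sense | prev): the semantic layer of the model."""
    n_senses = len(unigram)
    return (bigram[(prev, sense)] + alpha) / (unigram[prev] + alpha * n_senses)

def disambiguate(context_sense, candidate_senses):
    """Pick the sense of an ambiguous word that best fits the context."""
    return max(candidate_senses, key=lambda s: p_sense(context_sense, s))

print(disambiguate("finance", ["finance", "nature"]))  # finance
print(disambiguate("nature", ["finance", "nature"]))   # nature
```

Because the sense transitions are learned from raw co-occurrence, the same machinery can in principle be trained without manually annotated examples, which is the unsupervised setting the abstract describes.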

  3. Semantics by levels: An example for an image language

    International Nuclear Information System (INIS)

    Fasciano, M.; Levialdi, S.; Tortora, G.

    1984-01-01

    Ambiguities in formal language constructs may decrease both the understanding and the coding efficiency of a program. Within an image language, two semantic levels have been detected, corresponding to the lower level (pixel-based) and to the higher level (image-based). Denotational semantics has been used to define both levels within PIXAL (an image language) in order to enable the reader to visualize a concrete application of the semantic levels and their implications in a programming environment. This paper presents the semantics of different levels of conceptualization in the abstract formal description of an image language. The disambiguation of the meaning of special purpose constructs that imply either the elementary (pixel) level or the high image (array) level is naturally obtained by means of such semantic clauses. Perhaps non-von Neumann architectures on which hierarchical computations may be performed could also benefit from the use of semantic clauses to make explicit the different levels at which such computations are executed.

  4. THE IMPLEMENTATION OF COMMUNICATIVE LANGUAGE TEACHING (CLT) TO TEACH SPOKEN RECOUNTS IN SENIOR HIGH SCHOOL

    Directory of Open Access Journals (Sweden)

    Eri Rusnawati

    2016-10-01

    Full Text Available The aim of this study was to describe the implementation of the Communicative Language Teaching (CLT) method in teaching spoken recounts. This is a qualitative study describing phenomena that occurred in the classroom. The data are the behavior and responses of students learning spoken recounts through the CLT method. The subjects were the 34 students of class X of SMA Negeri 1 Kuaro. Observation and interviews were conducted to collect data on teaching spoken recounts through three activities (presentation, role-play, and performing procedures). Among the findings is that CLT improved the students' speaking ability in learning recounts. Based on the improvement chart, it is concluded that the students' grammar, vocabulary, pronunciation, fluency, and performance improved; that is, the students' spoken recount performance increased. Had the presentation been placed at the end of the sequence of activities, the students' spoken recount performance would have been even better. In conclusion, the implementation of the CLT method with its three practices contributed to improving the students' speaking ability in learning recounts, and it even led them to construct meaningful communication with confidence. Keywords: Communicative Language Teaching (CLT), recount, speaking, student responses

  5. On-Line Syntax: Thoughts on the Temporality of Spoken Language

    Science.gov (United States)

    Auer, Peter

    2009-01-01

    One fundamental difference between spoken and written language has to do with the "linearity" of speaking in time, in that the temporal structure of speaking is inherently the outcome of an interactive process between speaker and listener. But despite the status of "linearity" as one of Saussure's fundamental principles, in practice little more…

  6. Acquisition of graphic communication by a young girl without comprehension of spoken language.

    Science.gov (United States)

    von Tetzchner, S; Øvreeide, K D; Jørgensen, K K; Ormhaug, B M; Oxholm, B; Warme, R

    To describe a graphic-mode communication intervention involving a girl with intellectual impairment and autism who did not develop comprehension of spoken language. The aim was to teach graphic-mode vocabulary that reflected her interests, preferences, and the activities and routines of her daily life, by providing sufficient cues to the meanings of the graphic representations so that she would not need to comprehend spoken instructions. An individual case study design was selected, including the use of written records, participant observation, and registration of the girl's graphic vocabulary and use of graphic signs and other communicative expressions. While the girl's comprehension (and hence use) of spoken language remained lacking over a 3-year period, she acquired an active use of over 80 photographs and pictograms. The girl was able to cope better with the cognitive and attentional requirements of graphic communication than those of spoken language and manual signs, which had been the focus of earlier interventions. Her achievements demonstrate that it is possible for communication-impaired children to learn to use an augmentative and alternative communication system without speech comprehension, provided the intervention utilizes functional strategies and non-language cues to the meaning of the graphic representations that are taught.

  7. Expected Test Scores for Preschoolers with a Cochlear Implant Who Use Spoken Language

    Science.gov (United States)

    Nicholas, Johanna G.; Geers, Ann E.

    2008-01-01

    Purpose: The major purpose of this study was to provide information about expected spoken language skills of preschool-age children who are deaf and who use a cochlear implant. A goal was to provide "benchmarks" against which those skills could be compared, for a given age at implantation. We also examined whether parent-completed…

  8. Personality Structure in the Trait Lexicon of Hindi, a Major Language Spoken in India

    NARCIS (Netherlands)

    Singh, Jitendra K.; Misra, Girishwar; De Raad, Boele

    2013-01-01

    The psycho-lexical approach is extended to Hindi, a major language spoken in India. From both the dictionary and from Hindi novels, a huge set of personality descriptors was put together, ultimately reduced to a manageable set of 295 trait terms. Both self and peer ratings were collected on those

  9. Developing and Testing EVALOE: A Tool for Assessing Spoken Language Teaching and Learning in the Classroom

    Science.gov (United States)

    Gràcia, Marta; Vega, Fàtima; Galván-Bovaira, Maria José

    2015-01-01

    Broadly speaking, the teaching of spoken language in Spanish schools has not been approached in a systematic way. Changes in school practices are needed in order to allow all children to become competent speakers and to understand and construct oral texts that are appropriate in different contexts and for different audiences both inside and…

  10. A Spoken-Language Intervention for School-Aged Boys with Fragile X Syndrome

    Science.gov (United States)

    McDuffie, Andrea; Machalicek, Wendy; Bullard, Lauren; Nelson, Sarah; Mello, Melissa; Tempero-Feigles, Robyn; Castignetti, Nancy; Abbeduto, Leonard

    2016-01-01

    Using a single case design, a parent-mediated spoken-language intervention was delivered to three mothers and their school-aged sons with fragile X syndrome, the leading inherited cause of intellectual disability. The intervention was embedded in the context of shared storytelling using wordless picture books and targeted three empirically derived…

  11. Phonotactic spoken language identification with limited training data

    CSIR Research Space (South Africa)

    Peche, M

    2007-08-01

    Full Text Available The authors investigate the addition of a new language, for which limited resources are available, to a phonotactic language identification system. Two classes of approaches are studied: in the first class, only existing phonetic recognizers...
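A minimal phonotactic scorer in the spirit of this approach can be sketched as follows; the phone strings and language labels are invented stand-ins for the output of the phonetic recognizers the abstract mentions.

```python
from collections import Counter
import math

# Hypothetical phone-string training data for two languages; a real phonotactic
# LID system would obtain these by running a phone recognizer over audio.
train = {
    "langA": ["p a t a k a", "t a p a", "k a t a p a"],
    "langB": ["s t r e n t s", "s p r i n t", "s t r i k t"],
}

def bigrams(phone_string):
    seq = ["<s>"] + phone_string.split()
    return list(zip(seq, seq[1:]))

# One phone-bigram model per language.
models = {}
for lang, sents in train.items():
    counts = Counter(bg for s in sents for bg in bigrams(s))
    models[lang] = (counts, sum(counts.values()))

def log_score(phones, lang, alpha=0.5):
    """Smoothed log-likelihood of a phone sequence under one language model."""
    counts, total = models[lang]
    vocab = len(counts) + 1
    return sum(math.log((counts[bg] + alpha) / (total + alpha * vocab))
               for bg in bigrams(phones))

def identify(phones):
    return max(models, key=lambda lang: log_score(phones, lang))

print(identify("t a k a"))    # langA
print(identify("s t r i p"))  # langB
```

With limited training data for a new language, the smoothing constant and backoff scheme matter far more than in the well-resourced case, which is the situation the paper investigates.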

  12. Using semantic analysis to improve speech recognition performance

    OpenAIRE

    Erdoğan, Hakan; Erdogan, Hakan; Sarıkaya, Ruhi; Sarikaya, Ruhi; Chen, Stanley F.; Gao, Yuqing; Picheny, Michael

    2005-01-01

    Although syntactic structure has been used in recent work in language modeling, there has not been much effort in using semantic analysis for language models. In this study, we propose three new language modeling techniques that use semantic analysis for spoken dialog systems. We call these methods concept sequence modeling, two-level semantic-lexical modeling, and joint semantic-lexical modeling. These models combine lexical information with varying amounts of semantic information, using ann...
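The two-level semantic-lexical decomposition can be shown schematically; the concept labels and probabilities below are invented for illustration, and the real models estimate these terms from dialog corpora.

```python
# Hedged sketch of two-level semantic-lexical modeling: the probability of a
# word is decomposed into a concept-sequence model P(concept | previous concept)
# times a concept-conditioned lexical model P(word | concept).
p_concept = {("<s>", "CITY"): 0.4, ("CITY", "DATE"): 0.5}        # semantic level
p_word_given_concept = {("boston", "CITY"): 0.2,                 # lexical level
                        ("tomorrow", "DATE"): 0.6}

def p_word(prev_concept, concept, word, floor=1e-6):
    return p_concept.get((prev_concept, concept), floor) * \
           p_word_given_concept.get((word, concept), floor)

print(round(p_word("<s>", "CITY", "boston"), 3))  # 0.08
```

Joint semantic-lexical modeling, the third variant named above, would instead condition on (concept, word) pairs directly rather than factoring them.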

  13. Cochlear implants and spoken language processing abilities: Review and assessment of the literature

    OpenAIRE

    Peterson, Nathaniel R.; Pisoni, David B.; Miyamoto, Richard T.

    2010-01-01

    Cochlear implants (CIs) process sounds electronically and then transmit electric stimulation to the cochlea of individuals with sensorineural deafness, restoring some sensation of auditory perception. Many congenitally deaf CI recipients achieve a high degree of accuracy in speech perception and develop near-normal language skills. Post-lingually deafened implant recipients often regain the ability to understand and use spoken language with or without the aid of visual input (i.e. lip reading...

  14. Usable, Real-Time, Interactive Spoken Language Systems

    Science.gov (United States)

    1994-09-01

    Similarly, we included derivations (mostly plurals and possessives) of many open-class words in the domain. We also added about 400 concatenated word... generated using a system of 'realization rules', which map the grammatical relation an argument bears to the head onto the semantic relation... syntactic categories as well. Representations of this form contain significantly more internal structure than specialized sublanguage models. This can be

  15. The effects of sign language on spoken language acquisition in children with hearing loss: a systematic review protocol.

    Science.gov (United States)

    Fitzpatrick, Elizabeth M; Stevens, Adrienne; Garritty, Chantelle; Moher, David

    2013-12-06

    Permanent childhood hearing loss affects 1 to 3 per 1000 children and frequently disrupts typical spoken language acquisition. Early identification of hearing loss through universal newborn hearing screening and the use of new hearing technologies including cochlear implants make spoken language an option for most children. However, there is no consensus on what constitutes optimal interventions for children when spoken language is the desired outcome. Intervention and educational approaches ranging from oral language only to oral language combined with various forms of sign language have evolved. Parents are therefore faced with important decisions in the first months of their child's life. This article presents the protocol for a systematic review of the effects of using sign language in combination with oral language intervention on spoken language acquisition. Studies addressing early intervention will be selected in which therapy involving oral language intervention and any form of sign language or sign support is used. Comparison groups will include children in early oral language intervention programs without sign support. The primary outcomes of interest to be examined include all measures of auditory, vocabulary, language, speech production, and speech intelligibility skills. We will include randomized controlled trials, controlled clinical trials, and other quasi-experimental designs that include comparator groups as well as prospective and retrospective cohort studies. Case-control, cross-sectional, case series, and case studies will be excluded. Several electronic databases will be searched (for example, MEDLINE, EMBASE, CINAHL, PsycINFO) as well as grey literature and key websites. We anticipate that a narrative synthesis of the evidence will be required. We will carry out meta-analysis for outcomes if clinical similarity, quantity and quality permit quantitative pooling of data. We will conduct subgroup analyses if possible according to severity

  16. Retinoic acid signaling: a new piece in the spoken language puzzle

    Directory of Open Access Journals (Sweden)

    Jon-Ruben Van Rhijn

    2015-11-01

    Full Text Available Speech requires precise motor control and rapid sequencing of highly complex vocal musculature. Despite its complexity, most people produce spoken language effortlessly. This is due to activity in distributed neuronal circuitry including cortico-striato-thalamic loops that control speech-motor output. Understanding the neuro-genetic mechanisms that encode these pathways will shed light on how humans can effortlessly and innately use spoken language and could elucidate what goes wrong in speech-language disorders. FOXP2 was the first single gene identified to cause speech and language disorder. Individuals with FOXP2 mutations display a severe speech deficit that also includes receptive and expressive language impairments. The underlying neuro-molecular mechanisms controlled by FOXP2, which will give insight into our capacity for speech-motor control, are only beginning to be unraveled. Recently FOXP2 was found to regulate genes involved in retinoic acid signaling and to modify the cellular response to retinoic acid, a key regulator of brain development. Herein we explore the evidence that FOXP2 and retinoic acid signaling function in the same pathways. We present evidence at molecular, cellular, and behavioral levels that suggests an interplay between FOXP2 and retinoic acid that may be important for fine motor control and speech-motor output. We propose that retinoic acid signaling is an exciting new angle from which to investigate how neurogenetic mechanisms can contribute to the (spoken) language-ready brain.

  17. Predictors of spoken language development following pediatric cochlear implantation.

    Science.gov (United States)

    Boons, Tinne; Brokx, Jan P L; Dhooge, Ingeborg; Frijns, Johan H M; Peeraer, Louis; Vermeulen, Anneke; Wouters, Jan; van Wieringen, Astrid

    2012-01-01

    Although deaf children with cochlear implants (CIs) are able to develop good language skills, the large variability in outcomes remains a significant concern. The first aim of this study was to evaluate language skills in children with CIs to establish benchmarks. The second aim was to make an estimation of the optimal age at implantation to provide maximal opportunities for the child to achieve good language skills afterward. The third aim was to gain more insight into the causes of variability to set recommendations for optimizing the rehabilitation process of prelingually deaf children with CIs. Receptive and expressive language development of 288 children who received CIs by age five was analyzed in a retrospective multicenter study. Outcome measures were language quotients (LQs) on the Reynell Developmental Language Scales and Schlichting Expressive Language Test at 1, 2, and 3 years after implantation. Independent predictive variables were nine child-related, environmental, and auditory factors. A series of multiple regression analyses determined the amount of variance in expressive and receptive language outcomes attributable to each predictor when controlling for the other variables. Simple linear regressions with age at first fitting and independent samples t tests demonstrated that children implanted before the age of two performed significantly better on all tests than children who were implanted at an older age. The mean LQ was 0.78 with an SD of 0.18. A child with an LQ lower than 0.60 (= 0.78-0.18) within 3 years after implantation was labeled as a weak performer compared with other deaf children implanted before the age of two. Contralateral stimulation with a second CI or a hearing aid and the absence of additional disabilities were related to better language outcomes. The effect of environmental factors, comprising multilingualism, parental involvement, and communication mode increased over time. 
Three years after implantation, the total multiple
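The weak-performer cutoff reported above is simple arithmetic on the language quotient; the sketch below uses a hypothetical child to make it concrete.

```python
# A language quotient (LQ) is an age-equivalent score divided by chronological
# age; a child is flagged as a weak performer when the LQ falls more than one
# standard deviation below the group mean (0.78 - 0.18 = 0.60 in the study).
def language_quotient(age_equivalent_months, chronological_age_months):
    return age_equivalent_months / chronological_age_months

def is_weak_performer(lq, group_mean=0.78, group_sd=0.18):
    return lq < group_mean - group_sd

# Hypothetical child: 33-month language level at a chronological age of 60 months.
lq = language_quotient(33, 60)
print(round(lq, 2), is_weak_performer(lq))  # 0.55 True
```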

  18. The interaction of lexical semantics and cohort competition in spoken word recognition: an fMRI study.

    Science.gov (United States)

    Zhuang, Jie; Randall, Billi; Stamatakis, Emmanuel A; Marslen-Wilson, William D; Tyler, Lorraine K

    2011-12-01

    Spoken word recognition involves the activation of multiple word candidates on the basis of the initial speech input--the "cohort"--and selection among these competitors. Selection may be driven primarily by bottom-up acoustic-phonetic inputs or it may be modulated by other aspects of lexical representation, such as a word's meaning [Marslen-Wilson, W. D. Functional parallelism in spoken word-recognition. Cognition, 25, 71-102, 1987]. We examined these potential interactions in an fMRI study by presenting participants with words and pseudowords for lexical decision. In a factorial design, we manipulated (a) cohort competition (high/low competitive cohorts which vary the number of competing word candidates) and (b) the word's semantic properties (high/low imageability). A previous behavioral study [Tyler, L. K., Voice, J. K., & Moss, H. E. The interaction of meaning and sound in spoken word recognition. Psychonomic Bulletin & Review, 7, 320-326, 2000] showed that imageability facilitated word recognition but only for words in high competition cohorts. Here we found greater activity in the left inferior frontal gyrus (BA 45, 47) and the right inferior frontal gyrus (BA 47) with increased cohort competition, an imageability effect in the left posterior middle temporal gyrus/angular gyrus (BA 39), and a significant interaction between imageability and cohort competition in the left posterior superior temporal gyrus/middle temporal gyrus (BA 21, 22). In words with high competition cohorts, high imageability words generated stronger activity than low imageability words, indicating a facilitatory role of imageability in a highly competitive cohort context. For words in low competition cohorts, there was no effect of imageability. These results support the behavioral data in showing that selection processes do not rely solely on bottom-up acoustic-phonetic cues but rather that the semantic properties of candidate words facilitate discrimination between competitors.

  19. ORIGINAL ARTICLES How do doctors learn the spoken language of ...

    African Journals Online (AJOL)

    2009-07-01

    Jul 1, 2009 ... and cultural metaphors of illness as part of language learning. The theory of .... role.21 Even in a military setting, where soldiers learnt Korean or Spanish as part of ... own language – a cross-cultural survey. Brit J Gen Pract ...

  20. Predictors of Spoken Language Development Following Pediatric Cochlear Implantation

    NARCIS (Netherlands)

    Tinne Boons; Jan P. L. Brokx; Ingeborg Dhooge; Johan H. M. Frijns; Louis Peeraer; Anneke Vermeulen; Jan Wouters; Astrid van Wieringen

    2012-01-01

    Objectives: Although deaf children with cochlear implants (CIs) are able to develop good language skills, the large variability in outcomes remains a significant concern. The first aim of this study was to evaluate language skills in children with CIs to establish benchmarks. The second aim was to

  1. Brain basis of phonological awareness for spoken language in children and its disruption in dyslexia.

    Science.gov (United States)

    Kovelman, Ioulia; Norton, Elizabeth S; Christodoulou, Joanna A; Gaab, Nadine; Lieberman, Daniel A; Triantafyllou, Christina; Wolf, Maryanne; Whitfield-Gabrieli, Susan; Gabrieli, John D E

    2012-04-01

    Phonological awareness, knowledge that speech is composed of syllables and phonemes, is critical for learning to read. Phonological awareness precedes and predicts successful transition from language to literacy, and weakness in phonological awareness is a leading cause of dyslexia, but the brain basis of phonological awareness for spoken language in children is unknown. We used functional magnetic resonance imaging to identify the neural correlates of phonological awareness using an auditory word-rhyming task in children who were typical readers or who had dyslexia (ages 7-13) and a younger group of kindergarteners (ages 5-6). Typically developing children, but not children with dyslexia, recruited left dorsolateral prefrontal cortex (DLPFC) when making explicit phonological judgments. Kindergarteners, who were matched to the older children with dyslexia on standardized tests of phonological awareness, also recruited left DLPFC. Left DLPFC may play a critical role in the development of phonological awareness for spoken language critical for reading and in the etiology of dyslexia.

  2. A Descriptive Study of Registers Found in Spoken and Written Communication (A Semantic Analysis)

    Directory of Open Access Journals (Sweden)

    Nurul Hidayah

    2016-07-01

    Full Text Available This research is a descriptive study of registers found in spoken and written communication. The type of this research is descriptive qualitative research. The data of the study are registers in spoken and written communication found in a book entitled "Communicating! Theory and Practice" and on the internet. The data can take the forms of words, phrases, and abbreviations. With regard to the method of data collection, the writer uses the library method as her instrument and relates it to the study of register in spoken and written communication. The data were analyzed using the descriptive method. The registers are classified into formal and informal registers, and the meaning of each register is identified.

  3. Bilinguals Show Weaker Lexical Access during Spoken Sentence Comprehension

    Science.gov (United States)

    Shook, Anthony; Goldrick, Matthew; Engstler, Caroline; Marian, Viorica

    2015-01-01

    When bilinguals process written language, they show delays in accessing lexical items relative to monolinguals. The present study investigated whether this effect extended to spoken language comprehension, examining the processing of sentences with either low or high semantic constraint in both first and second languages. English-German…

  4. Semantic framework for mapping object-oriented model to semantic web languages.

    Science.gov (United States)

    Ježek, Petr; Mouček, Roman

    2015-01-01

    The article deals with and discusses two main approaches in building semantic structures for electrophysiological metadata. It is the use of conventional data structures, repositories, and programming languages on one hand and the use of formal representations of ontologies, known from knowledge representation, such as description logics or semantic web languages on the other hand. Although knowledge engineering offers languages supporting richer semantic means of expression and technologically advanced approaches, conventional data structures and repositories are still popular among developers, administrators and users because of their simplicity, overall intelligibility, and lower demands on technical equipment. The choice of conventional data resources and repositories, however, raises the question of how and where to add semantics that cannot be naturally expressed using them. As one of the possible solutions, this semantics can be added into the structures of the programming language that accesses and processes the underlying data. To support this idea we introduced a software prototype that enables its users to add semantically richer expressions into a Java object-oriented code. This approach does not burden users with additional demands on the programming environment since reflective Java annotations were used as an entry point for these expressions. Moreover, additional semantics need not be written by the programmer directly in the code, but can be collected from non-programmers using a graphic user interface. The mapping that allows the transformation of the semantically enriched Java code into the Semantic Web language OWL was proposed and implemented in a library named the Semantic Framework. This approach was validated by the integration of the Semantic Framework in the EEG/ERP Portal and by the subsequent registration of the EEG/ERP Portal in the Neuroscience Information Framework.
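A loose Python analogue of the idea (decorator-based rather than Java annotations, and not the actual Semantic Framework API) shows how semantic metadata attached to ordinary code can be reflected out as OWL statements; the URIs and property names are hypothetical.

```python
# Attach an OWL class URI to an ordinary class via a decorator, then use
# reflection to emit OWL/Turtle statements from the annotated structure.
def owl_class(uri):
    def wrap(cls):
        cls._owl_uri = uri
        return cls
    return wrap

def to_turtle(cls):
    """Reflect the annotations on a class into OWL statements (Turtle syntax)."""
    lines = [f"<{cls._owl_uri}> a owl:Class ."]
    for attr, rng in getattr(cls, "_owl_properties", {}).items():
        lines.append(
            f"<{cls._owl_uri}#{attr}> a owl:DatatypeProperty ; rdfs:range {rng} ."
        )
    return "\n".join(lines)

@owl_class("http://example.org/eeg#Experiment")
class Experiment:
    # datatype properties and their ranges, declared as plain metadata
    _owl_properties = {"samplingRate": "xsd:integer"}

print(to_turtle(Experiment))
```

The appeal of this pattern is the one named in the abstract: developers keep working with ordinary classes, while the ontology view is generated mechanically from the annotations.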

  5. The interface between spoken and written language: developmental disorders.

    Science.gov (United States)

    Hulme, Charles; Snowling, Margaret J

    2014-01-01

    We review current knowledge about reading development and the origins of difficulties in learning to read. We distinguish between the processes involved in learning to decode print, and the processes involved in reading for meaning (reading comprehension). At a cognitive level, difficulties in learning to read appear to be predominantly caused by deficits in underlying oral language skills. The development of decoding skills appears to depend critically upon phonological language skills, and variations in phoneme awareness, letter-sound knowledge and rapid automatized naming each appear to be causally related to problems in learning to read. Reading comprehension difficulties in contrast appear to be critically dependent on a range of oral language comprehension skills (including vocabulary knowledge and grammatical, morphological and pragmatic skills).

  6. Principal semantic components of language and the measurement of meaning.

    Science.gov (United States)

    Samsonovich, Alexei V; Ascoli, Giorgio A

    2010-06-11

    Metric systems for semantics, or semantic cognitive maps, are allocations of words or other representations in a metric space based on their meaning. Existing methods for semantic mapping, such as Latent Semantic Analysis and Latent Dirichlet Allocation, are based on paradigms involving dissimilarity metrics. They typically do not take into account relations of antonymy and yield a large number of domain-specific semantic dimensions. Here, using a novel self-organization approach, we construct a low-dimensional, context-independent semantic map of natural language that represents simultaneously synonymy and antonymy. Emergent semantics of the map principal components are clearly identifiable: the first three correspond to the meanings of "good/bad" (valence), "calm/excited" (arousal), and "open/closed" (freedom), respectively. The semantic map is sufficiently robust to allow the automated extraction of synonyms and antonyms not originally in the dictionaries used to construct the map and to predict connotation from their coordinates. The map geometric characteristics include a limited number (approximately 4) of statistically significant dimensions, a bimodal distribution of the first component, increasing kurtosis of subsequent (unimodal) components, and a U-shaped maximum-spread planar projection. Both the semantic content and the main geometric features of the map are consistent between dictionaries (Microsoft Word and Princeton's WordNet), among Western languages (English, French, German, and Spanish), and with previously established psychometric measures. By defining the semantics of its dimensions, the constructed map provides a foundational metric system for the quantitative analysis of word meaning. Language can be viewed as a cumulative product of human experiences. Therefore, the extracted principal semantic dimensions may be useful to characterize the general semantic dimensions of the content of mental states. This is a fundamental step toward a
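One way to see how signed synonym/antonym associations can yield an interpretable valence-like axis (a toy eigendecomposition, not the authors' algorithm) is the following sketch; the four-word vocabulary and the association weights are invented.

```python
import numpy as np

# Encode antonymy with signed weights: synonyms get +1, antonyms -1.
# The leading eigenvector of the association matrix then recovers a
# "good/bad" (valence-like) axis on which antonyms sit at opposite ends.
words = ["good", "great", "bad", "awful"]
A = np.array([
    [ 1,  1, -1, -1],   # good
    [ 1,  1, -1, -1],   # great
    [-1, -1,  1,  1],   # bad
    [-1, -1,  1,  1],   # awful
], dtype=float)

vals, vecs = np.linalg.eigh(A)
axis = vecs[:, np.argmax(vals)]              # principal semantic dimension
axis *= np.sign(axis[words.index("good")])   # orient so "good" is positive

for w, v in zip(words, axis):
    print(f"{w:6s} {v:+.2f}")
```

Dissimilarity-only methods such as LSA cannot produce this sign structure, which is the contrast the abstract draws.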

  7. Principal semantic components of language and the measurement of meaning.

    Directory of Open Access Journals (Sweden)

    Alexei V Samsonovich

    Full Text Available Metric systems for semantics, or semantic cognitive maps, are allocations of words or other representations in a metric space based on their meaning. Existing methods for semantic mapping, such as Latent Semantic Analysis and Latent Dirichlet Allocation, are based on paradigms involving dissimilarity metrics. They typically do not take into account relations of antonymy and yield a large number of domain-specific semantic dimensions. Here, using a novel self-organization approach, we construct a low-dimensional, context-independent semantic map of natural language that represents simultaneously synonymy and antonymy. Emergent semantics of the map principal components are clearly identifiable: the first three correspond to the meanings of "good/bad" (valence, "calm/excited" (arousal, and "open/closed" (freedom, respectively. The semantic map is sufficiently robust to allow the automated extraction of synonyms and antonyms not originally in the dictionaries used to construct the map and to predict connotation from their coordinates. The map geometric characteristics include a limited number ( approximately 4 of statistically significant dimensions, a bimodal distribution of the first component, increasing kurtosis of subsequent (unimodal components, and a U-shaped maximum-spread planar projection. Both the semantic content and the main geometric features of the map are consistent between dictionaries (Microsoft Word and Princeton's WordNet, among Western languages (English, French, German, and Spanish, and with previously established psychometric measures. By defining the semantics of its dimensions, the constructed map provides a foundational metric system for the quantitative analysis of word meaning. Language can be viewed as a cumulative product of human experiences. Therefore, the extracted principal semantic dimensions may be useful to characterize the general semantic dimensions of the content of mental states. This is a fundamental step

  8. Loops of Spoken Language in Danish Broadcasting Corporation News

    DEFF Research Database (Denmark)

    le Fevre Jakobsen, Bjarne

    2012-01-01

    The tempo of Danish television news broadcasts has changed markedly over the past 40 years, while the language has essentially always been conservative, and remains so today. The development in the tempo of the broadcasts has gone through a number of phases from a newsreader in a rigid structure...

  9. IMPACT ON THE INDIGENOUS LANGUAGES SPOKEN IN NIGERIA ...

    African Journals Online (AJOL)

    In the face of globalisation, the scale of communication is increasing from being merely .... capital goods and services across national frontiers involving too, political contexts of ... auditory and audiovisual entertainment, the use of English dominates. The language .... manners, entertainment, sports, the legal system, etc.

  10. A Derivational Approach to the Operational Semantics of Functional Languages

    DEFF Research Database (Denmark)

    Biernacka, Malgorzata

    We study the connections between different forms of operational semantics for functional programming languages and we present systematic methods of interderiving reduction semantics, abstract machines and higher-order evaluators. We first consider two methods based on program transformations: a s...

  11. A step beyond local observations with a dialog aware bidirectional GRU network for Spoken Language Understanding

    OpenAIRE

    Vukotic, Vedran; Raymond, Christian; Gravier, Guillaume

    2016-01-01

    International audience; Architectures of Recurrent Neural Networks (RNN) have recently become a very popular choice for Spoken Language Understanding (SLU) problems; however, they represent a big family of different architectures that can furthermore be combined to form more complex neural networks. In this work, we compare different recurrent networks, such as simple Recurrent Neural Networks (RNN), Long Short-Term Memory (LSTM) networks, Gated Recurrent Units (GRU) and their bidirectional versions,...

  12. Grammaticalization and Semantic Maps: Evidence from Artificial Language

    Directory of Open Access Journals (Sweden)

    Remi van Trijp

    2010-01-01

    Semantic maps have offered linguists an appealing and empirically rooted methodology for describing recurrent structural patterns in language development and the multifunctionality of grammatical categories. Although some researchers argue that semantic maps are universal and given, others provide evidence that there are no fixed or universal maps. This paper takes the position that semantic maps are a useful way to visualize the grammatical evolution of a language (particularly the evolution of semantic structuring), but that this grammatical evolution is a consequence of distributed processes whereby language users shape and reshape their language. So it is a challenge to find out what these processes are and whether they indeed generate the kind of semantic maps observed for human languages. This work takes a design stance towards the question of the emergence of linguistic structure and investigates how grammar can be formed in populations of autonomous artificial "agents" that play "language games" with each other about situations they perceive through a sensori-motor embodiment. The experiments reported here investigate whether semantic maps for case markers could emerge through grammaticalization processes without the need for a universal conceptual space.

  13. Spoken Language Activation Alters Subsequent Sign Language Activation in L2 Learners of American Sign Language

    Science.gov (United States)

    Williams, Joshua T.; Newman, Sharlene D.

    2017-01-01

    A large body of literature has characterized unimodal monolingual and bilingual lexicons and how neighborhood density affects lexical access; however there have been relatively fewer studies that generalize these findings to bimodal (M2) second language (L2) learners of sign languages. The goal of the current study was to investigate parallel…

  14. The Attitudes and Motivation of Children towards Learning Rarely Spoken Foreign Languages: A Case Study from Saudi Arabia

    Science.gov (United States)

    Al-Nofaie, Haifa

    2018-01-01

    This article discusses the attitudes and motivations of two Saudi children learning Japanese as a foreign language (hence JFL), a language which is rarely spoken in the country. Studies regarding children's motivation for learning foreign languages that are not widely spread in their contexts in informal settings are scarce. The aim of the study…

  15. Language networks associated with computerized semantic indices.

    Science.gov (United States)

    Pakhomov, Serguei V S; Jones, David T; Knopman, David S

    2015-01-01

    Tests of generative semantic verbal fluency are widely used to study organization and representation of concepts in the human brain. Previous studies demonstrated that clustering and switching behavior during verbal fluency tasks is supported by multiple brain mechanisms associated with semantic memory and executive control. Previous work relied on manual assessments of semantic relatedness between words and grouping of words into semantic clusters. We investigated a computational linguistic approach to measuring the strength of semantic relatedness between words based on latent semantic analysis of word co-occurrences in a subset of a large online encyclopedia. We computed semantic clustering indices and compared them to brain network connectivity measures obtained with task-free fMRI in a sample consisting of healthy participants and those differentially affected by cognitive impairment. We found that semantic clustering indices were associated with brain network connectivity in distinct areas including fronto-temporal, fronto-parietal and fusiform gyrus regions. This study shows that computerized semantic indices complement traditional assessments of verbal fluency to provide a more complete account of the relationship between brain and verbal behavior involved in the organization and retrieval of lexical information from memory. Copyright © 2014 Elsevier Inc. All rights reserved.
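The relatedness measure behind such clustering indices boils down to comparing words by the cosine of their vectors. A minimal sketch, with invented toy vectors standing in for LSA-derived ones, of counting cluster "switches" in a fluency sequence:

```python
import math

# Hypothetical sketch: the word vectors below are invented stand-ins
# for vectors derived by latent semantic analysis of an encyclopedia.
vectors = {
    "dog":   [3.0, 1.0, 0.0],
    "cat":   [2.5, 1.2, 0.1],
    "wolf":  [2.8, 0.9, 0.2],
    "shark": [0.2, 0.1, 3.0],
    "trout": [0.1, 0.3, 2.7],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def count_switches(sequence, threshold=0.5):
    """A 'switch' is a consecutive pair whose similarity falls below the
    threshold, i.e. the speaker jumped to a new semantic cluster."""
    pairs = zip(sequence, sequence[1:])
    return sum(1 for a, b in pairs
               if cosine(vectors[a], vectors[b]) < threshold)

fluency_list = ["dog", "cat", "wolf", "shark", "trout"]
print(count_switches(fluency_list))  # 1 switch: land animals -> fish
```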

  16. The missing foundation in teacher education: Knowledge of the structure of spoken and written language.

    Science.gov (United States)

    Moats, L C

    1994-01-01

    Reading research supports the necessity for directly teaching concepts about linguistic structure to beginning readers and to students with reading and spelling difficulties. In this study, experienced teachers of reading, language arts, and special education were tested to determine if they have the requisite awareness of language elements (e.g., phonemes, morphemes) and of how these elements are represented in writing (e.g., knowledge of sound-symbol correspondences). The results were surprisingly poor, indicating that even motivated and experienced teachers typically understand too little about spoken and written language structure to be able to provide sufficient instruction in these areas. The utility of language structure knowledge for instructional planning, for assessment of student progress, and for remediation of literacy problems is discussed. The teachers participating in the study subsequently took a course focusing on phonemic awareness training, spoken-written language relationships, and careful analysis of spelling and reading behavior in children. At the end of the course, the teachers judged this information to be essential for teaching and advised that it become a prerequisite for certification. Recommendations for requirements and content of teacher education programs are presented.

  17. Phonologic-graphemic transcodifier for Portuguese Language spoken in Brazil (PLB)

    Science.gov (United States)

    Fragadasilva, Francisco Jose; Saotome, Osamu; Deoliveira, Carlos Alberto

    An automatic speech-to-text transformer system, suited to unlimited vocabulary, is presented. The basic acoustic units considered are the allophones of the phonemes of the Portuguese language spoken in Brazil (PLB). The input to the system is a phonetic sequence, from a former step of isolated word recognition of slowly spoken speech. In a first stage, the system eliminates phonetic elements that do not belong to PLB. Using knowledge sources such as phonetics, phonology, orthography, and a PLB-specific lexicon, the output is a sequence of written words, ordered by a probabilistic criterion, that constitutes the set of graphemic possibilities for that input sequence. Pronunciation differences among some regions of Brazil are considered, but only those that cause differences in phonological transcription, because those at the phonetic level are absorbed during the transformation to the phonological level. In the final stage, all possible written words are analyzed from an orthographic and grammatical point of view, to eliminate the incorrect ones.

  18. The Beneficial Role of L1 Spoken Language Skills on Initial L2 Sign Language Learning: Cognitive and Linguistic Predictors of M2L2 Acquisition

    Science.gov (United States)

    Williams, Joshua T.; Darcy, Isabelle; Newman, Sharlene D.

    2017-01-01

    Understanding how language modality (i.e., signed vs. spoken) affects second language outcomes in hearing adults is important both theoretically and pedagogically, as it can determine the specificity of second language (L2) theory and inform how best to teach a language that uses a new modality. The present study investigated which…

  19. Categorical model of structural operational semantics for imperative language

    Directory of Open Access Journals (Sweden)

    William Steingartner

    2016-12-01

    Definition of programming languages consists of the formal definition of syntax and semantics. One of the most popular semantic methods used in various stages of software engineering is structural operational semantics. It describes program behavior in the form of state changes after execution of elementary steps of the program. This feature makes structural operational semantics useful for the implementation of programming languages and also for verification purposes. In our paper we present a new approach to structural operational semantics. We model the behavior of programs in a category of states, where objects are states (an abstraction of computer memory) and morphisms model state changes, i.e., the execution of a program in elementary steps. The advantage of using a categorical model is its exact mathematical structure with many useful proved properties, and its graphical illustration of program behavior as a path, i.e., a composition of morphisms. Our approach is able to accentuate the dynamics of structural operational semantics. For simplicity, we assume that data are intuitively typed. Our model is not only a new model of the structural operational semantics of imperative programming languages but can also serve educational purposes.
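The categorical reading in this abstract, states as objects and elementary execution steps as morphisms composed into a path, can be illustrated with a small sketch. The toy assignment language below is an invented stand-in, not the paper's formalism:

```python
# Minimal sketch of the "category of states" view: a state is a
# variable-to-value mapping, each elementary statement denotes a
# state-to-state function (a morphism), and running a program is the
# composition of those morphisms -- a path through the category.

def assign(var, expr):
    """x := e  --  returns a morphism on states."""
    return lambda state: {**state, var: expr(state)}

def seq(*morphisms):
    """Sequential composition of elementary steps."""
    def run(state):
        for m in morphisms:
            state = m(state)
        return state
    return run

# Program: x := 1; y := x + 2; x := x * y
program = seq(
    assign("x", lambda s: 1),
    assign("y", lambda s: s["x"] + 2),
    assign("x", lambda s: s["x"] * s["y"]),
)

print(program({}))  # {'x': 3, 'y': 3}
```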

  20. Rewriting Logic Semantics of a Plan Execution Language

    Science.gov (United States)

    Dowek, Gilles; Munoz, Cesar A.; Rocha, Camilo

    2009-01-01

    The Plan Execution Interchange Language (PLEXIL) is a synchronous language developed by NASA to support autonomous spacecraft operations. In this paper, we propose a rewriting logic semantics of PLEXIL in Maude, a high-performance logical engine. The rewriting logic semantics is by itself a formal interpreter of the language and can be used as a semantic benchmark for the implementation of PLEXIL executives. The implementation in Maude has the additional benefit of making available to PLEXIL designers and developers all the formal analysis and verification tools provided by Maude. The formalization of the PLEXIL semantics in rewriting logic poses an interesting challenge due to the synchronous nature of the language and the prioritized rules defining its semantics. To overcome this difficulty, we propose a general procedure for simulating synchronous set relations in rewriting logic that is sound and, for deterministic relations, complete. We also report on the finding of two issues at the design level of the original PLEXIL semantics that were identified with the help of the executable specification in Maude.
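The synchronous-relation simulation the abstract mentions can be approximated in a few lines: evaluate every enabled rule against the unchanged pre-step state, then commit all updates at once. The sketch below uses invented toy rules and plain Python functions rather than Maude rewrite rules:

```python
# Hedged sketch of a synchronous step on top of an interleaving engine:
# all rules read the SAME pre-step state; their updates commit together.
# (Toy rules; not the actual PLEXIL semantics or its Maude encoding.)

def synchronous_step(state, rules):
    updates = {}
    for rule in rules:          # each rule: state -> partial update or None
        delta = rule(state)
        if delta:
            updates.update(delta)
    return {**state, **updates}

# Two toy "nodes" reacting to the same tick simultaneously.
rules = [
    lambda s: {"a": s["a"] + 1} if s["go"] else None,
    lambda s: {"b": s["a"] * 2} if s["go"] else None,  # reads pre-step "a"
]

state = {"a": 1, "b": 0, "go": True}
state = synchronous_step(state, rules)
print(state)  # b is computed from the OLD value of a: {'a': 2, 'b': 2, 'go': True}
```

The point of the construction is visible in the output: had the rules been applied one after another, `b` would have seen the already-incremented `a`.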

  1. Spoken language development in oral preschool children with permanent childhood deafness.

    Science.gov (United States)

    Sarant, Julia Z; Holt, Colleen M; Dowell, Richard C; Rickards, Field W; Blamey, Peter J

    2009-01-01

    This article documented spoken language outcomes for preschool children with hearing loss and examined the relationships between language abilities and characteristics of children such as degree of hearing loss, cognitive abilities, age at entry to early intervention, and parent involvement in children's intervention programs. Participants were evaluated using a combination of the Child Development Inventory, the Peabody Picture Vocabulary Test, and the Preschool Clinical Evaluation of Language Fundamentals depending on their age at the time of assessment. Maternal education, cognitive ability, and family involvement were also measured. Over half of the children who participated in this study had poor language outcomes overall. No significant differences were found in language outcomes on any of the measures for children who were diagnosed early and those diagnosed later. Multiple regression analyses showed that family participation, degree of hearing loss, and cognitive ability significantly predicted language outcomes and together accounted for almost 60% of the variance in scores. This article highlights the importance of family participation in intervention programs to enable children to achieve optimal language outcomes. Further work may clarify the effects of early diagnosis on language outcomes for preschool children.

  2. Spoken language achieves robustness and evolvability by exploiting degeneracy and neutrality.

    Science.gov (United States)

    Winter, Bodo

    2014-10-01

    As with biological systems, spoken languages are strikingly robust against perturbations. This paper shows that languages achieve robustness in a way that is highly similar to many biological systems. For example, speech sounds are encoded via multiple acoustically diverse, temporally distributed and functionally redundant cues, characteristics that bear similarities to what biologists call "degeneracy". Speech is furthermore adequately characterized by neutrality, with many different tongue configurations leading to similar acoustic outputs, and different acoustic variants understood as the same by recipients. This highlights the presence of a large neutral network of acoustic neighbors for every speech sound. Such neutrality ensures that a steady backdrop of variation can be maintained without impeding communication, assuring that there is "fodder" for subsequent evolution. Thus, studying linguistic robustness is not only important for understanding how linguistic systems maintain their functioning upon the background of noise, but also for understanding the preconditions for language evolution. © 2014 WILEY Periodicals, Inc.

  3. Positive Emotional Language in the Final Words Spoken Directly Before Execution.

    Science.gov (United States)

    Hirschmüller, Sarah; Egloff, Boris

    2015-01-01

    How do individuals emotionally cope with the imminent real-world salience of mortality? DeWall and Baumeister as well as Kashdan and colleagues previously provided support that an increased use of positive emotion words serves as a way to protect and defend against mortality salience of one's own contemplated death. Although these studies provide important insights into the psychological dynamics of mortality salience, it remains an open question how individuals cope with the immense threat of mortality prior to their imminent actual death. In the present research, we therefore analyzed positivity in the final words spoken immediately before execution by 407 death row inmates in Texas. By using computerized quantitative text analysis as an objective measure of emotional language use, our results showed that the final words contained a significantly higher proportion of positive than negative emotion words. This emotional positivity was significantly higher than (a) positive emotion word usage base rates in spoken and written materials and (b) positive emotional language use with regard to contemplated death and attempted or actual suicide. Additional analyses showed that emotional positivity in final statements was associated with a greater frequency of language use that was indicative of self-references, social orientation, and present-oriented time focus as well as with fewer instances of cognitive-processing, past-oriented, and death-related word use. Taken together, our findings offer new insights into how individuals cope with the imminent real-world salience of mortality.
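The computerized quantitative text analysis used in such studies is, at its core, dictionary-based word counting. A toy sketch with invented word lists (not the study's actual emotion dictionaries):

```python
# Toy LIWC-style emotion word counting. The word sets are invented
# stand-ins for the validated dictionaries used in the study.
positive = {"love", "thank", "peace", "hope"}
negative = {"fear", "hate", "pain"}

def emotion_proportions(text):
    """Return (positive, negative) emotion-word proportions of a text."""
    tokens = [t.strip(".,") for t in text.lower().split()]
    n = len(tokens)
    pos = sum(1 for t in tokens if t in positive)
    neg = sum(1 for t in tokens if t in negative)
    return pos / n, neg / n

pos_rate, neg_rate = emotion_proportions(
    "I love you all. Peace and hope, no fear.")
print(pos_rate > neg_rate)  # True
```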

  4. Activating gender stereotypes during online spoken language processing: evidence from Visual World Eye Tracking.

    Science.gov (United States)

    Pyykkönen, Pirita; Hyönä, Jukka; van Gompel, Roger P G

    2010-01-01

    This study used the visual world eye-tracking method to investigate activation of general world knowledge related to gender-stereotypical role names in online spoken language comprehension in Finnish. The results showed that listeners activated gender stereotypes elaboratively in story contexts where this information was not needed to build coherence. Furthermore, listeners made additional inferences based on gender stereotypes to revise an already established coherence relation. Both results are consistent with mental models theory (e.g., Garnham, 2001). They are harder to explain by the minimalist account (McKoon & Ratcliff, 1992) which suggests that people limit inferences to those needed to establish coherence in discourse.

  5. Students who are deaf and hard of hearing and use sign language: considerations and strategies for developing spoken language and literacy skills.

    Science.gov (United States)

    Nussbaum, Debra; Waddy-Smith, Bettie; Doyle, Jane

    2012-11-01

    There is a core body of knowledge, experience, and skills integral to facilitating auditory, speech, and spoken language development when working with the general population of students who are deaf and hard of hearing. There are additional issues, strategies, and challenges inherent in speech habilitation/rehabilitation practices essential to the population of deaf and hard of hearing students who also use sign language. This article will highlight philosophical and practical considerations related to practices used to facilitate spoken language development and associated literacy skills for children and adolescents who sign. It will discuss considerations for planning and implementing practices that acknowledge and utilize a student's abilities in sign language, and address how to link these skills to developing and using spoken language. Included will be considerations for children from early childhood through high school with a broad range of auditory access, language, and communication characteristics.

  6. The Varieties of Programming Language Semantics (and Their Uses)

    DEFF Research Database (Denmark)

    Mosses, Peter David

    2001-01-01

    Formal descriptions of syntax are quite popular: regular and context-free grammars have become accepted as useful for documenting the syntax of programming languages, as well as for generating efficient parsers; attribute grammars allow parsing to be linked with typechecking and code generation; and regular expressions are extensively used for searching and transforming text. In contrast, formal semantic descriptions are widely regarded as being of interest only to theoreticians. This paper surveys the main frameworks available for describing the dynamic semantics of programming languages...

  7. Relaxed Operational Semantics of Concurrent Programming Languages

    Directory of Open Access Journals (Sweden)

    Gustavo Petri

    2012-08-01

    We propose a novel, operational framework to formally describe the semantics of concurrent programs running within the context of a relaxed memory model. Our framework features a "temporary store" where the memory operations issued by the threads are recorded, in program order. A memory model then specifies the conditions under which a pending operation from this sequence is allowed to be globally performed, possibly out of order. The memory model also involves a "write grain," accounting for architectures where a thread may read a write that is not yet globally visible. Our formal model is supported by a software simulator, allowing us to run litmus tests in our semantics.
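The "temporary store" idea resembles the familiar store-buffer picture of TSO-like memory models, which can be sketched in a few lines. This is an illustrative toy, not the paper's formal framework:

```python
from collections import deque

# Illustrative TSO-style store buffers: writes enter a per-thread FIFO
# "temporary store" in program order; a thread reads its own newest
# pending write before global memory; buffered writes drain to memory
# later, possibly long after the issuing instruction executed.

memory = {"x": 0, "y": 0}
buffers = {1: deque(), 2: deque()}

def write(tid, var, val):
    buffers[tid].append((var, val))        # recorded in program order

def read(tid, var):
    for v, val in reversed(buffers[tid]):  # own pending writes win
        if v == var:
            return val
    return memory[var]

def flush_one(tid):
    var, val = buffers[tid].popleft()      # oldest write is globally performed
    memory[var] = val

write(1, "x", 1)
r1 = read(1, "x")   # thread 1 sees its own buffered write: 1
r2 = read(2, "x")   # thread 2 still sees global memory: 0
flush_one(1)
r3 = read(2, "x")   # after the write is globally performed: 1
print(r1, r2, r3)   # 1 0 1
```

Litmus tests in the relaxed-memory literature probe exactly such windows where `r1 != r2`.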

  8. LAIR: A Language for Automated Semantics-Aware Text Sanitization based on Frame Semantics

    DEFF Research Database (Denmark)

    Hedegaard, Steffen; Houen, Søren; Simonsen, Jakob Grue

    2009-01-01

    We present \\lair{}: A domain-specific language that enables users to specify actions to be taken upon meeting specific semantic frames in a text, in particular to rephrase and redact the textual content. While \\lair{} presupposes superficial knowledge of frames and frame semantics, it requires on...... with automated redaction of web pages for subjectively undesirable content; initial experiments suggest that using a small language based on semantic recognition of undesirable terms can be highly useful as a supplement to traditional methods of text sanitization.......We present \\lair{}: A domain-specific language that enables users to specify actions to be taken upon meeting specific semantic frames in a text, in particular to rephrase and redact the textual content. While \\lair{} presupposes superficial knowledge of frames and frame semantics, it requires only...... limited prior programming experience. It neither contain scripting or I/O primitives, nor does it contain general loop constructions and is not Turing-complete. We have implemented a \\lair{} compiler and integrated it in a pipeline for automated redaction of web pages. We detail our experience...

  9. The effect of written text on comprehension of spoken English as a foreign language.

    Science.gov (United States)

    Diao, Yali; Chandler, Paul; Sweller, John

    2007-01-01

    Based on cognitive load theory, this study investigated the effect of simultaneous written presentations on comprehension of spoken English as a foreign language. Learners' language comprehension was compared while they used 3 instructional formats: listening with auditory materials only, listening with a full, written script, and listening with simultaneous subtitled text. Listening with the presence of a script and subtitles led to better understanding of the scripted and subtitled passage but poorer performance on a subsequent auditory passage than listening with the auditory materials only. These findings indicated that where the intention was learning to listen, the use of a full script or subtitles had detrimental effects on the construction and automation of listening comprehension schemas.

  10. Graph Transformation Semantics for a QVT Language

    NARCIS (Netherlands)

    Rensink, Arend; Nederpel, Ronald; Bruni, Roberto; Varró, Dániel

    It has been claimed by many in the graph transformation community that model transformation, as understood in the context of Model Driven Architecture, can be seen as an application of graph transformation. In this paper we substantiate this claim by giving a graph transformation-based semantics to

  11. Resting-state low-frequency fluctuations reflect individual differences in spoken language learning

    Science.gov (United States)

    Deng, Zhizhou; Chandrasekaran, Bharath; Wang, Suiping; Wong, Patrick C.M.

    2016-01-01

    A major challenge in language learning studies is to identify objective, pre-training predictors of success. Variation in the low-frequency fluctuations (LFFs) of spontaneous brain activity measured by resting-state functional magnetic resonance imaging (RS-fMRI) has been found to reflect individual differences in cognitive measures. In the present study, we aimed to investigate the extent to which initial spontaneous brain activity is related to individual differences in spoken language learning. We acquired RS-fMRI data and subsequently trained participants on a sound-to-word learning paradigm in which they learned to use foreign pitch patterns (from Mandarin Chinese) to signal word meaning. We performed amplitude of spontaneous low-frequency fluctuation (ALFF) analysis, graph theory-based analysis, and independent component analysis (ICA) to identify functional components of the LFFs in the resting-state. First, we examined the ALFF as a regional measure and showed that regional ALFFs in the left superior temporal gyrus were positively correlated with learning performance, whereas ALFFs in the default mode network (DMN) regions were negatively correlated with learning performance. Furthermore, the graph theory-based analysis indicated that the degree and local efficiency of the left superior temporal gyrus were positively correlated with learning performance. Finally, the default mode network and several task-positive resting-state networks (RSNs) were identified via the ICA. The “competition” (i.e., negative correlation) between the DMN and the dorsal attention network was negatively correlated with learning performance. Our results demonstrate that a) spontaneous brain activity can predict future language learning outcome without prior hypotheses (e.g., selection of regions of interest – ROIs) and b) both regional dynamics and network-level interactions in the resting brain can account for individual differences in future spoken language learning success

  12. Resting-state low-frequency fluctuations reflect individual differences in spoken language learning.

    Science.gov (United States)

    Deng, Zhizhou; Chandrasekaran, Bharath; Wang, Suiping; Wong, Patrick C M

    2016-03-01

    A major challenge in language learning studies is to identify objective, pre-training predictors of success. Variation in the low-frequency fluctuations (LFFs) of spontaneous brain activity measured by resting-state functional magnetic resonance imaging (RS-fMRI) has been found to reflect individual differences in cognitive measures. In the present study, we aimed to investigate the extent to which initial spontaneous brain activity is related to individual differences in spoken language learning. We acquired RS-fMRI data and subsequently trained participants on a sound-to-word learning paradigm in which they learned to use foreign pitch patterns (from Mandarin Chinese) to signal word meaning. We performed amplitude of spontaneous low-frequency fluctuation (ALFF) analysis, graph theory-based analysis, and independent component analysis (ICA) to identify functional components of the LFFs in the resting-state. First, we examined the ALFF as a regional measure and showed that regional ALFFs in the left superior temporal gyrus were positively correlated with learning performance, whereas ALFFs in the default mode network (DMN) regions were negatively correlated with learning performance. Furthermore, the graph theory-based analysis indicated that the degree and local efficiency of the left superior temporal gyrus were positively correlated with learning performance. Finally, the default mode network and several task-positive resting-state networks (RSNs) were identified via the ICA. The "competition" (i.e., negative correlation) between the DMN and the dorsal attention network was negatively correlated with learning performance. Our results demonstrate that a) spontaneous brain activity can predict future language learning outcome without prior hypotheses (e.g., selection of regions of interest--ROIs) and b) both regional dynamics and network-level interactions in the resting brain can account for individual differences in future spoken language learning success

  13. Semantic Models of Sentences with Verbs of Motion in Standard Language and in Scientific Language Used in Biology

    Directory of Open Access Journals (Sweden)

    Vita Banionytė

    2016-06-01

    The semantic models of sentences with verbs of motion in German standard language and in the scientific language used in biology are analyzed in the article. In its theoretical part it is affirmed that the article is based on the semantic theory of the sentence. This theory, in its turn, is grounded on the correlation of semantic predicative classes and semantic roles. The combination of semantic predicative classes and semantic roles is expressed by the main semantic formula: the proposition. In its practical part the differences between the semantic models of standard language and of the scientific language used in biology are explained. While modelling sentences with verbs of motion, two groups of semantic models of sentences are singled out: that of action (Handlung) and that of process (Vorgang). The analysis shows that the semantic models of sentences with semantic action predicatives dominate in standard-language texts, while the semantic models of sentences with semantic process predicatives dominate in the texts of the scientific language used in biology. The differences in how the doer and direction are expressed in standard and in scientific language are clearly seen, and the semantic cases (Agens, Patiens, Direktiv1) help to determine that. It is observed that in scientific texts with a high level of specialization (biology science), in contrast to popular scientific literature, models of sentences with verbs of motion are seldom found; they are substituted by denominative constructions. In conclusion it is shown that this analysis can be important in methodology, especially in planning material for teaching professional-scientific language.

  14. Interaction between episodic and semantic memory networks in the acquisition and consolidation of novel spoken words.

    Science.gov (United States)

    Takashima, Atsuko; Bakker, Iske; van Hell, Janet G; Janzen, Gabriele; McQueen, James M

    2017-04-01

    When a novel word is learned, its memory representation is thought to undergo a process of consolidation and integration. In this study, we tested whether the neural representations of novel words change as a function of consolidation by observing brain activation patterns just after learning and again after a delay of one week. Words learned with meanings were remembered better than those learned without meanings. Both episodic (hippocampus-dependent) and semantic (dependent on distributed neocortical areas) memory systems were utilised during recognition of the novel words. The extent to which the two systems were involved changed as a function of time and the amount of associated information, with more involvement of both systems for the meaningful words than for the form-only words after the one-week delay. These results suggest that the reason the meaningful words were remembered better is that their retrieval can benefit more from these two complementary memory systems. Copyright © 2016 Elsevier Inc. All rights reserved.

  15. Neural organization of linguistic short-term memory is sensory modality-dependent: evidence from signed and spoken language.

    Science.gov (United States)

    Pa, Judy; Wilson, Stephen M; Pickell, Herbert; Bellugi, Ursula; Hickok, Gregory

    2008-12-01

    Despite decades of research, there is still disagreement regarding the nature of the information that is maintained in linguistic short-term memory (STM). Some authors argue for abstract phonological codes, whereas others argue for more general sensory traces. We assess these possibilities by investigating linguistic STM in two distinct sensory-motor modalities, spoken and signed language. Hearing bilingual participants (native in English and American Sign Language) performed equivalent STM tasks in both languages during functional magnetic resonance imaging. Distinct, sensory-specific activations were seen during the maintenance phase of the task for spoken versus signed language. These regions have been previously shown to respond to nonlinguistic sensory stimulation, suggesting that linguistic STM tasks recruit sensory-specific networks. However, maintenance-phase activations common to the two languages were also observed, implying some form of common process. We conclude that linguistic STM involves sensory-dependent neural networks, but suggest that sensory-independent neural networks may also exist.

  16. The relation of the number of languages spoken to performance in different cognitive abilities in old age.

    Science.gov (United States)

    Ihle, Andreas; Oris, Michel; Fagot, Delphine; Kliegel, Matthias

    2016-12-01

    Findings on the association of speaking different languages with cognitive functioning in old age are inconsistent and inconclusive so far. Therefore, the present study set out to investigate the relation of the number of languages spoken to cognitive performance and its interplay with several other markers of cognitive reserve in a large sample of older adults. Two thousand eight hundred and twelve older adults served as sample for the present study. Psychometric tests on verbal abilities, basic processing speed, and cognitive flexibility were administered. In addition, individuals were interviewed on their different languages spoken on a regular basis, educational attainment, occupation, and engaging in different activities throughout adulthood. Higher number of languages regularly spoken was significantly associated with better performance in verbal abilities and processing speed, but unrelated to cognitive flexibility. Regression analyses showed that the number of languages spoken predicted cognitive performance over and above leisure activities/physical demand of job/gainful activity as respective additional predictor, but not over and above educational attainment/cognitive level of job as respective additional predictor. There was no significant moderation of the association of the number of languages spoken with cognitive performance in any model. Present data suggest that speaking different languages on a regular basis may additionally contribute to the build-up of cognitive reserve in old age. Yet, this may not be universal, but linked to verbal abilities and basic cognitive processing speed. Moreover, it may be dependent on other types of cognitive stimulation that individuals also engaged in during their life course.

  17. Effects of early auditory experience on the spoken language of deaf children at 3 years of age.

    Science.gov (United States)

    Nicholas, Johanna Grant; Geers, Ann E

    2006-06-01

    By age 3, typically developing children have achieved extensive vocabulary and syntax skills that facilitate both cognitive and social development. Substantial delays in spoken language acquisition have been documented for children with severe to profound deafness, even those with auditory oral training and early hearing aid use. This study documents the spoken language skills achieved by orally educated 3-yr-olds whose profound hearing loss was identified and hearing aids fitted between 1 and 30 mo of age and who received a cochlear implant between 12 and 38 mo of age. The purpose of the analysis was to examine the effects of age, duration, and type of early auditory experience on spoken language competence at age 3.5 yr. The spoken language skills of 76 children who had used a cochlear implant for at least 7 mo were evaluated via standardized 30-minute language sample analysis, a parent-completed vocabulary checklist, and a teacher language-rating scale. The children were recruited from and enrolled in oral education programs or therapy practices across the United States. Inclusion criteria included presumed deaf since birth, English the primary language of the home, no other known conditions that interfere with speech/language development, enrolled in programs using oral education methods, and no known problems with the cochlear implant lasting more than 30 days. Strong correlations were obtained among all language measures. Therefore, principal components analysis was used to derive a single Language Factor score for each child. A number of possible predictors of language outcome were examined, including age at identification and intervention with a hearing aid, duration of use of a hearing aid, pre-implant pure-tone average (PTA) threshold with a hearing aid, PTA threshold with a cochlear implant, and duration of use of a cochlear implant/age at implantation (the last two variables were practically identical because all children were tested between 40 and 44
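Deriving a single factor score from strongly correlated measures, as described in this record, can be sketched with a principal components analysis; the children, measure names, and values below are hypothetical, not the study's data.

```python
import numpy as np

def language_factor_scores(measures):
    """Derive one factor score per child from correlated language
    measures via the first principal component of the data."""
    X = np.asarray(measures, dtype=float)
    # Standardize each measure (column) to mean 0, SD 1.
    Z = (X - X.mean(axis=0)) / X.std(axis=0)
    # SVD of the standardized data; the first right-singular vector
    # holds the loading pattern of the first principal component.
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    scores = Z @ Vt[0]
    # Orient scores so that higher means better performance,
    # assuming all measures are scored in the same direction.
    if Vt[0].sum() < 0:
        scores = -scores
    return scores

# Hypothetical scores on three measures (language sample,
# vocabulary checklist, teacher rating) for five children.
data = [[85, 90, 3], [100, 105, 4], [70, 72, 2],
        [110, 118, 5], [95, 98, 4]]
print(language_factor_scores(data))
```

Because the columns are standardized first, the resulting scores are centered at zero, and a child who is highest on every measure receives the highest factor score.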

  18. Cochlear implants and spoken language processing abilities: review and assessment of the literature.

    Science.gov (United States)

    Peterson, Nathaniel R; Pisoni, David B; Miyamoto, Richard T

    2010-01-01

    Cochlear implants (CIs) process sounds electronically and then transmit electric stimulation to the cochlea of individuals with sensorineural deafness, restoring some sensation of auditory perception. Many congenitally deaf CI recipients achieve a high degree of accuracy in speech perception and develop near-normal language skills. Post-lingually deafened implant recipients often regain the ability to understand and use spoken language with or without the aid of visual input (i.e. lip reading). However, there is wide variation in individual outcomes following cochlear implantation, and some CI recipients never develop useable speech and oral language skills. The causes of this enormous variation in outcomes are only partly understood at the present time. The variables most strongly associated with language outcomes are age at implantation and mode of communication in rehabilitation. Thus, some of the more important factors determining success of cochlear implantation are broadly related to neural plasticity that appears to be transiently present in deaf individuals. In this article we review the expected outcomes of cochlear implantation, potential predictors of those outcomes, the basic science regarding critical and sensitive periods, and several new research directions in the field of cochlear implantation.

  19. Processing lexical semantic and syntactic information in first and second language: fMRI evidence from German and Russian.

    Science.gov (United States)

    Rüschemeyer, Shirley-Ann; Fiebach, Christian J; Kempe, Vera; Friederici, Angela D

    2005-06-01

    We introduce two experiments that explored syntactic and semantic processing of spoken sentences by native and non-native speakers. In the first experiment, the neural substrates corresponding to detection of syntactic and semantic violations were determined in native speakers of two typologically different languages using functional magnetic resonance imaging (fMRI). The results show that the underlying neural response of participants to stimuli across different native languages is quite similar. In the second experiment, we investigated how non-native speakers of a language process the same stimuli presented in the first experiment. First, the results show a more similar pattern of increased activation between native and non-native speakers in response to semantic violations than to syntactic violations. Second, the non-native speakers were observed to employ specific portions of the frontotemporal language network differently from those employed by native speakers. These regions included the inferior frontal gyrus (IFG), superior temporal gyrus (STG), and subcortical structures of the basal ganglia.

  20. Semantic similarity from natural language and ontology analysis

    CERN Document Server

    Harispe, Sébastien; Janaqi, Stefan

    2015-01-01

    Artificial Intelligence federates numerous scientific fields in the aim of developing machines able to assist human operators performing complex processing tasks, most of which demand high cognitive skills (e.g. learning or decision processes). Central to this quest is giving machines the ability to estimate the likeness or similarity between things in the way human beings estimate the similarity between stimuli. In this context, this book focuses on semantic measures: approaches designed for comparing semantic entities such as units of language, e.g. words, sentences, or concepts and instances def
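As an illustration of the kind of semantic measure the book surveys, here is a minimal distributional sketch: words represented as co-occurrence count vectors and compared by cosine similarity. The vectors and context labels are invented for the example.

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two count vectors:
    1.0 = identical direction, 0.0 = no shared contexts."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Toy co-occurrence counts over three contexts (drink, purr, bark).
vectors = {
    "cat": [1, 8, 0],
    "dog": [2, 0, 9],
    "tea": [9, 0, 0],
}
print(cosine_similarity(vectors["cat"], vectors["dog"]))
print(cosine_similarity(vectors["dog"], vectors["tea"]))
```

Real semantic measures replace the toy counts with corpus statistics or ontology structure, but the comparison step has this shape.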

  1. Rethinking spoken fluency

    OpenAIRE

    McCarthy, Michael

    2009-01-01

    This article re-examines the notion of spoken fluency. Fluent and fluency are terms commonly used in everyday, lay language, and fluency, or lack of it, has social consequences. The article reviews the main approaches to understanding and measuring spoken fluency, suggests that spoken fluency is best understood as an interactive achievement, and offers the metaphor of ‘confluence’ to replace the term fluency. Many measures of spoken fluency are internal and monologue-based, whereas evidence...

  2. Semantic structures advances in natural language processing

    CERN Document Server

    Waltz, David L

    2014-01-01

    Natural language understanding is central to the goals of artificial intelligence. Any truly intelligent machine must be capable of carrying on a conversation: dialogue, particularly clarification dialogue, is essential if we are to avoid disasters caused by the misunderstanding of the intelligent interactive systems of the future. This book is an interim report on the grand enterprise of devising a machine that can use natural language as fluently as a human. What has really been achieved since this goal was first formulated in Turing's famous test? What obstacles still need to be overcome?

  3. Young children make their gestural communication systems more language-like: segmentation and linearization of semantic elements in motion events.

    Science.gov (United States)

    Clay, Zanna; Pople, Sally; Hood, Bruce; Kita, Sotaro

    2014-08-01

    Research on Nicaraguan Sign Language, created by deaf children, has suggested that young children use gestures to segment the semantic elements of events and linearize them in ways similar to those used in signed and spoken languages. However, it is unclear whether this is due to children's learning processes or to a more general effect of iterative learning. We investigated whether typically developing children, without iterative learning, segment and linearize information. Gestures produced in the absence of speech to express a motion event were examined in 4-year-olds, 12-year-olds, and adults (all native English speakers). We compared the proportions of gestural expressions that segmented semantic elements into linear sequences and that encoded them simultaneously. Compared with adolescents and adults, children reshaped the holistic stimuli by segmenting and recombining their semantic features into linearized sequences. A control task on recognition memory ruled out the possibility that this was due to different event perception or memory. Young children spontaneously bring fundamental properties of language into their communication system. © The Author(s) 2014.

  4. Structural-semantic characteristic of phraseologisms in modern German language

    Directory of Open Access Journals (Sweden)

    Abramova Natalya Viktorovna

    2015-03-01

    Full Text Available The article examines the structural and semantic characteristics of phraseological units in modern German. It clarifies the concept of the “idiom” and surveys the various classifications of phraseological units that linguists have proposed for German. The classification of B. Fleischer is examined in detail; it distinguishes the following types of phraseological units: nominative collocations, communication idioms, and phrasal templates. V.V. Vinogradov classified phraseological units according to their degree of semantic fusion, identifying three major types: phraseological seams, phraseological unities and phraseological (non-free) combinations. M.D. Stepanova and I.I. Chernyshev developed a structural-semantic classification consisting of three groups: phraseological units, phraseological combinations, and phraseological expressions. E. Agricola singled out a special group of phraseological combinations: stable phrases. H. Burger classifies idioms according to their function in the communication process: reference idioms, structural phraseological units, and communication idioms. Each classification is illustrated with vivid examples that characterize the structure and semantics of phraseological units of modern German.

  5. Objects as closures: Abstract semantics of object oriented languages

    Science.gov (United States)

    Reddy, Uday S.

    1989-01-01

    We discuss denotational semantics of object-oriented languages, using the concept of closure widely used in (semi) functional programming to encapsulate side effects. It is shown that this denotational framework is adequate to explain classes, instantiation, and inheritance in the style of Simula as well as SMALLTALK-80. This framework is then compared with that of Kamin, in his recent denotational definition of SMALLTALK-80, and the implications of the differences between the two approaches are discussed.
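The objects-as-closures idea can be sketched in Python rather than a denotational metalanguage: an "object" is a function closing over its private state, instantiation builds the closure, and inheritance wraps the parent's message dispatch. The counter example and message names are invented for illustration.

```python
def make_counter(start=0):
    """A 'class': instantiation returns a closure that maps
    messages to behavior, with `count` as encapsulated state."""
    state = {"count": start}

    def dispatch(message, *args):
        if message == "increment":
            state["count"] += 1
            return state["count"]
        if message == "value":
            return state["count"]
        raise AttributeError(f"unknown message: {message}")
    return dispatch

def make_step_counter(start=0, step=2):
    """'Subclass': overrides `increment`, delegates other
    messages to the parent closure (inheritance)."""
    parent = make_counter(start)

    def dispatch(message, *args):
        if message == "increment":
            for _ in range(step):
                parent("increment")
            return parent("value")
        return parent(message, *args)  # inherited behavior
    return dispatch

c = make_counter()
c("increment"); c("increment")
print(c("value"))            # 2
s = make_step_counter(step=3)
s("increment")
print(s("value"))            # 3
```

The state dictionary is reachable only through the closure, which is exactly the side-effect encapsulation the denotational account relies on.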

  6. Objects as closures - Abstract semantics of object oriented languages

    Science.gov (United States)

    Reddy, Uday S.

    1988-01-01

    The denotational semantics of object-oriented languages is discussed using the concept of closure widely used in (semi) functional programming to encapsulate side effects. It is shown that this denotational framework is adequate to explain classes, instantiation, and inheritance in the style of Simula as well as SMALLTALK-80. This framework is then compared with that of Kamin (1988), in his recent denotational definition of SMALLTALK-80, and the implications of the differences between the two approaches are discussed.

  7. Foreign body aspiration and language spoken at home: 10-year review.

    Science.gov (United States)

    Choroomi, S; Curotta, J

    2011-07-01

    To review foreign body aspiration cases encountered over a 10-year period in a tertiary paediatric hospital, and to assess correlation between foreign body type and language spoken at home. Retrospective chart review of all children undergoing direct laryngobronchoscopy for foreign body aspiration over a 10-year period. Age, sex, foreign body type, complications, hospital stay and home language were analysed. At direct laryngobronchoscopy, 132 children had foreign body aspiration (male:female ratio 1.31:1; mean age 32 months (2.67 years)). Mean hospital stay was 2.0 days. Foreign bodies most commonly comprised food matter (53/132; 40.1 per cent), followed by non-food matter (44/132; 33.33 per cent), a negative endoscopy (11/132; 8.33 per cent) and unknown composition (24/132; 18.2 per cent). Most parents spoke English (92/132, 69.7 per cent; vs non-English-speaking 40/132, 30.3 per cent), but non-English-speaking patients had disproportionately more food foreign bodies, and significantly more nut aspirations (p = 0.0065). Results constitute level 2b evidence. Patients from non-English speaking backgrounds had a significantly higher incidence of food (particularly nut) aspiration. Awareness-raising and public education is needed in relevant communities to prevent certain foods, particularly nuts, being given to children too young to chew and swallow them adequately.
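A group difference like the one reported above (nut aspiration by home language, p = 0.0065) is typically assessed with a 2×2 contingency test. The sketch below uses Pearson's chi-square with the 1-degree-of-freedom tail computed from `math.erfc`; the counts are hypothetical, not the study's data.

```python
import math

def chi_square_2x2(a, b, c, d):
    """Pearson chi-square (1 df, no continuity correction) for the
    2x2 table [[a, b], [c, d]]; returns (statistic, p-value)."""
    n = a + b + c + d
    stat = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    # For 1 df, P(X > stat) = erfc(sqrt(stat / 2)).
    p = math.erfc(math.sqrt(stat / 2))
    return stat, p

# Hypothetical counts: nut vs other foreign bodies, by home language.
#                nut  other
# English          8     84
# non-English     12     28
stat, p = chi_square_2x2(8, 84, 12, 28)
print(round(stat, 2), round(p, 4))
```

With these invented counts the nut proportion is much higher in the non-English group, so the statistic is large and the p-value small, mirroring the direction of the reported finding.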

  8. Building a semantic search engine with games and crowdsourcing

    OpenAIRE

    Wieser, Christoph

    2014-01-01

    Semantic search engines aim at improving conventional search with semantic information, or meta-data, on the data searched for and/or on the searchers. So far, approaches to semantic search exploit characteristics of the searchers, such as age, education, or spoken language, for selecting and/or ranking search results. Such data make it possible to build a semantic search engine as an extension of a conventional search engine. The crawlers of well-established search engines like Google, Yahoo! or Bing ...

  9. Language configurations of degree-related denotations in the spoken production of a group of Colombian EFL university students: A corpus-based study

    Directory of Open Access Journals (Sweden)

    Wilder Yesid Escobar

    2015-05-01

    Full Text Available Recognizing that the competence to use linguistic resources appropriately according to contextual characteristics (pragmatics) is as important as culturally embedded linguistic knowledge itself (semantics), and that both are equally essential to forming competent speakers of English in foreign-language contexts, this research relies on corpus linguistics to analyze both the scope and the limitations of the sociolinguistic knowledge and communicative skills of university-level English students. To that end, a linguistic corpus was assembled, compared to an existing corpus of native speakers, and analyzed in terms of the frequency, overuse, underuse, misuse, ambiguity, success, and failure of the linguistic parameters used in speech acts. The findings describe the linguistic configurations employed to modify levels and degrees of descriptions (a salient semantic theme in the EFL learners' corpus), appealing to the sociolinguistic principles governing meaning-making and language use that are constructed under the social conditions of the environments where the language is naturally spoken for sociocultural exchange.
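Overuse and underuse of the kind this study reports are usually quantified by comparing relative frequencies in the learner corpus against a reference corpus with a keyness statistic. A minimal sketch of the log-likelihood (G²) measure, with invented counts:

```python
import math

def log_likelihood(freq_a, size_a, freq_b, size_b):
    """Log-likelihood (G2) keyness for one word: corpus A
    (learner) vs corpus B (native reference)."""
    expected_a = size_a * (freq_a + freq_b) / (size_a + size_b)
    expected_b = size_b * (freq_a + freq_b) / (size_a + size_b)
    g2 = 0.0
    for observed, expected in ((freq_a, expected_a), (freq_b, expected_b)):
        if observed > 0:
            g2 += 2 * observed * math.log(observed / expected)
    return g2

# Invented counts: a degree modifier appearing 120 times in a
# 50,000-word learner corpus vs 60 times in a 100,000-word
# native corpus (four times the relative frequency).
g2 = log_likelihood(120, 50_000, 60, 100_000)
print(round(g2, 2))
```

A G² value well above the chi-square critical value (3.84 at p < 0.05, 1 df) flags the word as a candidate for overuse; the sign of the frequency difference tells overuse from underuse.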

  10. Primary phonological planning units in spoken word production are language-specific: Evidence from an ERP study.

    Science.gov (United States)

    Wang, Jie; Wong, Andus Wing-Kuen; Wang, Suiping; Chen, Hsuan-Chih

    2017-07-19

    It is widely acknowledged in Germanic languages that segments are the primary planning units at the phonological encoding stage of spoken word production. Mixed results, however, have been found in Chinese, and it is still unclear what roles syllables and segments play in planning Chinese spoken word production. In the current study, participants were asked to first prepare and later produce disyllabic Mandarin words upon picture prompts and a response cue while electroencephalogram (EEG) signals were recorded. Each two consecutive pictures implicitly formed a pair of prime and target, whose names shared the same word-initial atonal syllable or the same word-initial segments, or were unrelated in the control conditions. Only syllable repetition induced significant effects on event-related brain potentials (ERPs) after target onset: a widely distributed positivity in the 200- to 400-ms interval and an anterior positivity in the 400- to 600-ms interval. We interpret these to reflect syllable-size representations at the phonological encoding and phonetic encoding stages. Our results provide the first electrophysiological evidence for the distinct role of syllables in producing Mandarin spoken words, supporting a language specificity hypothesis about the primary phonological units in spoken word production.

  11. Graph-based Operational Semantics of a Lazy Functional Language

    DEFF Research Database (Denmark)

    Rose, Kristoffer Høgsbro

    1992-01-01

    Presents Graph Operational Semantics (GOS): a semantic specification formalism based on structural operational semantics and term graph rewriting. Demonstrates the method by specifying the dynamic ...

  12. Language, Semantics, and Methods for Security Protocols

    DEFF Research Database (Denmark)

    Crazzolara, Federico

    ... reveal. The last few years have seen the emergence of successful intensional, event-based, formal approaches to reasoning about security protocols. The methods are concerned with reasoning about the events that a security protocol can perform, and make use of a causal dependency that exists between events. Methods like strand spaces and the inductive method of Paulson have been designed to support an intensional, event-based, style of reasoning. These methods have successfully tackled a number of protocols, though in an ad hoc fashion. They make an informal spring from a protocol to its ...-nets. They have persistent conditions and, as we show in this thesis, unfold under reasonable assumptions to a more basic kind of nets. We relate SPL-nets to strand spaces and inductive rules, as well as trace languages and event structures, so unifying a range of approaches, as well as providing conditions under ...

  13. Verbal short-term memory development and spoken language outcomes in deaf children with cochlear implants.

    Science.gov (United States)

    Harris, Michael S; Kronenberger, William G; Gao, Sujuan; Hoen, Helena M; Miyamoto, Richard T; Pisoni, David B

    2013-01-01

    Cochlear implants (CIs) help many deaf children achieve near-normal speech and language (S/L) milestones. Nevertheless, high levels of unexplained variability in S/L outcomes are limiting factors in improving the effectiveness of CIs in deaf children. The objective of this study was to longitudinally assess the role of verbal short-term memory (STM) and working memory (WM) capacity as a progress-limiting source of variability in S/L outcomes after CI in children. Longitudinal study of 66 children with CIs for prelingual severe-to-profound hearing loss. Outcome measures included performance on digit span forward (DSF), digit span backward (DSB), and four conventional S/L measures that examined spoken-word recognition (Phonetically Balanced Kindergarten word test), receptive vocabulary (Peabody Picture Vocabulary Test ), sentence-recognition skills (Hearing in Noise Test), and receptive and expressive language functioning (Clinical Evaluation of Language Fundamentals Fourth Edition Core Language Score; CELF). Growth curves for DSF and DSB in the CI sample over time were comparable in slope, but consistently lagged in magnitude relative to norms for normal-hearing peers of the same age. For DSF and DSB, 50.5% and 44.0%, respectively, of the CI sample scored more than 1 SD below the normative mean for raw scores across all ages. The first (baseline) DSF score significantly predicted all endpoint scores for the four S/L measures, and DSF slope (growth) over time predicted CELF scores. DSF baseline and slope accounted for an additional 13 to 31% of variance in S/L scores after controlling for conventional predictor variables such as: chronological age at time of testing, age at time of implantation, communication mode (auditory-oral communication versus total communication), and maternal education. Only DSB baseline scores predicted endpoint language scores on Peabody Picture Vocabulary Test and CELF. DSB slopes were not significantly related to any endpoint S/L measures

  14. Spoken language identification based on the enhanced self-adjusting extreme learning machine approach

    Science.gov (United States)

    Tiun, Sabrina; AL-Dhief, Fahad Taha; Sammour, Mahmoud A. M.

    2018-01-01

    Spoken Language Identification (LID) is the process of determining and classifying the natural language of given content and datasets. Typically, data must be processed to extract useful features to perform LID. Feature extraction for LID is, according to the literature, a mature process: standard features have already been developed, from Mel-Frequency Cepstral Coefficients (MFCC) and Shifted Delta Cepstral (SDC) features through the Gaussian Mixture Model (GMM) to the i-vector based framework. However, the learning process based on the extracted features remains to be improved (i.e. optimised) to capture all the knowledge embedded in those features. The Extreme Learning Machine (ELM) is an effective learning model for classification and regression analysis and is extremely useful for training a single-hidden-layer neural network. Nevertheless, its learning process is not entirely effective (i.e. optimised) owing to the random selection of the weights of the input-to-hidden layer. In this study, the ELM is selected as the learning model for LID based on standard feature extraction. One optimisation approach for the ELM, the Self-Adjusting Extreme Learning Machine (SA-ELM), is selected as the benchmark and improved by altering the selection phase of the optimisation process. The selection process incorporates both the Split-Ratio and K-Tournament methods; the improved SA-ELM is named the Enhanced Self-Adjusting Extreme Learning Machine (ESA-ELM). Results are generated on LID datasets created from eight different languages. The results showed the clear superiority of the Enhanced Self-Adjusting Extreme Learning Machine LID (ESA-ELM LID) over the SA-ELM LID, with ESA-ELM LID achieving an accuracy of 96.25%, compared to only 95.00% for SA-ELM LID. PMID:29672546

  15. Spoken language identification based on the enhanced self-adjusting extreme learning machine approach.

    Science.gov (United States)

    Albadr, Musatafa Abbas Abbood; Tiun, Sabrina; Al-Dhief, Fahad Taha; Sammour, Mahmoud A M

    2018-01-01

    Spoken Language Identification (LID) is the process of determining and classifying the natural language of given content and datasets. Typically, data must be processed to extract useful features to perform LID. Feature extraction for LID is, according to the literature, a mature process: standard features have already been developed, from Mel-Frequency Cepstral Coefficients (MFCC) and Shifted Delta Cepstral (SDC) features through the Gaussian Mixture Model (GMM) to the i-vector based framework. However, the learning process based on the extracted features remains to be improved (i.e. optimised) to capture all the knowledge embedded in those features. The Extreme Learning Machine (ELM) is an effective learning model for classification and regression analysis and is extremely useful for training a single-hidden-layer neural network. Nevertheless, its learning process is not entirely effective (i.e. optimised) owing to the random selection of the weights of the input-to-hidden layer. In this study, the ELM is selected as the learning model for LID based on standard feature extraction. One optimisation approach for the ELM, the Self-Adjusting Extreme Learning Machine (SA-ELM), is selected as the benchmark and improved by altering the selection phase of the optimisation process. The selection process incorporates both the Split-Ratio and K-Tournament methods; the improved SA-ELM is named the Enhanced Self-Adjusting Extreme Learning Machine (ESA-ELM). Results are generated on LID datasets created from eight different languages. The results showed the clear superiority of the Enhanced Self-Adjusting Extreme Learning Machine LID (ESA-ELM LID) over the SA-ELM LID, with ESA-ELM LID achieving an accuracy of 96.25%, compared to only 95.00% for SA-ELM LID.
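The basic ELM that both records build on can be sketched as follows: the input-to-hidden weights are drawn at random and never trained, and the output weights come from a closed-form least-squares solve. The two-dimensional "feature vectors" and toy language labels are invented; this sketch does not implement the paper's SA-ELM/ESA-ELM selection procedure.

```python
import numpy as np

def train_elm(X, y, n_hidden=16, seed=0):
    """Train a single-hidden-layer ELM: random fixed input
    weights, pseudo-inverse solve for the output weights."""
    rng = np.random.default_rng(seed)
    n_classes = int(y.max()) + 1
    W = rng.normal(size=(X.shape[1], n_hidden))  # random, never trained
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                       # hidden-layer activations
    T = np.eye(n_classes)[y]                     # one-hot targets
    beta = np.linalg.pinv(H) @ T                 # least-squares output weights
    return W, b, beta

def predict_elm(X, W, b, beta):
    return np.argmax(np.tanh(X @ W + b) @ beta, axis=1)

# Invented 2-D feature vectors for two toy "languages".
X = np.array([[0.10, 0.20], [0.20, 0.10], [0.15, 0.25], [0.05, 0.20],
              [0.90, 0.80], [0.80, 0.90], [0.85, 0.75], [0.95, 0.80]])
y = np.array([0, 0, 0, 0, 1, 1, 1, 1])
W, b, beta = train_elm(X, y)
print((predict_elm(X, W, b, beta) == y).mean())  # training accuracy
```

The randomness of `W` and `b` is exactly what the SA-ELM line of work tries to optimise: it searches over candidate random weights instead of accepting the first draw.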

  16. Emergent Literacy Skills in Preschool Children with Hearing Loss Who Use Spoken Language: Initial Findings from the Early Language and Literacy Acquisition (ELLA) Study

    Science.gov (United States)

    Werfel, Krystal L.

    2017-01-01

    Purpose: The purpose of this study was to compare change in emergent literacy skills of preschool children with and without hearing loss over a 6-month period. Method: Participants included 19 children with hearing loss and 14 children with normal hearing. Children with hearing loss used amplification and spoken language. Participants completed…

  17. L[subscript 1] and L[subscript 2] Spoken Word Processing: Evidence from Divided Attention Paradigm

    Science.gov (United States)

    Shafiee Nahrkhalaji, Saeedeh; Lotfi, Ahmad Reza; Koosha, Mansour

    2016-01-01

    The present study aims to reveal some facts concerning first language (L[subscript 1]) and second language (L[subscript 2]) spoken-word processing in unbalanced proficient bilinguals using behavioral measures. The intention here is to examine the effects of auditory repetition word priming and semantic priming in first and second languages of…

  18. Flavours of XChange, a Rule-Based Reactive Language for the (Semantic) Web

    OpenAIRE

    Bailey, James; Bry, François; Eckert, Michael; Patrânjan, Paula Lavinia

    2005-01-01

    This article introduces XChange, a rule-based reactive language for the Web. Stressing application scenarios, it first argues that high-level reactive languages are needed for bothWeb and SemanticWeb applications. Then, it discusses technologies and paradigms relevant to high-level reactive languages for the (Semantic) Web. Finally, it presents the Event-Condition-Action rules of XChange.
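Event-Condition-Action rules of the kind XChange provides can be illustrated with a toy rule engine. This is plain Python for illustration, not XChange's actual Web-oriented syntax; the price-alert rule and event fields are invented.

```python
class RuleEngine:
    """Minimal Event-Condition-Action engine: for each incoming
    event, every rule whose event type matches and whose
    condition holds fires its action."""

    def __init__(self):
        self.rules = []

    def rule(self, event_type, condition, action):
        self.rules.append((event_type, condition, action))

    def dispatch(self, event):
        fired = []
        for event_type, condition, action in self.rules:
            if event["type"] == event_type and condition(event):
                action(event)
                fired.append(event_type)
        return fired

log = []
engine = RuleEngine()
# ECA rule: ON price update IF price below 100 DO record an alert.
engine.rule("price-update",
            condition=lambda e: e["price"] < 100,
            action=lambda e: log.append(f"alert: {e['item']} at {e['price']}"))
engine.dispatch({"type": "price-update", "item": "book", "price": 80})
engine.dispatch({"type": "price-update", "item": "book", "price": 120})
print(log)
```

The separation of event detection, condition test, and action is the defining feature of the ECA paradigm the article discusses; reactive Web languages add event composition and update languages on top of this skeleton.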

  19. Identification of four class emotion from Indonesian spoken language using acoustic and lexical features

    Science.gov (United States)

    Kasyidi, Fatan; Puji Lestari, Dessi

    2018-03-01

    One of the important aspects of human-to-human communication is understanding the emotion of each party. Interaction between humans and computers also continues to develop, especially affective interaction, in which emotion recognition is an important component. This paper presents our extended work on emotion recognition for spoken Indonesian, identifying four main classes of emotion: Happy, Sad, Angry, and Contentment, using a combination of acoustic/prosodic and lexical features. We constructed an emotional speech corpus from an Indonesian television talk show, where the situations are as close as possible to natural ones. After constructing the corpus, the acoustic/prosodic and lexical features were extracted to train the emotion model. We employed machine learning algorithms such as Support Vector Machines (SVM), Naive Bayes, and Random Forest to obtain the best model. Results on the test data show that the best model, an SVM with an RBF kernel, achieves an F-measure of 0.447 using only the acoustic/prosodic features and 0.488 using both acoustic/prosodic and lexical features to recognize the four emotion classes.
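The F-measure scores reported above combine precision and recall per emotion class; for a multi-class task they are typically macro-averaged over the labels. A small sketch of that computation, with invented gold labels and predictions:

```python
def macro_f_measure(gold, predicted, labels):
    """Macro-averaged F1: mean of per-class F1 over all labels."""
    f_scores = []
    for label in labels:
        tp = sum(1 for g, p in zip(gold, predicted) if g == p == label)
        fp = sum(1 for g, p in zip(gold, predicted) if g != label and p == label)
        fn = sum(1 for g, p in zip(gold, predicted) if g == label and p != label)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = (2 * precision * recall / (precision + recall)
              if precision + recall else 0.0)
        f_scores.append(f1)
    return sum(f_scores) / len(f_scores)

labels = ["Happy", "Sad", "Angry", "Contentment"]
gold = ["Happy", "Sad", "Angry", "Contentment", "Happy", "Sad"]
pred = ["Happy", "Sad", "Happy", "Contentment", "Sad", "Sad"]
print(round(macro_f_measure(gold, pred, labels), 3))
```

Macro averaging weights each emotion class equally, which matters here because talk-show corpora tend to have very unbalanced class frequencies.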

  20. Impact of cognitive function and dysarthria on spoken language and perceived speech severity in multiple sclerosis

    Science.gov (United States)

    Feenaughty, Lynda

    Purpose: The current study investigated the separate effects of dysarthria and cognitive status on global speech timing, speech hesitation, and linguistic complexity characteristics, and how these speech behaviors influence listener impressions, for three connected speech tasks presumed to differ in cognitive-linguistic demand, across four carefully defined speaker groups: (1) MS with cognitive deficits (MSCI); (2) MS with clinically diagnosed dysarthria and intact cognition (MSDYS); (3) MS without dysarthria or cognitive deficits (MS); and (4) healthy talkers (CON). The relationship between neuropsychological test scores and speech-language production and perceptual variables for speakers with cognitive deficits was also explored. Methods: 48 speakers participated, including 36 individuals with a neurological diagnosis of MS and 12 healthy talkers. The three MS groups and the control group each contained 12 speakers (8 women and 4 men). Cognitive function was quantified using standard clinical tests of memory, information processing speed, and executive function; a standard z-score of ≤ -1.50 indicated a deficit in a given cognitive domain. Three certified speech-language pathologists determined the clinical diagnosis of dysarthria for speakers with MS. The experimental speech tasks comprised audio-recordings of an oral reading of the Grandfather passage and two spontaneous speech samples in the form of Familiar and Unfamiliar descriptive discourse. Various measures of spoken language were of interest. Suprasegmental acoustic measures included speech rate and articulatory rate. Linguistic speech hesitation measures included pause frequency (i.e., silent and filled pauses), mean silent pause duration, grammatical appropriateness of pauses, and interjection frequency. For the two discourse samples, three standard measures of language complexity were obtained: subordination index, inter-sentence cohesion adequacy, and lexical diversity. Ten listeners
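Speech rate and articulatory rate, the suprasegmental measures listed above, differ only in whether silent-pause time is excluded from the denominator. A sketch of the two computations, with invented syllable counts and durations:

```python
def speech_rates(n_syllables, total_duration_s, pause_durations_s):
    """Speech rate counts all elapsed time; articulatory rate
    excludes silent-pause time. Both in syllables per second."""
    pause_time = sum(pause_durations_s)
    speech_rate = n_syllables / total_duration_s
    articulatory_rate = n_syllables / (total_duration_s - pause_time)
    return speech_rate, articulatory_rate

# Invented sample: 120 syllables over 40 s with 8 s of silent pauses.
sr, ar = speech_rates(120, 40.0, [3.0, 2.5, 2.5])
print(round(sr, 2), round(ar, 2))
```

A speaker who pauses often shows a depressed speech rate but a relatively preserved articulatory rate, which is one way timing measures can separate hesitation effects from motor-speech slowing.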

  1. THE INFLUENCE OF LANGUAGE USE AND LANGUAGE ATTITUDE ON THE MAINTENANCE OF COMMUNITY LANGUAGES SPOKEN BY MIGRANT STUDENTS

    Directory of Open Access Journals (Sweden)

    Leni Amalia Suek

    2014-05-01

    Full Text Available The maintenance of the community languages of migrant students is heavily determined by language use and language attitudes. The superiority of a dominant language over a community language shapes migrant students' attitudes toward their native languages. When they perceive their native language as unimportant, they reduce the frequency of its use, even in the home domain. Solutions to the problem of maintaining community languages should address language use and attitudes toward community languages, which develop mostly in two important domains: school and family. Hence, the valorization of community languages should be promoted not only in the family domain but also in the school domain. Several programs, such as community language schools and community language programs, can be used by migrant students to practice and use their native languages. Since educational resources such as class sessions, teachers, and government support are limited, the family plays a significant role in stimulating positive attitudes toward the community language and in developing the use of native languages.

  2. Birds, primates, and spoken language origins: behavioral phenotypes and neurobiological substrates.

    Science.gov (United States)

    Petkov, Christopher I; Jarvis, Erich D

    2012-01-01

    Vocal learners such as humans and songbirds can learn to produce elaborate patterns of structurally organized vocalizations, whereas many other vertebrates such as non-human primates and most other bird groups either cannot or do so to a very limited degree. To explain the similarities among humans and vocal-learning birds and the differences with other species, various theories have been proposed. One set of theories are motor theories, which underscore the role of the motor system as an evolutionary substrate for vocal production learning. For instance, the motor theory of speech and song perception proposes enhanced auditory perceptual learning of speech in humans and song in birds, which suggests a considerable level of neurobiological specialization. Another, a motor theory of vocal learning origin, proposes that the brain pathways that control the learning and production of song and speech were derived from adjacent motor brain pathways. Another set of theories are cognitive theories, which address the interface between cognition and the auditory-vocal domains to support language learning in humans. Here we critically review the behavioral and neurobiological evidence for parallels and differences between the so-called vocal learners and vocal non-learners in the context of motor and cognitive theories. In doing so, we note that behaviorally vocal-production learning abilities are more distributed than categorical, as are the auditory-learning abilities of animals. We propose testable hypotheses on the extent of the specializations and cross-species correspondences suggested by motor and cognitive theories. We believe that determining how spoken language evolved is likely to become clearer with concerted efforts in testing comparative data from many non-human animal species.

  3. Development of a spoken language identification system for South African languages

    CSIR Research Space (South Africa)

    Peché, M

    2009-12-01

    Full Text Available ...and complicates the design of the system as a whole. Current benchmark results are established by the National Institute of Standards and Technology (NIST) Language Recognition Evaluation (LRE) [12]. Initially started in 1996, the next evaluation was in 2003...

  4. Auditory semantic processing in dichotic listening: effects of competing speech, ear of presentation, and sentential bias on N400s to spoken words in context.

    Science.gov (United States)

    Carey, Daniel; Mercure, Evelyne; Pizzioli, Fabrizio; Aydelott, Jennifer

    2014-12-01

    The effects of ear of presentation and competing speech on N400s to spoken words in context were examined in a dichotic sentence priming paradigm. Auditory sentence contexts with a strong or weak semantic bias were presented in isolation to the right or left ear, or with a competing signal presented in the other ear at a SNR of -12 dB. Target words were congruent or incongruent with the sentence meaning. Competing speech attenuated N400s to both congruent and incongruent targets, suggesting that the demand imposed by a competing signal disrupts the engagement of semantic comprehension processes. Bias strength affected N400 amplitudes differentially depending upon ear of presentation: weak contexts presented to the le/RH produced a more negative N400 response to targets than strong contexts, whereas no significant effect of bias strength was observed for sentences presented to the re/LH. The results are consistent with a model of semantic processing in which the RH relies on integrative processing strategies in the interpretation of sentence-level meaning. Copyright © 2014 Elsevier Ltd. All rights reserved.
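    The -12 dB SNR manipulation described in this abstract is typically implemented by scaling the competing signal relative to the target's RMS level. A minimal sketch, assuming RMS-based SNR and synthetic noise in place of the study's actual speech materials:

```python
import numpy as np

# Illustrative sketch (not the study's stimulus code): scale a competing
# signal so the target-to-masker ratio equals a desired SNR in dB.

def mix_at_snr(target, masker, snr_db):
    rms = lambda x: np.sqrt(np.mean(x ** 2))
    # Gain that makes rms(target) / rms(gain * masker) == 10**(snr_db / 20)
    gain = rms(target) / (rms(masker) * 10 ** (snr_db / 20.0))
    return target + gain * masker

rng = np.random.default_rng(0)
target = rng.standard_normal(16000)  # stand-ins for 1 s of 16 kHz audio
masker = rng.standard_normal(16000)
mixed = mix_at_snr(target, masker, -12.0)  # masker 12 dB above the target
```

    At -12 dB the masker carries roughly sixteen times the power of the target, which is why the abstract treats it as a demanding listening condition.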

  5. The Impact of Age, Background Noise, Semantic Ambiguity, and Hearing Loss on Recognition Memory for Spoken Sentences

    Science.gov (United States)

    Koeritzer, Margaret A.; Rogers, Chad S.; Van Engen, Kristin J.; Peelle, Jonathan E.

    2018-01-01

    Purpose: The goal of this study was to determine how background noise, linguistic properties of spoken sentences, and listener abilities (hearing sensitivity and verbal working memory) affect cognitive demand during auditory sentence comprehension. Method: We tested 30 young adults and 30 older adults. Participants heard lists of sentences in…

  6. The role of planum temporale in processing accent variation in spoken language comprehension.

    NARCIS (Netherlands)

    Adank, P.M.; Noordzij, M.L.; Hagoort, P.

    2012-01-01

    A repetition-suppression functional magnetic resonance imaging paradigm was used to explore the neuroanatomical substrates of processing two types of acoustic variation, speaker and accent, during spoken sentence comprehension. Recordings were made for two speakers and two accents: Standard Dutch and a

  7. Trust Levels Definition On Virtual Learning Platforms Through Semantic Languages

    Directory of Open Access Journals (Sweden)

    Carlos E. Montenegro-Marin

    2010-12-01

    Full Text Available The trust level concept is a topic that has opened a knowledge area around profile evaluation and people's participation in social networks. These networks have shown considerable knowledge gains, but at the same time it is necessary to analyze a group of variables to determine the participants' degree of trust. In addition, for some years this topic has generated strong expectations around establishing alternatives for generating confidence in an active community on the internet. To establish these parameters it is important to define a model that abstracts the variables involved in this process. For this, it is relevant to take into account semantic languages as one of the alternatives that allow these kinds of activities. The purpose of this article is to analyze the definition of trust levels for the content that is shared on open-source virtual learning platforms, through the use of a representation model for semantic languages. The latter allow determining trust in the use of the learning objects shared on this kind of platform.

  8. The impact of second language learning on semantic and nonsemantic first language reading.

    Science.gov (United States)

    Nosarti, Chiara; Mechelli, Andrea; Green, David W; Price, Cathy J

    2010-02-01

    The relationship between orthography (spelling) and phonology (speech sounds) varies across alphabetic languages. Consequently, learning to read a second alphabetic language, that uses the same letters as the first, increases the phonological associations that can be linked to the same orthographic units. In subjects with English as their first language, previous functional imaging studies have reported increased left ventral prefrontal activation for reading words with spellings that are inconsistent with their orthographic neighbors (e.g., PINT) compared with words that are consistent with their orthographic neighbors (e.g., SHIP). Here, using functional magnetic resonance imaging (fMRI) in 17 Italian-English and 13 English-Italian bilinguals, we demonstrate that left ventral prefrontal activation for first language reading increases with second language vocabulary knowledge. This suggests that learning a second alphabetic language changes the way that words are read in the first alphabetic language. Specifically, first language reading is more reliant on both lexical/semantic and nonlexical processing when new orthographic to phonological mappings are introduced by second language learning. Our observations were in a context that required participants to switch between languages. They motivate future fMRI studies to test whether first language reading is also altered in contexts when the second language is not in use.

  9. Semantic Web Services with Web Ontology Language (OWL-S) - Specification of Agent-Services for DARPA Agent Markup Language (DAML)

    National Research Council Canada - National Science Library

    Sycara, Katia P

    2006-01-01

    CMU did research and development on semantic web services using OWL-S, the semantic web service language under the Defense Advanced Research Projects Agency- DARPA Agent Markup Language (DARPA-DAML) program...

  10. Does it really matter whether students' contributions are spoken versus typed in an intelligent tutoring system with natural language?

    Science.gov (United States)

    D'Mello, Sidney K; Dowell, Nia; Graesser, Arthur

    2011-03-01

    There is the question of whether learning differs when students speak versus type their responses when interacting with intelligent tutoring systems with natural language dialogues. Theoretical bases exist for three contrasting hypotheses. The speech facilitation hypothesis predicts that spoken input will increase learning, whereas the text facilitation hypothesis predicts typed input will be superior. The modality equivalence hypothesis claims that learning gains will be equivalent. Previous experiments that tested these hypotheses were confounded by automated speech recognition systems with substantial error rates that were detected by learners. We addressed this concern in two experiments via a Wizard of Oz procedure, where a human intercepted the learner's speech and transcribed the utterances before submitting them to the tutor. The overall pattern of the results supported the following conclusions: (1) learning gains associated with spoken and typed input were on par and quantitatively higher than a no-intervention control, (2) participants' evaluations of the session were not influenced by modality, and (3) there were no modality effects associated with differences in prior knowledge and typing proficiency. Although the results generally support the modality equivalence hypothesis, highly motivated learners reported lower cognitive load and demonstrated increased learning when typing compared with speaking. We discuss the implications of our findings for intelligent tutoring systems that can support typed and spoken input.

  11. The semantics of English Borrowings in Arabic Media Language: The case of Arab Gulf States Newspapers

    Directory of Open Access Journals (Sweden)

    Anwar A. H. Al-Athwary

    2016-07-01

    Full Text Available The present paper investigates the semantics of English loanwords in Arabic media language (AML). The loanword data are collected from a number of Arab Gulf states newspapers (AGSNs). They are analyzed semantically from the points of view of semantic change, semantic domains, and the phenomenon of synonymy resulting from lexical borrowing. The semantic analysis has revealed that AML borrowings from English occur in fifteen distinctive semantic domains. Domains related to terms of a technical and scientific nature rank much higher (9% - 18%) than domains containing nontechnical elements (1% - 8%), with the computer and technology category (18%) being the most dominant domain. Almost all common mechanisms of semantic change (extension, restriction, amelioration, pejoration, and metaphorical extension) are found at work in the context of AML borrowings. The tendency of semantic change in the overwhelming majority of AML borrowings is towards restriction. Factors like need, semantic similarity, and social and psychological considerations (e.g. prestige, taboo) seem to be the potent factors at interplay in semantic change. The first two, i.e. need and semantic similarity, are the most common reasons in most types of semantic change. The problem of synonymy lies in those loanwords that have "Arabic equivalents" in the language. The study claims that this phenomenon could be attributed to the two simultaneous processes of lexical borrowing and ʔištiqa:q (the modern effort of deriving equivalent neologisms).

  12. Natural language acquisition in large scale neural semantic networks

    Science.gov (United States)

    Ealey, Douglas

    This thesis puts forward the view that a purely signal-based approach to natural language processing is both plausible and desirable. By questioning the veracity of symbolic representations of meaning, it argues for a unified, non-symbolic model of knowledge representation that is both biologically plausible and, potentially, highly efficient. Processes to generate a grounded, neural form of this model, dubbed the semantic filter, are discussed. The combined effects of local neural organisation, coincident with perceptual maturation, are used to hypothesise its nature. This theoretical model is then validated in light of a number of fundamental neurological constraints and milestones. The mechanisms of semantic and episodic development that the model predicts are then used to explain linguistic properties, such as propositions and verbs, syntax and scripting. To mimic the growth of locally densely connected structures upon an unbounded neural substrate, a system is developed that can grow arbitrarily large, data-dependent structures composed of individual self-organising neural networks. The maturational nature of the data used results in a structure in which the perception of concepts is refined by the networks, but demarcated by subsequent structure. As a consequence, the overall structure shows significant memory and computational benefits, as predicted by the cognitive and neural models. Furthermore, the localised nature of the neural architecture also avoids the increasing error sensitivity and redundancy of traditional systems as the training domain grows. The semantic and episodic filters have been demonstrated to perform as well, or better, than more specialist networks, whilst using significantly larger vocabularies, more complex sentence forms and more natural corpora.

  13. Audiovisual spoken word recognition as a clinical criterion for sensory aids efficiency in Persian-language children with hearing loss.

    Science.gov (United States)

    Oryadi-Zanjani, Mohammad Majid; Vahab, Maryam; Bazrafkan, Mozhdeh; Haghjoo, Asghar

    2015-12-01

    The aim of this study was to examine the role of audiovisual speech recognition as a clinical criterion of cochlear implant (CI) or hearing aid (HA) efficiency in Persian-language children with severe-to-profound hearing loss. This research was administered as a cross-sectional study. The sample size was 60 Persian 5-7 year old children. The assessment tool was one of the subtests of the Persian version of the Test of Language Development-Primary 3. The study included two experiments: auditory-only and audiovisual presentation conditions. The test was a closed set of 30 words which were orally presented by a speech-language pathologist. The scores of audiovisual word perception were significantly higher than in the auditory-only condition in the children with normal hearing (P < 0.05), whereas the children with hearing loss showed no such significant difference between the auditory-only and audiovisual presentation conditions (P > 0.05). Audiovisual spoken word recognition can be applied as a clinical criterion to assess children with severe-to-profound hearing loss in order to determine whether a cochlear implant or hearing aid has been efficient for them; i.e. if a child with hearing impairment who uses a CI or HA can obtain higher scores in audiovisual spoken word recognition than in the auditory-only condition, his/her auditory skills have developed appropriately thanks to an effective CI or HA, one of the main factors of auditory habilitation. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  14. How Does the Linguistic Distance between Spoken and Standard Language in Arabic Affect Recall and Recognition Performances during Verbal Memory Examination

    Science.gov (United States)

    Taha, Haitham

    2017-01-01

    The current research examined how Arabic diglossia affects verbal learning memory. Thirty native Arab college students were tested using auditory verbal memory test that was adapted according to the Rey Auditory Verbal Learning Test and developed in three versions: Pure spoken language version (SL), pure standard language version (SA), and…

  15. The role of planum temporale in processing accent variation in spoken language comprehension

    NARCIS (Netherlands)

    Adank, P.M.; Noordzij, M.L.; Hagoort, P.

    2012-01-01

    A repetition–suppression functional magnetic resonance imaging paradigm was used to explore the neuroanatomical substrates of processing two types of acoustic variation—speaker and accent—during spoken sentence comprehension. Recordings were made for two speakers and two accents: Standard Dutch and

  16. The effects of bilingual language proficiency on recall accuracy and semantic clustering in free recall output: evidence for shared semantic associations across languages.

    Science.gov (United States)

    Francis, Wendy S; Taylor, Randolph S; Gutiérrez, Marisela; Liaño, Mary K; Manzanera, Diana G; Penalver, Renee M

    2018-05-19

    Two experiments investigated how well bilinguals utilise long-standing semantic associations to encode and retrieve semantic clusters in verbal episodic memory. In Experiment 1, Spanish-English bilinguals (N = 128) studied and recalled word and picture sets. Word recall was equivalent in L1 and L2, picture recall was better in L1 than in L2, and the picture superiority effect was stronger in L1 than in L2. Semantic clustering in word and picture recall was equivalent in L1 and L2. In Experiment 2, Spanish-English bilinguals (N = 128) and English-speaking monolinguals (N = 128) studied and recalled word sequences that contained semantically related pairs. Data were analyzed using a multinomial processing tree approach, the pair-clustering model. Cluster formation was more likely for semantically organised than for randomly ordered word sequences. Probabilities of cluster formation, cluster retrieval, and retrieval of unclustered items did not differ across languages or language groups. Language proficiency has little if any impact on the utilisation of long-standing semantic associations, which are language-general.
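    The pair-clustering model named in this abstract is a classic multinomial processing tree (after Batchelder and Riefer) with parameters for cluster storage (c), cluster retrieval (r), and retrieval of unclustered items (u). A sketch of its category probabilities, assuming the standard formulation rather than the paper's exact instantiation:

```python
# Hedged sketch of the classic pair-clustering multinomial processing tree.
# Parameter names c (cluster storage), r (cluster retrieval), and
# u (single-item retrieval) follow the standard formulation; the paper may
# instantiate the model differently.

def pair_clustering_probs(c, r, u):
    """Probabilities of the four observable recall events for a studied
    pair: E1 both recalled adjacently, E2 both recalled non-adjacently,
    E3 exactly one recalled, E4 neither recalled."""
    return {
        "E1": c * r,
        "E2": (1 - c) * u ** 2,
        "E3": 2 * (1 - c) * u * (1 - u),
        "E4": c * (1 - r) + (1 - c) * (1 - u) ** 2,
    }

p = pair_clustering_probs(c=0.6, r=0.7, u=0.4)
assert abs(sum(p.values()) - 1.0) < 1e-12  # the four events are exhaustive
```

    Fitting such a tree to the observed event frequencies is what lets the study separate cluster *formation* from cluster *retrieval*, the distinction its conclusions rest on.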

  17. Phonological processing of rhyme in spoken language and location in sign language by deaf and hearing participants: a neurophysiological study.

    Science.gov (United States)

    Colin, C; Zuinen, T; Bayard, C; Leybaert, J

    2013-06-01

    Sign languages (SL), like oral languages (OL), organize elementary, meaningless units into meaningful semantic units. Our aim was to compare, at the behavioral and neurophysiological levels, the processing of the location parameter in French Belgian SL to that of rhyme in oral French. Ten hearing and 10 profoundly deaf adults performed a rhyme judgment task in OL and a similarity judgment on location in SL. Stimuli were pairs of pictures. As regards OL, deaf subjects' performances, although above chance level, were significantly lower than those of hearing subjects, suggesting that a metaphonological analysis is possible for deaf people but rests on phonological representations that are less precise than in hearing people. As regards SL, deaf subjects' scores indicated that a metaphonological judgment may be performed on location. The contingent negative variation (CNV) evoked by the first picture of a pair was similar in hearing subjects in OL and in deaf subjects in OL and SL. However, an N400 evoked by the second picture of the non-rhyming pairs was evidenced only in hearing subjects in OL. The absence of an N400 in deaf subjects may be interpreted as the failure to associate two words according to their rhyme in OL or to their location in SL. Although deaf participants can perform metaphonological judgments in OL, they differ from hearing participants both behaviorally and in ERPs. Judgment of location in SL is possible for deaf signers but, contrary to rhyme judgment in hearing participants, does not elicit any N400. Copyright © 2013 Elsevier Masson SAS. All rights reserved.

  18. Social inclusion for children with hearing loss in listening and spoken Language early intervention: an exploratory study.

    Science.gov (United States)

    Constantinescu-Sharpe, Gabriella; Phillips, Rebecca L; Davis, Aleisha; Dornan, Dimity; Hogan, Anthony

    2017-03-14

    Social inclusion is a common focus of listening and spoken language (LSL) early intervention for children with hearing loss. This exploratory study compared the social inclusion of young children with hearing loss educated using a listening and spoken language approach with population data. A framework for understanding the scope of social inclusion is presented in the Background. This framework guided the use of a shortened, modified version of the Longitudinal Study of Australian Children (LSAC) to measure two of the five facets of social inclusion ('education' and 'interacting with society and fulfilling social goals'). The survey was completed by parents of children with hearing loss aged 4-5 years who were educated using a LSL approach (n = 78; 37% response rate). These responses were compared to those obtained for typical hearing children in the LSAC dataset (n = 3265). Analyses revealed that most children with hearing loss had comparable outcomes to those with typical hearing on the 'education' and 'interacting with society and fulfilling social roles' facets of social inclusion. These exploratory findings are positive and warrant further investigation across all five facets of the framework to identify which factors influence social inclusion.

  19. Thai Language Sentence Similarity Computation Based on Syntactic Structure and Semantic Vector

    Science.gov (United States)

    Wang, Hongbin; Feng, Yinhan; Cheng, Liang

    2018-03-01

    Sentence similarity computation plays an increasingly important role in text mining, Web page retrieval, machine translation, speech recognition, and question answering systems. Thai is a resource-scarce language; unlike Chinese, it has no resources comparable to HowNet and CiLin, so research on Thai sentence similarity faces particular challenges. To address this problem, this paper proposes a novel method for computing the similarity of Thai sentences based on syntactic structure and semantic vectors. The method first uses Part-of-Speech (POS) dependencies to calculate the syntactic-structure similarity of two sentences, and then uses word vectors to calculate their semantic similarity. Finally, the two scores are combined to compute the overall similarity of the two Thai sentences. The proposed method considers not only semantics but also sentence syntactic structure. The experimental results show that this method is feasible for Thai sentence similarity computation.
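    The general recipe in this abstract, a syntactic score and a semantic score combined linearly, can be sketched with toy inputs. The real system uses POS *dependencies* and trained Thai word embeddings; the Jaccard-over-POS-tags score, the weights, and the tiny vectors below are illustrative stand-ins only.

```python
import math

# Toy sketch of "syntactic + semantic" sentence similarity. All inputs
# (POS tags, embeddings, alpha weight) are invented for illustration.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def sentence_vector(words, embeddings):
    """Average the word vectors; unknown words contribute zeros."""
    dims = len(next(iter(embeddings.values())))
    vec = [0.0] * dims
    for w in words:
        for i, v in enumerate(embeddings.get(w, [0.0] * dims)):
            vec[i] += v
    return [x / len(words) for x in vec]

def similarity(s1, s2, pos1, pos2, embeddings, alpha=0.5):
    # Crude syntactic score: Jaccard overlap of POS-tag sets (the paper
    # uses POS dependencies; this is a simplification)
    syntactic = len(set(pos1) & set(pos2)) / len(set(pos1) | set(pos2))
    semantic = cosine(sentence_vector(s1, embeddings),
                      sentence_vector(s2, embeddings))
    return alpha * syntactic + (1 - alpha) * semantic
```

    With `alpha` near 1 the score is dominated by surface syntax, with `alpha` near 0 by the embedding space; the paper's contribution is precisely that both signals are needed.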

  20. Is spoken Danish less intelligible than Swedish?

    NARCIS (Netherlands)

    Gooskens, Charlotte; van Heuven, Vincent J.; van Bezooijen, Renee; Pacilly, Jos J. A.

    2010-01-01

    The most straightforward way to explain why Danes understand spoken Swedish relatively better than Swedes understand spoken Danish would be that spoken Danish is intrinsically a more difficult language to understand than spoken Swedish. We discuss circumstantial evidence suggesting that Danish is

  1. Does Discourse Congruence Influence Spoken Language Comprehension before Lexical Association? Evidence from Event-Related Potentials

    Science.gov (United States)

    Boudewyn, Megan A.; Gordon, Peter C.; Long, Debra; Polse, Lara; Swaab, Tamara Y.

    2011-01-01

    The goal of this study was to examine how lexical association and discourse congruence affect the time course of processing incoming words in spoken discourse. In an ERP norming study, we presented prime-target pairs in the absence of a sentence context to obtain a baseline measure of lexical priming. We observed a typical N400 effect when participants heard critical associated and unassociated target words in word pairs. In a subsequent experiment, we presented the same word pairs in spoken discourse contexts. Target words were always consistent with the local sentence context, but were congruent or not with the global discourse (e.g., “Luckily Ben had picked up some salt and pepper/basil”, preceded by a context in which Ben was preparing marinara sauce (congruent) or dealing with an icy walkway (incongruent). ERP effects of global discourse congruence preceded those of local lexical association, suggesting an early influence of the global discourse representation on lexical processing, even in locally congruent contexts. Furthermore, effects of lexical association occurred earlier in the congruent than incongruent condition. These results differ from those that have been obtained in studies of reading, suggesting that the effects may be unique to spoken word recognition. PMID:23002319

  2. Generation of Signs within Semantic and Phonological Categories: Data from Deaf Adults and Children Who Use American Sign Language

    Science.gov (United States)

    Beal-Alvarez, Jennifer S.; Figueroa, Daileen M.

    2017-01-01

    Two key areas of language development include semantic and phonological knowledge. Semantic knowledge relates to word and concept knowledge. Phonological knowledge relates to how language parameters combine to create meaning. We investigated signing deaf adults' and children's semantic and phonological sign generation via one-minute tasks,…

  3. Episodic grammar: a computational model of the interaction between episodic and semantic memory in language processing

    NARCIS (Netherlands)

    Borensztajn, G.; Zuidema, W.; Carlson, L.; Hoelscher, C.; Shipley, T.F.

    2011-01-01

    We present a model of the interaction of semantic and episodic memory in language processing. Our work shows how language processing can be understood in terms of memory retrieval. We point out that the perceived dichotomy between rule-based versus exemplar-based language modelling can be

  4. EVALUATIVE LANGUAGE IN SPOKEN AND SIGNED STORIES TOLD BY A DEAF CHILD WITH A COCHLEAR IMPLANT: WORDS, SIGNS OR PARALINGUISTIC EXPRESSIONS?

    Directory of Open Access Journals (Sweden)

    Ritva Takkinen

    2011-01-01

    Full Text Available In this paper the use and quality of the evaluative language produced by a bilingual child in a story-telling situation are analysed. The subject, an 11-year-old Finnish boy, Jimmy, is bilingual in Finnish sign language (FinSL) and spoken Finnish. He was born deaf but received a cochlear implant at the age of five. The data consist of a spoken and a signed version of "The Frog Story". The analysis shows that evaluative devices and expressions differ between the spoken and signed stories told by the child. In his Finnish story he uses mostly lexical devices: comments on a character and the character's actions, as well as quoted speech occasionally combined with prosodic features. In his FinSL story he uses both lexical and paralinguistic devices in a balanced way.

  5. Towards Compatible and Interderivable Semantic Specifications for the Scheme Programming Language, Part I: Denotational Semantics, Natural Semantics, and Abstract Machines

    DEFF Research Database (Denmark)

    Danvy, Olivier

    2009-01-01

    We derive two big-step abstract machines, a natural semantics, and the valuation function of a denotational semantics based on the small-step abstract machine for Core Scheme presented by Clinger at PLDI'98. Starting from a functional implementation of this small-step abstract machine, (1) we fus...

  6. Spoken Dialogue Systems

    CERN Document Server

    Jokinen, Kristiina

    2009-01-01

    Considerable progress has been made in recent years in the development of dialogue systems that support robust and efficient human-machine interaction using spoken language. Spoken dialogue technology allows various interactive applications to be built and used for practical purposes, and research focuses on issues that aim to increase the system's communicative competence by including aspects of error correction, cooperation, multimodality, and adaptation in context. This book gives a comprehensive view of state-of-the-art techniques that are used to build spoken dialogue systems. It provides

  7. WEATHER FORECAST DATA SEMANTIC ANALYSIS IN F-LOGIC

    Directory of Open Access Journals (Sweden)

    Ana Meštrović

    2007-06-01

    Full Text Available This paper addresses the semantic analysis problem in a spoken dialog system developed for the domain of weather forecasts. The main goal of semantic analysis is to extract the meaning from spoken utterances and to transform it into a domain database format. In this work a semantic database for the domain of weather forecasts is represented using the F-logic formalism. Semantic knowledge is captured through semantic categories, a semantic dictionary of phrases, and output templates. Procedures for the semantic analysis of Croatian weather data combine parsing techniques for the Croatian language with a slot-filling approach. Semantic analysis is conducted in three phases. In the first phase, the main semantic category of the input utterance is determined; lattices are used for hierarchical semantic-relation representation and main-category derivation. In the second phase, semantic units are analyzed and knowledge slots in the database are filled. Since some slot values of the input data are missing, in the third phase the incomplete data are updated with the missing values. All rules for semantic analysis are defined in F-logic and implemented using the FLORA-2 system. The results of the semantic analysis evaluation, in terms of frame and slot error rates, are presented.
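    In Python rather than F-logic/FLORA-2, the three phases of such a pipeline might be caricatured as follows. The categories, phrase patterns, and default slot values are invented examples, not the system's actual Croatian resources:

```python
import re

# Toy three-phase semantic analysis: (1) main category from keyword
# evidence, (2) slot filling from phrase patterns, (3) completion of
# missing slots with defaults. All resources below are invented.

CATEGORY_KEYWORDS = {"forecast": ["tomorrow", "expect"],
                     "report": ["today", "measured"]}
SLOT_PATTERNS = {
    "temperature": re.compile(r"(-?\d+)\s*degrees"),
    "condition": re.compile(r"\b(rain|snow|sun|fog)\b"),
}
DEFAULTS = {"temperature": None, "condition": "unknown",
            "region": "whole country"}

def analyse(utterance):
    text = utterance.lower()
    # Phase 1: main semantic category with the most keyword hits
    category = max(CATEGORY_KEYWORDS,
                   key=lambda c: sum(k in text for k in CATEGORY_KEYWORDS[c]))
    # Phase 2: fill knowledge slots from phrase patterns
    frame = {"category": category}
    for slot, pattern in SLOT_PATTERNS.items():
        m = pattern.search(text)
        if m:
            frame[slot] = m.group(1)
    # Phase 3: complete missing slots with defaults
    for slot, default in DEFAULTS.items():
        frame.setdefault(slot, default)
    return frame

frame = analyse("Tomorrow we expect rain and 12 degrees")
```

    The frame/slot error rates the paper reports correspond to mistakes in exactly these two outputs: the chosen frame (category) and the filled slot values.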

  8. A Multilingual Approach to Analysing Standardized Test Results: Immigrant Primary School Children and the Role of Languages Spoken in a Bi-/Multilingual Community

    Science.gov (United States)

    De Angelis, Gessica

    2014-01-01

    The present study adopts a multilingual approach to analysing the standardized test results of primary school immigrant children living in the bi-/multilingual context of South Tyrol, Italy. The standardized test results are from the Invalsi test administered across Italy in 2009/2010. In South Tyrol, several languages are spoken on a daily basis…

  9. Self-Ratings of Spoken Language Dominance: A Multilingual Naming Test (MINT) and Preliminary Norms for Young and Aging Spanish-English Bilinguals

    Science.gov (United States)

    Gollan, Tamar H.; Weissberger, Gali H.; Runnqvist, Elin; Montoya, Rosa I.; Cera, Cynthia M.

    2012-01-01

    This study investigated correspondence between different measures of bilingual language proficiency contrasting self-report, proficiency interview, and picture naming skills. Fifty-two young (Experiment 1) and 20 aging (Experiment 2) Spanish-English bilinguals provided self-ratings of proficiency level, were interviewed for spoken proficiency, and…

  10. Long-term memory traces for familiar spoken words in tonal languages as revealed by the Mismatch Negativity

    Directory of Open Access Journals (Sweden)

    Naiphinich Kotchabhakdi

    2004-11-01

    Mismatch negativity (MMN), a primary response to an acoustic change and an index of sensory memory, was used to investigate the processing of the discrimination between familiar and unfamiliar consonant-vowel (CV) speech contrasts. The MMN was elicited by rare familiar words presented among repetitive unfamiliar words. Phonetic and phonological contrasts were identical in all conditions. The MMN elicited by the familiar word deviant was larger than that elicited by the unfamiliar word deviant. The presence of syllable contrast did significantly alter the word-elicited MMN in amplitude and scalp voltage field distribution. Thus, our results indicate the existence of word-related MMN enhancement largely independent of the word status of the standard stimulus. This enhancement may reflect the presence of a long-term memory trace for familiar spoken words in tonal languages.

  11. Let's all speak together! Exploring the masking effects of various languages on spoken word identification in multi-linguistic babble.

    Science.gov (United States)

    Gautreau, Aurore; Hoen, Michel; Meunier, Fanny

    2013-01-01

    This study aimed to characterize the linguistic interference that occurs during speech-in-speech comprehension by combining offline and online measures, which included an intelligibility task (at a -5 dB Signal-to-Noise Ratio) and 2 lexical decision tasks (at a -5 dB and 0 dB SNR) that were performed with French spoken target words. In these 3 experiments we always compared the masking effects of speech backgrounds (i.e., 4-talker babble) that were produced in the same language as the target language (i.e., French) or in unknown foreign languages (i.e., Irish and Italian) to the masking effects of corresponding non-speech backgrounds (i.e., speech-derived fluctuating noise). The fluctuating noise contained spectro-temporal information similar to babble but lacked linguistic information. At -5 dB SNR, both tasks revealed significantly divergent results for the two unknown languages: Italian and French hindered French target word identification to a similar extent, whereas Irish led to significantly better performance on these tasks. By comparing the performances obtained with speech and fluctuating noise backgrounds, we were able to evaluate the effect of each language. The intelligibility task showed a significant difference between babble and fluctuating noise for French, Irish and Italian, suggesting acoustic and linguistic effects for each language. However, the lexical decision task, which reduces the effect of post-lexical interference, appeared to be more accurate, as it only revealed a linguistic effect for French. Thus, although French and Italian had equivalent masking effects on French word identification, the nature of their interference was different. This finding suggests that the differences observed between the masking effects of Italian and Irish can be explained at an acoustic level but not at a linguistic level.

  12. An adaptive semantic matching paradigm for reliable and valid language mapping in individuals with aphasia.

    Science.gov (United States)

    Wilson, Stephen M; Yen, Melodie; Eriksson, Dana K

    2018-04-17

    Research on neuroplasticity in recovery from aphasia depends on the ability to identify language areas of the brain in individuals with aphasia. However, tasks commonly used to engage language processing in people with aphasia, such as narrative comprehension and picture naming, are limited in terms of reliability (test-retest reproducibility) and validity (identification of language regions, and not other regions). On the other hand, paradigms such as semantic decision that are effective in identifying language regions in people without aphasia can be prohibitively challenging for people with aphasia. This paper describes a new semantic matching paradigm that uses an adaptive staircase procedure to present individuals with stimuli that are challenging yet within their competence, so that language processing can be fully engaged in people with and without language impairments. The feasibility, reliability and validity of the adaptive semantic matching paradigm were investigated in sixteen individuals with chronic post-stroke aphasia and fourteen neurologically normal participants, in comparison to narrative comprehension and picture naming paradigms. All participants succeeded in learning and performing the semantic paradigm. Test-retest reproducibility of the semantic paradigm in people with aphasia was good (Dice coefficient = 0.66), and was superior to the other two paradigms. The semantic paradigm revealed known features of typical language organization (lateralization; frontal and temporal regions) more consistently in neurologically normal individuals than the other two paradigms, constituting evidence for validity. In sum, the adaptive semantic matching paradigm is a feasible, reliable and valid method for mapping language regions in people with aphasia. © 2018 Wiley Periodicals, Inc.
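
    The adaptive staircase idea described above, stepping difficulty up after correct responses and down after errors so that performance hovers near threshold, can be sketched as follows. This is a generic 1-up/1-down staircase in Python, not the authors' exact procedure; the step size, bounds, and starting level are assumptions.

```python
# Generic 1-up/1-down adaptive staircase: the task gets harder after a
# correct response and easier after an error, keeping stimuli "challenging
# yet within competence". Parameters here are illustrative assumptions.

def run_staircase(responses, start_level=5, step=1, lo=1, hi=10):
    """Return the difficulty level presented on each trial.

    responses: iterable of booleans (True = correct answer).
    """
    level = start_level
    levels = []
    for correct in responses:
        levels.append(level)
        # Step up (harder) after a correct answer, down (easier) after an error.
        level = min(hi, level + step) if correct else max(lo, level - step)
    return levels

print(run_staircase([True, True, False, True, False]))
# the presented levels drift toward the participant's performance threshold
```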

  13. "Now We Have Spoken."

    Science.gov (United States)

    Zimmer, Patricia Moore

    2001-01-01

    Describes the author's experiences directing a play translated and acted in Korean. Notes that she had to get familiar with the sound of the language spoken fluently, to see how an actor's thought is discerned when the verbal language is not understood. Concludes that so much of understanding and communication unfolds in ways other than with…

  14. Towards Compatible and Interderivable Semantic Specifications for the Scheme Programming Language, Part I: Denotational Semantics, Natural Semantics, and Abstract Machines

    DEFF Research Database (Denmark)

    Danvy, Olivier

    2008-01-01

    We derive two big-step abstract machines, a natural semantics, and the valuation function of a denotational semantics based on the small-step abstract machine for Core Scheme presented by Clinger at PLDI'98. Starting from a functional implementation of this small-step abstract machine, (1) we fuse its transition function with its driver loop, obtaining the functional implementation of a big-step abstract machine; (2) we adjust this big-step abstract machine so that it is in defunctionalized form, obtaining the functional implementation of a second big-step abstract machine; (3) we refunctionalize this adjusted abstract machine, obtaining the functional implementation of a natural semantics in continuation style; and (4) we closure-unconvert this natural semantics, obtaining a compositional continuation-passing evaluation function which we identify as the functional implementation…
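
    Step (1) of the derivation, fusing a small-step transition function with its driver loop to obtain a big-step evaluator, can be illustrated on a toy language. The sketch below uses a minimal addition-only language in Python rather than Clinger's Core Scheme machine; the language and its tuple representation are assumptions made for brevity.

```python
# Small-step style: one reduction at a time, driven by an external loop.
def step(expr):
    # ("add", a, b): reduce the leftmost non-value subterm; an int is final.
    op, a, b = expr
    if isinstance(a, tuple):
        return (op, step(a), b)
    if isinstance(b, tuple):
        return (op, a, step(b))
    return a + b

def drive(expr):
    # Driver loop: iterate the transition function until a final state.
    while isinstance(expr, tuple):
        expr = step(expr)
    return expr

# Big-step style: the "fused" evaluator runs each subterm to completion
# directly, with no intermediate configurations.
def evaluate(expr):
    if isinstance(expr, tuple):
        _, a, b = expr
        return evaluate(a) + evaluate(b)
    return expr

e = ("add", ("add", 1, 2), 3)
print(drive(e), evaluate(e))  # both print 6
```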

  15. Quarterly Data for Spoken Language Preferences of Social Security Retirement and Survivor Claimants (2014-2015)

    Data.gov (United States)

    Social Security Administration — This data set provides quarterly volumes for language preferences at the national level of individuals filing claims for Retirement and Survivor benefits for fiscal...

  16. Quarterly Data for Spoken Language Preferences of Social Security Retirement and Survivor Claimants (2016-onwards)

    Data.gov (United States)

    Social Security Administration — This data set provides quarterly volumes for language preferences at the national level of individuals filing claims for Retirement and Survivor benefits from fiscal...

  17. Effects of Iconicity and Semantic Relatedness on Lexical Access in American Sign Language

    Science.gov (United States)

    Bosworth, Rain G.; Emmorey, Karen

    2010-01-01

    Iconicity is a property that pervades the lexicon of many sign languages, including American Sign Language (ASL). Iconic signs exhibit a motivated, nonarbitrary mapping between the form of the sign and its meaning. We investigated whether iconicity enhances semantic priming effects for ASL and whether iconic signs are recognized more quickly than…

  18. False belief and semantic language development in children aged 2 to 4 years

    Directory of Open Access Journals (Sweden)

    Milton Eduardo Bermúdez-Jaimes

    2010-02-01

    We intended to explore and characterize the relationships between the development of theory of mind in childhood and the semantic development of language. We used three versions of the false belief task, programmed with Flash, and the Early Language Development Battery in order to assess semantic abilities in 116 children aged two to four years. Significant differences among ages were found for task performance, and positive associations between social comprehension and language development were found in two tasks. Results were interpreted through the interaction proposal by Wellman (1994).

  19. Formal semantic specifications as implementation blueprints for real-time programming languages

    Science.gov (United States)

    Feyock, S.

    1981-01-01

    Formal definitions of language and system semantics provide highly desirable checks on the correctness of implementations of programming languages and their runtime support systems. If these definitions can give concrete guidance to the implementor, major increases in implementation accuracy and decreases in implementation effort can be achieved. It is shown that of the wide variety of available methods the Hgraph (hypergraph) definitional technique (Pratt, 1975), is best suited to serve as such an implementation blueprint. A discussion and example of the Hgraph technique is presented, as well as an overview of the growing body of implementation experience of real-time languages based on Hgraph semantic definitions.

  20. Task-Oriented Spoken Dialog System for Second-Language Learning

    Science.gov (United States)

    Kwon, Oh-Woog; Kim, Young-Kil; Lee, Yunkeun

    2016-01-01

    This paper introduces a Dialog-Based Computer Assisted second-Language Learning (DB-CALL) system using task-oriented dialogue processing technology. The system promotes dialogue with a second-language learner for a specific task, such as purchasing tour tickets, ordering food, passing through immigration, etc. The dialog system plays a role of a…

  1. Web-based mini-games for language learning that support spoken interaction

    CSIR Research Space (South Africa)

    Strik, H

    2015-09-01

    The European ‘Lifelong Learning Programme’ (LLP) project ‘Games Online for Basic Language learning’ (GOBL) aimed to provide youths and adults wishing to improve their basic language skills with access to materials for the development of communicative...

  2. Propositional Density in Spoken and Written Language of Czech-Speaking Patients with Mild Cognitive Impairment

    Science.gov (United States)

    Smolík, Filip; Stepankova, Hana; Vyhnálek, Martin; Nikolai, Tomáš; Horáková, Karolína; Matejka, Štepán

    2016-01-01

    Purpose: Propositional density (PD) is a measure of content richness in language production that declines in normal aging and more profoundly in dementia. The present study aimed to develop a PD scoring system for Czech and use it to compare PD in language productions of older people with amnestic mild cognitive impairment (aMCI) and control…

  3. ONTOLOGY BASED MEANINGFUL SEARCH USING SEMANTIC WEB AND NATURAL LANGUAGE PROCESSING TECHNIQUES

    Directory of Open Access Journals (Sweden)

    K. Palaniammal

    2013-10-01

    The semantic web extends the current World Wide Web by adding facilities for the machine-understood description of meaning. An ontology-based search model is used to enhance the efficiency and accuracy of information retrieval. Ontology is the core technology for the semantic web and the mechanism for representing formal, shared domain descriptions. In this paper, we propose ontology-based meaningful search using semantic web and Natural Language Processing (NLP) techniques in the educational domain. First we build the educational ontology; then we present the semantic search system. The search model consists of three parts: embedded spell-checking, finding synonyms using the WordNet API, and querying the ontology using the SPARQL language. The results are sensitive to both spelling and synonymous context. This approach provides more accurate results and the complete details for the selected field in a single page.
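
    The three-part search model above (spell check, synonym expansion, ontology query) can be sketched as follows. In this minimal Python sketch, the standard-library difflib stands in for the spell-checker, a toy dictionary stands in for the WordNet API, and a list of instances stands in for the SPARQL query over the ontology; the vocabulary, synonyms, and instances are all invented for illustration.

```python
# Toy stand-ins for the paper's components: difflib for spell check,
# a dict for WordNet synonyms, a list of dicts for the ontology.
import difflib

VOCAB = ["lecturer", "course", "department"]
SYNONYMS = {"lecturer": ["teacher", "professor"]}
ONTOLOGY = [
    {"type": "lecturer", "name": "Dr. Rao"},
    {"type": "course", "name": "Semantic Web"},
]

def search(term):
    # 1. Spell check: snap the term to the closest known vocabulary entry.
    match = difflib.get_close_matches(term.lower(), VOCAB, n=1)
    if not match:
        return []
    term = match[0]
    # 2. Synonym expansion (the WordNet API in the paper).
    terms = {term, *SYNONYMS.get(term, [])}
    # 3. Query: filter instances whose type matches any expanded term
    #    (the paper issues a SPARQL query against the ontology instead).
    return [e["name"] for e in ONTOLOGY if e["type"] in terms]

print(search("lecturar"))  # misspelling still resolves to lecturer results
```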

  4. Language Outcomes in Deaf or Hard of Hearing Teenagers Who Are Spoken Language Users: Effects of Universal Newborn Hearing Screening and Early Confirmation.

    Science.gov (United States)

    Pimperton, Hannah; Kreppner, Jana; Mahon, Merle; Stevenson, Jim; Terlektsi, Emmanouela; Worsfold, Sarah; Yuen, Ho Ming; Kennedy, Colin R

    This study aimed to examine whether (a) exposure to universal newborn hearing screening (UNHS) and (b) early confirmation of hearing loss were associated with benefits to expressive and receptive language outcomes in the teenage years for a cohort of spoken language users. It also aimed to determine whether either of these two variables was associated with benefits to relative language gain from middle childhood to adolescence within this cohort. The participants were drawn from a prospective cohort study of a population sample of children with bilateral permanent childhood hearing loss, who varied in their exposure to UNHS and who had previously had their language skills assessed at 6-10 years. Sixty deaf or hard of hearing teenagers who were spoken language users and a comparison group of 38 teenagers with normal hearing completed standardized measures of their receptive and expressive language ability at 13-19 years. Teenagers exposed to UNHS did not show significantly better expressive (adjusted mean difference, 0.40; 95% confidence interval [CI], -0.26 to 1.05; d = 0.32) or receptive (adjusted mean difference, 0.68; 95% CI, -0.56 to 1.93; d = 0.28) language skills than those who were not. Those who had their hearing loss confirmed by 9 months of age did not show significantly better expressive (adjusted mean difference, 0.43; 95% CI, -0.20 to 1.05; d = 0.35) or receptive (adjusted mean difference, 0.95; 95% CI, -0.22 to 2.11; d = 0.42) language skills than those who had it confirmed later. In all cases, effect sizes were small and in favor of those exposed to UNHS or confirmed by 9 months. Subgroup analysis indicated larger beneficial effects of early confirmation for those deaf or hard of hearing teenagers without cochlear implants (N = 48; 80% of the sample), and these benefits were significant in the case of receptive language outcomes (adjusted mean difference, 1.55; 95% CI, 0.38 to 2.71; d = 0.78). 
Exposure to UNHS did not account for significant

  5. Bilateral Versus Unilateral Cochlear Implants in Children: A Study of Spoken Language Outcomes

    Science.gov (United States)

    Harris, David; Bennet, Lisa; Bant, Sharyn

    2014-01-01

    Objectives: Although it has been established that bilateral cochlear implants (CIs) offer additional speech perception and localization benefits to many children with severe to profound hearing loss, whether these improved perceptual abilities facilitate significantly better language development has not yet been clearly established. The aims of this study were to compare language abilities of children having unilateral and bilateral CIs to quantify the rate of any improvement in language attributable to bilateral CIs and to document other predictors of language development in children with CIs. Design: The receptive vocabulary and language development of 91 children was assessed when they were aged either 5 or 8 years old by using the Peabody Picture Vocabulary Test (fourth edition), and either the Preschool Language Scales (fourth edition) or the Clinical Evaluation of Language Fundamentals (fourth edition), respectively. Cognitive ability, parent involvement in children’s intervention or education programs, and family reading habits were also evaluated. Language outcomes were examined by using linear regression analyses. The influence of elements of parenting style, child characteristics, and family background as predictors of outcomes were examined. Results: Children using bilateral CIs achieved significantly better vocabulary outcomes and significantly higher scores on the Core and Expressive Language subscales of the Clinical Evaluation of Language Fundamentals (fourth edition) than did comparable children with unilateral CIs. Scores on the Preschool Language Scales (fourth edition) did not differ significantly between children with unilateral and bilateral CIs. Bilateral CI use was found to predict significantly faster rates of vocabulary and language development than unilateral CI use; the magnitude of this effect was moderated by child age at activation of the bilateral CI. In terms of parenting style, high levels of parental involvement, low amounts of

  6. Bilateral versus unilateral cochlear implants in children: a study of spoken language outcomes.

    Science.gov (United States)

    Sarant, Julia; Harris, David; Bennet, Lisa; Bant, Sharyn

    2014-01-01

    Although it has been established that bilateral cochlear implants (CIs) offer additional speech perception and localization benefits to many children with severe to profound hearing loss, whether these improved perceptual abilities facilitate significantly better language development has not yet been clearly established. The aims of this study were to compare language abilities of children having unilateral and bilateral CIs to quantify the rate of any improvement in language attributable to bilateral CIs and to document other predictors of language development in children with CIs. The receptive vocabulary and language development of 91 children was assessed when they were aged either 5 or 8 years old by using the Peabody Picture Vocabulary Test (fourth edition), and either the Preschool Language Scales (fourth edition) or the Clinical Evaluation of Language Fundamentals (fourth edition), respectively. Cognitive ability, parent involvement in children's intervention or education programs, and family reading habits were also evaluated. Language outcomes were examined by using linear regression analyses. The influence of elements of parenting style, child characteristics, and family background as predictors of outcomes were examined. Children using bilateral CIs achieved significantly better vocabulary outcomes and significantly higher scores on the Core and Expressive Language subscales of the Clinical Evaluation of Language Fundamentals (fourth edition) than did comparable children with unilateral CIs. Scores on the Preschool Language Scales (fourth edition) did not differ significantly between children with unilateral and bilateral CIs. Bilateral CI use was found to predict significantly faster rates of vocabulary and language development than unilateral CI use; the magnitude of this effect was moderated by child age at activation of the bilateral CI. 
In terms of parenting style, high levels of parental involvement, low amounts of screen time, and more time spent

  7. Retrieving Semantic and Syntactic Word Properties: ERP Studies on the Time Course in Language Comprehension

    OpenAIRE

    Müller, O.

    2006-01-01

    The present doctoral thesis investigates the temporal characteristics of the retrieval of semantic and syntactic word properties in language comprehension. In particular, an attempt is made to assess the retrieval order of semantic category and grammatical gender information, using the lateralized readiness potential and the inhibition-related N2 effect. Chapter 1 contains a general introduction. Chapter 2 reports an experiment that employs the two-choice go/nogo task in combination with EEG ...

  8. Training Of Manual Actions Improves Language Understanding of Semantically-Related Action Sentences

    Directory of Open Access Journals (Sweden)

    Matteo Locatelli

    2012-12-01

    Conceptual knowledge accessed by language may involve the re-activation of the associated primary sensory-motor processes. Whether these embodied representations are indeed constitutive to conceptual knowledge is hotly debated, particularly since direct evidence that sensory-motor expertise can improve conceptual processing is scarce. In this study, we sought this crucial piece of evidence by training naive healthy subjects to perform complex manual actions and by measuring, before and after training, their performance in a semantic language task. 19 participants engaged in 3 weeks of motor training. Each participant was trained in 3 complex manual actions (e.g., origami). Before and after the training period, each subject underwent a series of manual dexterity tests and a semantic language task. The latter consisted of a sentence-picture semantic congruency judgment task, with 6 target congruent sentence-picture pairs (semantically related to the trained manual actions), 6 non-target congruent pairs (semantically unrelated), and 12 filler incongruent pairs. Manual action training induced a significant improvement in all manual dexterity tests, demonstrating the successful acquisition of sensory-motor expertise. In the semantic language task, the reaction times to both target and non-target congruent sentence-image pairs decreased after action training, indicating more efficient conceptual-semantic processing. Notably, the reaction times for target pairs decreased more than those for non-target pairs, as indicated by the 2 × 2 interaction. These results were confirmed when controlling for the potential bias of increased frequency of use of target lexical items during manual training. The results of the present study suggest that sensory-motor expertise gained by training specific manual actions can lead to an improvement of cognitive-linguistic skills related to the specific conceptual-semantic domain associated with the trained actions.

  9. Yearly Data for Spoken Language Preferences of Supplemental Security Income Blind and Disabled Applicants (2016 Onwards)

    Data.gov (United States)

    Social Security Administration — This data set provides annual volumes for language preferences at the national level of individuals filing claims for SSI Blind and Disabled benefits from federal...

  10. Yearly Data for Spoken Language Preferences of Supplemental Security Income (Blind & Disabled) (2011-2015)

    Data.gov (United States)

    Social Security Administration — This data set provides annual volumes for language preferences at the national level of individuals filing claims for SSI Blind and Disabled benefits for federal fiscal years...

  11. Yearly Data for Spoken Language Preferences of Supplemental Security Income Aged Applicants (2011-Onward)

    Data.gov (United States)

    Social Security Administration — This data set provides annual volumes for language preferences at the national level of individuals filing claims for SSI Aged benefits from federal fiscal year 2011...

  12. Yearly Data for Spoken Language Preferences of Social Security Disability Insurance Claimants (2011-2015)

    Data.gov (United States)

    Social Security Administration — This data set provides annual volumes for language preferences at the national level of individuals filing claims for Social Security Disability Insurance benefits for federal fiscal years...

  13. Yearly Data for Spoken Language Preferences of End Stage Renal Disease Medicare Claimants (2011-2015)

    Data.gov (United States)

    Social Security Administration — This data set provides annual volumes for language preferences at the national level of individuals filing claims for ESRD Medicare benefits for federal fiscal year...

  14. Quarterly Data for Spoken Language Preferences of Supplemental Security Income Aged Applicants (2014-2015)

    Data.gov (United States)

    Social Security Administration — This data set provides quarterly volumes for language preferences at the national level of individuals filing claims for SSI Aged benefits for fiscal years 2014 -...

  15. Quarterly Data for Spoken Language Preferences of End Stage Renal Disease Medicare Claimants (2014-2015)

    Data.gov (United States)

    Social Security Administration — This data set provides quarterly volumes for language preferences at the national level of individuals filing claims for ESRD Medicare benefits for fiscal years 2014...

  16. Yearly Data for Spoken Language Preferences of End Stage Renal Disease Medicare Claimants (2016 Onwards)

    Data.gov (United States)

    Social Security Administration — This data set provides annual volumes for language preferences at the national level of individuals filing claims for ESRD Medicare benefits for federal fiscal year...

  17. Effects of Semantic Context and Fundamental Frequency Contours on Mandarin Speech Recognition by Second Language Learners.

    Science.gov (United States)

    Zhang, Linjun; Li, Yu; Wu, Han; Li, Xin; Shu, Hua; Zhang, Yang; Li, Ping

    2016-01-01

    Speech recognition by second language (L2) learners in optimal and suboptimal conditions has been examined extensively with English as the target language in most previous studies. This study extended existing experimental protocols (Wang et al., 2013) to investigate Mandarin speech recognition by Japanese learners of Mandarin at two different levels (elementary vs. intermediate) of proficiency. The overall results showed that in addition to L2 proficiency, semantic context, F0 contours, and listening condition all affected recognition performance on the Mandarin sentences. However, the effects of semantic context and F0 contours on L2 speech recognition diverged to some extent. Specifically, there was a significant modulation effect of listening condition on semantic context, indicating that L2 learners made use of semantic context less efficiently in the interfering background than in quiet. In contrast, no significant modulation effect of listening condition on F0 contours was found. Furthermore, there was a significant interaction between semantic context and F0 contours, indicating that semantic context becomes more important for L2 speech recognition when F0 information is degraded. None of these effects was found to be modulated by L2 proficiency. The discrepancy in the effects of semantic context and F0 contours on L2 speech recognition in the interfering background might be related to differences in the processing capacities required by the two types of information in adverse listening conditions.

  18. Semantic markup of nouns and adjectives for the Electronic corpus of texts in Tuvan language

    Directory of Open Access Journals (Sweden)

    Bajlak Ch. Oorzhak

    2016-12-01

    The article examines the progress of semantic markup of the Electronic corpus of texts in Tuvan language (ECTTL), another stage of adding Tuvan texts to the database and marking up the corpus. ECTTL is a collaborative project by researchers from Tuvan State University (Research and Education Center of Turkic Studies and Department of Information Technologies). Semantic markup of Tuvan lexis will underpin a search engine and reference system that helps users find text snippets containing words with desired meanings in ECTTL. The first stage of this process is setting up databases of basic lexemes of the Tuvan language. All meaningful lexemes were classified into the following semantic groups: humans, animals, objects, natural objects and phenomena, and abstract concepts. All Tuvan object nouns, as well as both descriptive and relative adjectives, were assigned to one of these lexico-semantic classes. Each class, sub-class and descriptor is tagged in Tuvan, Russian and English; these tags, in turn, will help automate searching. The databases of meaningful lexemes of the Tuvan language will also outline their lexical combinations. The automated system will contain information on semantic combinations of adjectives with nouns, adverbs with verbs, and nouns with verbs, as well as on combinations which are semantically incompatible.
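
    The kind of class-based search this markup is meant to support can be sketched as follows. In this Python sketch, only the five semantic group names come from the article; the lexemes and corpus positions are hypothetical placeholders, not entries from the actual ECTTL databases.

```python
# Semantic groups as named in the article; lexeme assignments are
# hypothetical placeholders for illustration.
SEMANTIC_GROUPS = ["humans", "animals", "objects",
                   "natural objects and phenomena", "abstract concepts"]

LEXEME_CLASS = {
    "lex_person": "humans",
    "lex_horse": "animals",
    "lex_river": "natural objects and phenomena",
}

# A marked-up corpus as (lexeme, token position) pairs.
CORPUS = [("lex_horse", 0), ("lex_person", 1), ("lex_horse", 2), ("lex_river", 3)]

def find_by_class(semantic_class):
    """Return corpus positions of tokens whose lexeme belongs to the class."""
    assert semantic_class in SEMANTIC_GROUPS
    return [pos for lex, pos in CORPUS if LEXEME_CLASS.get(lex) == semantic_class]

print(find_by_class("animals"))  # positions of tokens tagged as animals
```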

  19. How Spoken Language Comprehension is Achieved by Older Listeners in Difficult Listening Situations.

    Science.gov (United States)

    Schneider, Bruce A; Avivi-Reich, Meital; Daneman, Meredyth

    2016-01-01

    Comprehending spoken discourse in noisy situations is likely to be more challenging to older adults than to younger adults due to potential declines in the auditory, cognitive, or linguistic processes supporting speech comprehension. These challenges might force older listeners to reorganize the ways in which they perceive and process speech, thereby altering the balance between the contributions of bottom-up versus top-down processes to speech comprehension. The authors review studies that investigated the effect of age on listeners' ability to follow and comprehend lectures (monologues), and two-talker conversations (dialogues), and the extent to which individual differences in lexical knowledge and reading comprehension skill relate to individual differences in speech comprehension. Comprehension was evaluated after each lecture or conversation by asking listeners to answer multiple-choice questions regarding its content. Once individual differences in speech recognition for words presented in babble were compensated for, age differences in speech comprehension were minimized if not eliminated. However, younger listeners benefited more from spatial separation than did older listeners. Vocabulary knowledge predicted the comprehension scores of both younger and older listeners when listening was difficult, but not when it was easy. However, the contribution of reading comprehension to listening comprehension appeared to be independent of listening difficulty in younger adults but not in older adults. The evidence suggests (1) that most of the difficulties experienced by older adults are due to age-related auditory declines, and (2) that these declines, along with listening difficulty, modulate the degree to which selective linguistic and cognitive abilities are engaged to support listening comprehension in difficult listening situations. 
When older listeners experience speech recognition difficulties, their attentional resources are more likely to be deployed to

  20. Case report: acquisition of three spoken languages by a child with a cochlear implant.

    Science.gov (United States)

    Francis, Alexander L; Ho, Diana Wai Lam

    2003-03-01

    There have been only two reports of multilingual cochlear implant users to date, and both of these were postlingually deafened adults. Here we report the case of a 6-year-old early-deafened child who is acquiring Cantonese, English and Mandarin in Hong Kong. He and two age-matched peers with similar educational backgrounds were tested using common, standardized tests of vocabulary and expressive and receptive language skills (Peabody Picture Vocabulary Test (Revised) and Reynell Developmental Language Scales version II). Results show that this child is acquiring Cantonese, English and Mandarin to a degree comparable to two classmates with normal hearing and similar educational and social backgrounds.

  1. Assessing Spoken Language Competence in Children with Selective Mutism: Using Parents as Test Presenters

    Science.gov (United States)

    Klein, Evelyn R.; Armstrong, Sharon Lee; Shipon-Blum, Elisa

    2013-01-01

    Children with selective mutism (SM) display a failure to speak in select situations despite speaking when comfortable. The purpose of this study was to obtain valid assessments of receptive and expressive language in 33 children (ages 5 to 12) with SM. Because some children with SM will speak to parents but not a professional, another purpose was…

  2. Cross-Sensory Correspondences and Symbolism in Spoken and Written Language

    Science.gov (United States)

    Walker, Peter

    2016-01-01

    Lexical sound symbolism in language appears to exploit the feature associations embedded in cross-sensory correspondences. For example, words incorporating relatively high acoustic frequencies (i.e., front/close rather than back/open vowels) are deemed more appropriate as names for concepts associated with brightness, lightness in weight,…

  3. Emergent Literacy Skills in Preschool Children With Hearing Loss Who Use Spoken Language: Initial Findings From the Early Language and Literacy Acquisition (ELLA) Study.

    Science.gov (United States)

    Werfel, Krystal L

    2017-10-05

    The purpose of this study was to compare change in emergent literacy skills of preschool children with and without hearing loss over a 6-month period. Participants included 19 children with hearing loss and 14 children with normal hearing. Children with hearing loss used amplification and spoken language. Participants completed measures of oral language, phonological processing, and print knowledge twice at a 6-month interval. A series of repeated-measures analyses of variance were used to compare change across groups. Main effects of time were observed for all variables except phonological recoding. Main effects of group were observed for vocabulary, morphosyntax, phonological memory, and concepts of print. Interaction effects were observed for phonological awareness and concepts of print. Children with hearing loss performed more poorly than children with normal hearing on measures of oral language, phonological memory, and conceptual print knowledge. Two interaction effects were present. For phonological awareness and concepts of print, children with hearing loss demonstrated less positive change than children with normal hearing. Although children with hearing loss generally demonstrated a positive growth in emergent literacy skills, their initial performance was lower than that of children with normal hearing, and rates of change were not sufficient to catch up to the peers over time.

  4. Grammatical number processing and anticipatory eye movements are not tightly coordinated in English spoken language comprehension

    Directory of Open Access Journals (Sweden)

    Brian eRiordan

    2015-05-01

Full Text Available Recent studies of eye movements in world-situated language comprehension have demonstrated that rapid processing of morphosyntactic information – e.g., grammatical gender and number marking – can produce anticipatory eye movements to referents in the visual scene. We investigated how type of morphosyntactic information and the goals of language users in comprehension affected eye movements, focusing on the processing of grammatical number morphology in English-speaking adults. Participants’ eye movements were recorded as they listened to simple English declarative (There are the lions.) and interrogative (Where are the lions?) sentences. In Experiment 1, no differences were observed in speed to fixate target referents when grammatical number information was informative relative to when it was not. The same result was obtained in a speeded task (Experiment 2) and in a task using mixed sentence types (Experiment 3). We conclude that grammatical number processing in English and eye movements to potential referents are not tightly coordinated. These results suggest limits on the role of predictive eye movements in concurrent linguistic and scene processing. We discuss how these results can inform and constrain predictive approaches to language processing.

  5. EVALUATION OF SEMANTIC SIMILARITY FOR SENTENCES IN NATURAL LANGUAGE BY MATHEMATICAL STATISTICS METHODS

    Directory of Open Access Journals (Sweden)

    A. E. Pismak

    2016-03-01

Full Text Available Subject of Research. The paper is focused on the structural organization of Wiktionary articles in the aspect of their usage as the base for a semantic network. Wiktionary community references, article templates and article markup features are analyzed. The problem of numerical estimation of semantic similarity for structural elements in Wiktionary articles is considered. An analysis of existing software for semantic similarity estimation of such elements is carried out; the algorithms of their functioning are studied; their advantages and disadvantages are shown. Methods. Mathematical statistics methods were used to analyze Wiktionary article markup features. A method of semantic similarity computation based on statistical data for the compared structural elements is proposed. Main Results. We have concluded that Wiktionary articles cannot be used directly as the source for a semantic network. We propose to find hidden similarity between article elements; for that purpose we have developed an algorithm that calculates confidence coefficients indicating that a pair of sentences is semantically near. The study of quantitative and qualitative characteristics of the developed algorithm has shown a major performance advantage over the other existing solutions, at the cost of a slightly higher error rate. Practical Relevance. The resulting algorithm may be useful in developing tools for automatic parsing of Wiktionary articles. The developed method can be used to compute semantic similarity for short text fragments in natural language when performance requirements are stricter than accuracy requirements.
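The confidence-coefficient idea in the abstract above can be illustrated with a minimal sketch. This is not the authors' algorithm: the token-overlap score and the threshold below are invented stand-ins for their statistical estimates.

```python
# Illustrative only: score how "semantically near" two sentences are from
# simple token statistics, and keep the pairs whose confidence clears a
# threshold. The real method uses Wiktionary markup statistics.

def confidence(sentence_a: str, sentence_b: str) -> float:
    """Jaccard overlap of lowercased token sets, in [0, 1]."""
    a = set(sentence_a.lower().split())
    b = set(sentence_b.lower().split())
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

def near_pairs(sentences, threshold=0.5):
    """Return (s, t, confidence) for every pair clearing the threshold."""
    pairs = []
    for i, s in enumerate(sentences):
        for t in sentences[i + 1:]:
            c = confidence(s, t)
            if c >= threshold:
                pairs.append((s, t, c))
    return pairs
```

Raising the threshold trades recall for precision, which mirrors the performance-versus-accuracy trade-off the abstract mentions.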

  6. Merleau-Ponty's Phenomenology of Language and General Semantics.

    Science.gov (United States)

    Lapointe, Francois H.

    A survey of Maurice Merleau-Ponty's views on the phenomenology of language yields insight into the basic semiotic nature of language. Merleau-ponty's conceptions stand in opposition to Saussure's linguistic postulations and Korzybski's scientism. That is, if language is studied phenomenologically, the acts of speech and gesture take on greater…

  7. Semantic processing skills of Grade 1 English language learners in ...

    African Journals Online (AJOL)

    This paper reports on part of the first phase of a longitudinal project investigating the development of academic language in English as the Language of Teaching and Learning (LoLT) by Foundation phase learners in two different educational contexts. In the first context, the learners were all English additional language ...

  8. Iconic Factors and Language Word Order

    Science.gov (United States)

    Moeser, Shannon Dawn

    1975-01-01

    College students were presented with an artificial language in which spoken nonsense words were correlated with visual references. Inferences regarding vocabulary acquisition were drawn, and it was suggested that the processing of the language was mediated through a semantic memory system. (CK)

  9. A Metadata Model for E-Learning Coordination through Semantic Web Languages

    Science.gov (United States)

    Elci, Atilla

    2005-01-01

This paper reports on a study aiming to develop a metadata model for e-learning coordination based on semantic web languages. A survey of e-learning modes is done initially in order to identify content such as phases, activities, data schemas, rules, and relations relevant for a coordination model. In this respect, the study looks into the…

  10. Semantic abilities in children with pragmatic language impairment: the case of picture naming skills

    NARCIS (Netherlands)

    Ketelaars, M.P.; Hermans, S.I.A.; Cuperus, J.; Jansonius, K.; Verhoeven, L.

    2011-01-01

Purpose: The semantic abilities of children with pragmatic language impairment (PLI) are subject to debate. The authors investigated picture naming and definition skills in 5-year-olds with PLI in comparison to typically developing children. Method: 84 children with PLI and 80 age-matched typically…

  11. Time course of Chinese monosyllabic spoken word recognition: evidence from ERP analyses.

    Science.gov (United States)

    Zhao, Jingjing; Guo, Jingjing; Zhou, Fengying; Shu, Hua

    2011-06-01

Evidence from event-related potential (ERP) analyses of English spoken words suggests that the time course of English word recognition in monosyllables is cumulative. Different types of phonological competitors (i.e., rhymes and cohorts) modulate the temporal grain of ERP components differentially (Desroches, Newman, & Joanisse, 2009). The time course of Chinese monosyllabic spoken word recognition could be different from that of English due to the differences in syllable structure between the two languages (e.g., lexical tones). The present study investigated the time course of Chinese monosyllabic spoken word recognition using ERPs to record brain responses online while subjects listened to spoken words. During the experiment, participants were asked to compare a target picture with a subsequent picture by judging whether or not these two pictures belonged to the same semantic category. The spoken word was presented between the two pictures, and participants were not required to respond during its presentation. We manipulated phonological competition by presenting spoken words that either matched or mismatched the target picture in one of the following four ways: onset mismatch, rime mismatch, tone mismatch, or syllable mismatch. In contrast to the English findings, our results showed that the three partial mismatches (onset, rime, and tone mismatches) equally modulated the amplitudes and time courses of the N400 (a negative component that peaks about 400 ms after the spoken word), whereas the syllable-mismatched words elicited an earlier and stronger N400 than the three partially mismatched words. The results shed light on the important role of syllable-level awareness in Chinese spoken word recognition and also imply that the recognition of Chinese monosyllabic words might rely more on global similarity of the whole syllable structure, or syllable-based holistic processing, than on phonemic segment-based processing.
We interpret the differences in spoken word…

  12. Human inferior colliculus activity relates to individual differences in spoken language learning.

    Science.gov (United States)

    Chandrasekaran, Bharath; Kraus, Nina; Wong, Patrick C M

    2012-03-01

    A challenge to learning words of a foreign language is encoding nonnative phonemes, a process typically attributed to cortical circuitry. Using multimodal imaging methods [functional magnetic resonance imaging-adaptation (fMRI-A) and auditory brain stem responses (ABR)], we examined the extent to which pretraining pitch encoding in the inferior colliculus (IC), a primary midbrain structure, related to individual variability in learning to successfully use nonnative pitch patterns to distinguish words in American English-speaking adults. fMRI-A indexed the efficiency of pitch representation localized to the IC, whereas ABR quantified midbrain pitch-related activity with millisecond precision. In line with neural "sharpening" models, we found that efficient IC pitch pattern representation (indexed by fMRI) related to superior neural representation of pitch patterns (indexed by ABR), and consequently more successful word learning following sound-to-meaning training. Our results establish a critical role for the IC in speech-sound representation, consistent with the established role for the IC in the representation of communication signals in other animal models.

  13. SNOT-22: psychometric properties and cross-cultural adaptation into the Portuguese language spoken in Brazil.

    Science.gov (United States)

    Caminha, Guilherme Pilla; Melo Junior, José Tavares de; Hopkins, Claire; Pizzichini, Emilio; Pizzichini, Marcia Margaret Menezes

    2012-12-01

    Rhinosinusitis is a highly prevalent disease and a major cause of high medical costs. It has been proven to have an impact on the quality of life through generic health-related quality of life assessments. However, generic instruments may not be able to factor in the effects of interventions and treatments. SNOT-22 is a major disease-specific instrument to assess quality of life for patients with rhinosinusitis. Nevertheless, there is still no validated SNOT-22 version in our country. Cross-cultural adaptation of the SNOT-22 into Brazilian Portuguese and assessment of its psychometric properties. The Brazilian version of the SNOT-22 was developed according to international guidelines and was broken down into nine stages: 1) Preparation 2) Translation 3) Reconciliation 4) Back-translation 5) Comparison 6) Evaluation by the author of the SNOT-22 7) Revision by committee of experts 8) Cognitive debriefing 9) Final version. Second phase: prospective study consisting of a verification of the psychometric properties, by analyzing internal consistency and test-retest reliability. Cultural adaptation showed adequate understanding, acceptability and psychometric properties. We followed the recommended steps for the cultural adaptation of the SNOT-22 into Portuguese language, producing a tool for the assessment of patients with sinonasal disorders of clinical importance and for scientific studies.

  14. The Cost of Switching Language in a Semantic Categorization Task.

    Science.gov (United States)

    von Studnitz, Roswitha E.; Green, David W.

    2002-01-01

    Presents a study in which German-English bilinguals decided whether a visually presented word, either German or English, referred to an animate or to an inanimate entity. Bilinguals were slower to respond on a language switch trial than on language non-switch trials but only if they had to make the same response as on the prior trial. (Author/VWL)

  15. Assessing the Language of the Jos Crises: Syntactico-Semantic ...

    African Journals Online (AJOL)

    Language signals diverse kinds of meaning in interpersonal and social relationships: it could express distance, exclusion, and alienation instead of friendship, inclusion and rapport. As a ready tool which can be manipulated to accommodate different communication needs, language is invaluable in dictating the dominant ...

  16. The semantics of Chemical Markup Language (CML): dictionaries and conventions

    Science.gov (United States)

    2011-01-01

    The semantic architecture of CML consists of conventions, dictionaries and units. The conventions conform to a top-level specification and each convention can constrain compliant documents through machine-processing (validation). Dictionaries conform to a dictionary specification which also imposes machine validation on the dictionaries. Each dictionary can also be used to validate data in a CML document, and provide human-readable descriptions. An additional set of conventions and dictionaries are used to support scientific units. All conventions, dictionaries and dictionary elements are identifiable and addressable through unique URIs. PMID:21999509

  17. The semantics of Chemical Markup Language (CML): dictionaries and conventions.

    Science.gov (United States)

    Murray-Rust, Peter; Townsend, Joe A; Adams, Sam E; Phadungsukanan, Weerapong; Thomas, Jens

    2011-10-14

    The semantic architecture of CML consists of conventions, dictionaries and units. The conventions conform to a top-level specification and each convention can constrain compliant documents through machine-processing (validation). Dictionaries conform to a dictionary specification which also imposes machine validation on the dictionaries. Each dictionary can also be used to validate data in a CML document, and provide human-readable descriptions. An additional set of conventions and dictionaries are used to support scientific units. All conventions, dictionaries and dictionary elements are identifiable and addressable through unique URIs.
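The dictionary-validation idea described in this record can be sketched roughly as follows. The dictionary URIs and reference names are invented for illustration; real CML validation operates on XML documents against formal conventions and dictionary specifications.

```python
# Hypothetical sketch of CML-style dictionary validation: each data item in
# a document carries a dictionary reference URI, and validation flags any
# reference that does not resolve to a known dictionary entry.
# The URIs below are made up, not real CML dictionaries.

KNOWN_ENTRIES = {
    "http://example.org/cml/dict/core#molecularMass",
    "http://example.org/cml/dict/core#meltingPoint",
}

def validate_dict_refs(dict_refs):
    """Return the references that fail to resolve to a dictionary entry."""
    return [ref for ref in dict_refs if ref not in KNOWN_ENTRIES]
```

Because every convention, dictionary, and entry is addressable by a unique URI, this resolution step reduces to set membership over those URIs.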

  18. FUNCTIONAL AND SEMANTIC PROPERTIES OF LOANWORDS IN THE RUSSIAN LANGUAGE (BASED ON HYPERTEXTS

    Directory of Open Access Journals (Sweden)

    Novikov Vladimir Borisovich

    2015-06-01

Full Text Available The author studies the functional and semantic properties of foreign-language nouns attested in the written rendering of oral speech in computer-mediated communication, taking into account ongoing debates about the boundaries of the notion of a loanword, the reasons foreign words penetrate the Russian language, and the classifications of loanwords used in the linguistic literature. The material (500 foreign-language nouns) was selected by continuous sampling of online texts posted on social networks, news portals and various forums. It is established that the loanwords used in hypertexts reflect the renewal of lexical means by generating words that refer to new and current phenomena; penetrate into the Russian language along with the borrowed thing or notion; and generate parallels to existing names (with doublets avoided through semantic and stylistic differentiation of the borrowed unit and the unit already existing in the recipient language). The analysis of the lexical content of the loanwords revealed that the most numerous LSGs are the Technology LSG, which unites the names of technical devices, and the Art and Evaluation LSGs. The article shows that foreign-language nouns are used in hypertexts in communicative, nominative, emotive, and metalinguistic functions, while such lexemes do not participate in the implementation of the regulatory and phatic functions.

  19. Conceptual representation of verbs in bilinguals: semantic field effects and a second-language performance paradox.

    Science.gov (United States)

    Segalowitz, Norman; de Almeida, Roberto G

    2002-01-01

It is well known that bilinguals perform better in their first language (L1) than in their second language (L2) in a wide range of linguistic tasks. In recent studies, however, the authors have found that bilingual participants can demonstrate faster response times to L1 stimuli than to L2 stimuli in one classification task and the reverse in a different classification task. In the current study, they investigated the reasons for this "L2-better-than-L1" effect. English-French bilinguals performed one word relatedness and two categorization tasks with verbs of motion (e.g., run) and psychological verbs (e.g., admire) in both languages. In the word relatedness task, participants judged how closely related pairs of verbs from both categories were. In a speeded semantic categorization task, participants classified the verbs according to their semantic category (psychological or motion). In an arbitrary classification task, participants had to learn how verbs had been assigned to two arbitrary categories. Participants performed better in L1 in the semantic classification task but paradoxically better in L2 in the arbitrary classification task. To account for these effects, the authors used the ratings from the word relatedness task to plot three-dimensional "semantic fields" for the verbs. Cross-language field differences were found to be significantly related to the paradoxical performance and to fluency levels. The results have implications for understanding how bilinguals represent verbs in the mental lexicon. Copyright 2002 Elsevier Science (USA).

  20. A grammar-based semantic similarity algorithm for natural language sentences.

    Science.gov (United States)

    Lee, Ming Che; Chang, Jia Wei; Hsieh, Tung Cheng

    2014-01-01

    This paper presents a grammar and semantic corpus based similarity algorithm for natural language sentences. Natural language, in opposition to "artificial language", such as computer programming languages, is the language used by the general public for daily communication. Traditional information retrieval approaches, such as vector models, LSA, HAL, or even the ontology-based approaches that extend to include concept similarity comparison instead of cooccurrence terms/words, may not always determine the perfect matching while there is no obvious relation or concept overlap between two natural language sentences. This paper proposes a sentence similarity algorithm that takes advantage of corpus-based ontology and grammatical rules to overcome the addressed problems. Experiments on two famous benchmarks demonstrate that the proposed algorithm has a significant performance improvement in sentences/short-texts with arbitrary syntax and structure.
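A rough illustration of sentence-similarity scoring in this spirit follows. It is not the authors' algorithm, which relies on grammatical rules and a corpus-based ontology rather than surface overlap, and the weighting factor `alpha` is an assumed value.

```python
# Toy sentence similarity: a lexical cosine term plus a crude word-order
# term, blended by an assumed weight. Stands in for (but does not
# reproduce) grammar- and ontology-based similarity.

from collections import Counter
from math import sqrt

def cosine(s1: str, s2: str) -> float:
    """Cosine similarity over bag-of-words counts."""
    a, b = Counter(s1.lower().split()), Counter(s2.lower().split())
    dot = sum(a[w] * b[w] for w in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def order_similarity(s1: str, s2: str) -> float:
    """Fraction of shared words appearing in the same relative order."""
    w1, w2 = s1.lower().split(), s2.lower().split()
    shared = [w for w in w1 if w in w2]
    if len(shared) < 2:
        return 1.0  # too few shared words to penalize order
    positions = [w2.index(w) for w in shared]
    in_order = sum(1 for x, y in zip(positions, positions[1:]) if x < y)
    return in_order / (len(positions) - 1)

def sentence_similarity(s1: str, s2: str, alpha: float = 0.8) -> float:
    return alpha * cosine(s1, s2) + (1 - alpha) * order_similarity(s1, s2)
```

The ontology-based component the paper describes would replace the exact-match lexical term with concept-level similarity, which is exactly what lets it handle sentence pairs with no surface word overlap.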

  1. Teaching natural language to computers

    OpenAIRE

    Corneli, Joseph; Corneli, Miriam

    2016-01-01

    "Natural Language," whether spoken and attended to by humans, or processed and generated by computers, requires networked structures that reflect creative processes in semantic, syntactic, phonetic, linguistic, social, emotional, and cultural modules. Being able to produce novel and useful behavior following repeated practice gets to the root of both artificial intelligence and human language. This paper investigates the modalities involved in language-like applications that computers -- and ...

  2. The semantic associative ability in preschoolers with different age of language onset

    Directory of Open Access Journals (Sweden)

    Dina Di Giacomo

    2016-07-01

Full Text Available The aim of the study is to verify the semantic associative abilities of children with different language onset times: early, typical, and delayed talkers. The study was conducted on a sample of 74 preschool children who performed a Perceptual Associative Task in order to evaluate the ability to link concepts by four associative strategies (function, part/whole, contiguity, and superordinate). The results showed that the children with delayed language onset performed significantly better than the children with early language production; no difference was found between the typical and delayed language groups. The children with early language onset showed weakness in the flexibility with which they elaborate concepts, whereas the typical and delayed language onset groups overlapped in associative performance. The time of language onset appears to be a predictive factor in the use of semantic associative strategies: early talkers may present a slower pattern of conceptual processing, whereas typical and late talkers may have protective factors.

  3. A chemical specialty semantic network for the Unified Medical Language System

    Directory of Open Access Journals (Sweden)

    Morrey C

    2012-05-01

Full Text Available Abstract Background Terms representing chemical concepts found in the Unified Medical Language System (UMLS) are used to derive an expanded semantic network with mutually exclusive semantic types. The UMLS Semantic Network (SN) is composed of a collection of broad categories called semantic types (STs) that are assigned to concepts. Within the UMLS’s coverage of the chemical domain, we find many concepts assigned more than one ST. This leads to the situation where the extent of a given ST may contain concepts elaborating variegated semantics. A methodology for expanding the chemical subhierarchy of the SN into a finer-grained categorization of mutually exclusive types with semantically uniform extents is presented. We call this network a Chemical Specialty Semantic Network (CSSN). A CSSN is derived automatically from the existing chemical STs and their assignments. The methodology incorporates a threshold value governing the minimum size of a type’s extent needed for inclusion in the CSSN. Thus, different CSSNs can be created by choosing different threshold values based on varying requirements. Results A complete CSSN is derived using a threshold value of 300 and having 68 STs. It is used effectively to provide high-level categorizations for a random sample of compounds from the “Chemical Entities of Biological Interest” (ChEBI) ontology. The effect on the size of the CSSN using various threshold parameter values between one and 500 is shown. Conclusions The methodology has several potential applications, including its use to derive a pre-coordinated guide for ST assignments to new UMLS chemical concepts, as a tool for auditing existing concepts, inter-terminology mapping, and to serve as an upper-level network for ChEBI.
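The threshold-governed derivation described in this record can be caricatured as grouping concepts by their combination of assigned semantic types and keeping only combinations whose extent is large enough. The concept and type names below are invented toy data; the real methodology operates over the full UMLS.

```python
# Sketch of the extent-threshold idea: concepts sharing the same
# combination of semantic types form a candidate refined type; only
# combinations whose extent meets a minimum size survive. Extents are
# mutually exclusive by construction, since each concept has exactly
# one type combination.

from collections import defaultdict

def derive_refined_types(assignments, threshold):
    """assignments maps concept -> iterable of semantic types.
    Returns {type combination: extent} for sufficiently large extents."""
    extents = defaultdict(set)
    for concept, types in assignments.items():
        extents[frozenset(types)].add(concept)
    return {combo: ext for combo, ext in extents.items() if len(ext) >= threshold}
```

Varying `threshold` reproduces the trade-off reported in the abstract: a low value yields many small, fine-grained types, while a high value collapses rare combinations out of the network.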

  4. A Grammar-Based Semantic Similarity Algorithm for Natural Language Sentences

    Directory of Open Access Journals (Sweden)

    Ming Che Lee

    2014-01-01

    Full Text Available This paper presents a grammar and semantic corpus based similarity algorithm for natural language sentences. Natural language, in opposition to “artificial language”, such as computer programming languages, is the language used by the general public for daily communication. Traditional information retrieval approaches, such as vector models, LSA, HAL, or even the ontology-based approaches that extend to include concept similarity comparison instead of cooccurrence terms/words, may not always determine the perfect matching while there is no obvious relation or concept overlap between two natural language sentences. This paper proposes a sentence similarity algorithm that takes advantage of corpus-based ontology and grammatical rules to overcome the addressed problems. Experiments on two famous benchmarks demonstrate that the proposed algorithm has a significant performance improvement in sentences/short-texts with arbitrary syntax and structure.

  5. A Grammar-Based Semantic Similarity Algorithm for Natural Language Sentences

    Science.gov (United States)

    Chang, Jia Wei; Hsieh, Tung Cheng

    2014-01-01

    This paper presents a grammar and semantic corpus based similarity algorithm for natural language sentences. Natural language, in opposition to “artificial language”, such as computer programming languages, is the language used by the general public for daily communication. Traditional information retrieval approaches, such as vector models, LSA, HAL, or even the ontology-based approaches that extend to include concept similarity comparison instead of cooccurrence terms/words, may not always determine the perfect matching while there is no obvious relation or concept overlap between two natural language sentences. This paper proposes a sentence similarity algorithm that takes advantage of corpus-based ontology and grammatical rules to overcome the addressed problems. Experiments on two famous benchmarks demonstrate that the proposed algorithm has a significant performance improvement in sentences/short-texts with arbitrary syntax and structure. PMID:24982952

  6. Early use of orthographic information in spoken word recognition: Event-related potential evidence from the Korean language.

    Science.gov (United States)

    Kwon, Youan; Choi, Sungmook; Lee, Yoonhyoung

    2016-04-01

    This study examines whether orthographic information is used during prelexical processes in spoken word recognition by investigating ERPs during spoken word processing for Korean words. Differential effects due to orthographic syllable neighborhood size and sound-to-spelling consistency on P200 and N320 were evaluated by recording ERPs from 42 participants during a lexical decision task. The results indicate that P200 was smaller for words whose orthographic syllable neighbors are large in number rather than those that are small. In addition, a word with a large orthographic syllable neighborhood elicited a smaller N320 effect than a word with a small orthographic syllable neighborhood only when the word had inconsistent sound-to-spelling mapping. The results provide support for the assumption that orthographic information is used early during the prelexical spoken word recognition process. © 2015 Society for Psychophysiological Research.

  7. Computing an Ontological Semantics for a Natural Language Fragment

    DEFF Research Database (Denmark)

    Szymczak, Bartlomiej Antoni

… tried to establish a domain-independent “ontological semantics” for relevant fragments of natural language. The purpose of this research is to develop methods and systems for taking advantage of formal ontologies for the purpose of extracting the meaning contents of texts. This functionality...

  8. A randomized trial comparison of the effects of verbal and pictorial naturalistic communication strategies on spoken language for young children with autism.

    Science.gov (United States)

    Schreibman, Laura; Stahmer, Aubyn C

    2014-05-01

Presently there is no consensus on the specific behavioral treatment of choice for targeting language in young nonverbal children with autism. This randomized clinical trial compared the effectiveness of a verbally-based intervention, Pivotal Response Training (PRT), to a pictorially-based behavioral intervention, the Picture Exchange Communication System (PECS), on the acquisition of spoken language by young (2-4 years), nonverbal or minimally verbal (≤9 words) children with autism. Thirty-nine children were randomly assigned to either the PRT or PECS condition. Participants received on average 247 h of intervention across 23 weeks. Dependent measures included overall communication, expressive vocabulary, pictorial communication and parent satisfaction. Children in both intervention groups demonstrated increases in spoken language skills, with no significant difference between the two conditions. Seventy-eight percent of all children exited the program with more than 10 functional words. Parents were very satisfied with both programs but indicated PECS was more difficult to implement.

  9. Quantization, Frobenius and Bi algebras from the Categorical Framework of Quantum Mechanics to Natural Language Semantics

    Science.gov (United States)

    Sadrzadeh, Mehrnoosh

    2017-07-01

Compact Closed categories and Frobenius and Bi algebras have been applied to model and reason about Quantum protocols. The same constructions have also been applied to reason about natural language semantics under the name: ``categorical distributional compositional'' semantics, or in short, the ``DisCoCat'' model. This model combines the statistical vector models of word meaning with the compositional models of grammatical structure. It has been applied to natural language tasks such as disambiguation, paraphrasing and entailment of phrases and sentences. The passage from the grammatical structure to vectors is provided by a functor, similar to the Quantization functor of Quantum Field Theory. The original DisCoCat model only used compact closed categories. Later, Frobenius algebras were added to it to model long distance dependencies such as relative pronouns. Recently, bialgebras have been added to the pack to reason about quantifiers. This paper reviews these constructions and their application to natural language semantics. We go over the theory and present some of the core experimental results.

  10. Quantization, Frobenius and Bi Algebras from the Categorical Framework of Quantum Mechanics to Natural Language Semantics

    Directory of Open Access Journals (Sweden)

    Mehrnoosh Sadrzadeh

    2017-07-01

Full Text Available Compact Closed categories and Frobenius and Bi algebras have been applied to model and reason about Quantum protocols. The same constructions have also been applied to reason about natural language semantics under the name: “categorical distributional compositional” semantics, or in short, the “DisCoCat” model. This model combines the statistical vector models of word meaning with the compositional models of grammatical structure. It has been applied to natural language tasks such as disambiguation, paraphrasing and entailment of phrases and sentences. The passage from the grammatical structure to vectors is provided by a functor, similar to the Quantization functor of Quantum Field Theory. The original DisCoCat model only used compact closed categories. Later, Frobenius algebras were added to it to model long distance dependencies such as relative pronouns. Recently, bialgebras have been added to the pack to reason about quantifiers. This paper reviews these constructions and their application to natural language semantics. We go over the theory and present some of the core experimental results.
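The compositional idea behind the DisCoCat model described in the two records above can be shown in miniature: a noun is a vector, an adjective is a linear map, and the meaning of "adjective noun" is the map applied to the vector, mirroring how the quantization-like functor sends grammatical types to tensor spaces. The vectors and matrix below are invented toy values, not learned distributional data.

```python
# Toy DisCoCat-style composition: meaning of "black cat" as a
# matrix-vector product. All numbers are made up for illustration.

def matvec(m, v):
    """Apply matrix m (list of rows) to vector v."""
    return [sum(row[j] * v[j] for j in range(len(v))) for row in m]

noun_cat = [1.0, 0.2]        # hypothetical 2-d meaning vector for "cat"
adj_black = [[0.9, 0.0],     # hypothetical linear map for "black"
             [0.1, 1.0]]

black_cat = matvec(adj_black, noun_cat)  # composed meaning of "black cat"
```

In the full model the grammar (a pregroup or compact closed structure) dictates which tensor contractions to perform, and Frobenius or bialgebra maps supply the extra wiring for relative pronouns and quantifiers.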

  11. Coupling ontology driven semantic representation with multilingual natural language generation for tuning international terminologies.

    Science.gov (United States)

    Rassinoux, Anne-Marie; Baud, Robert H; Rodrigues, Jean-Marie; Lovis, Christian; Geissbühler, Antoine

    2007-01-01

    The importance of clinical communication between providers, consumers and others, as well as the requisite for computer interoperability, strengthens the need for sharing common accepted terminologies. Under the directives of the World Health Organization (WHO), an approach is currently being conducted in Australia to adopt a standardized terminology for medical procedures that is intended to become an international reference. In order to achieve such a standard, a collaborative approach is adopted, in line with the successful experiment conducted for the development of the new French coding system CCAM. Different coding centres are involved in setting up a semantic representation of each term using a formal ontological structure expressed through a logic-based representation language. From this language-independent representation, multilingual natural language generation (NLG) is performed to produce noun phrases in various languages that are further compared for consistency with the original terms. Outcomes are presented for the assessment of the International Classification of Health Interventions (ICHI) and its translation into Portuguese. The initial results clearly emphasize the feasibility and cost-effectiveness of the proposed method for handling both a different classification and an additional language. NLG tools, based on ontology driven semantic representation, facilitate the discovery of ambiguous and inconsistent terms, and, as such, should be promoted for establishing coherent international terminologies.

  12. The interplay between mood and language comprehension: evidence from P600 to semantic reversal anomalies.

    Science.gov (United States)

    Vissers, Constance Th W M; Chwilla, Uli G; Egger, Jos I M; Chwilla, Dorothee J

    2013-05-01

    Little is known about the relationship between language and emotion. Vissers et al. (2010) investigated the effects of mood on the processing of syntactic violations, as indexed by P600. An interaction was observed between mood and syntactic correctness for which three explanations were offered: one in terms of syntactic processing, one in terms of heuristic processing, and one in terms of more general factors like attention and/or motivation. In this experiment, we further determined the locus of the effects of emotional state on language comprehension by investigating the effects of mood on the processing of semantic reversal anomalies (e.g., "the cat that fled from the mice"), in which heuristics play a key role. The main findings were as follows. The mood induction was effective: participants were happier after watching happy film clips and sadder after watching sad film clips compared to baseline. For P600, a mood by semantic plausibility interaction was obtained reflecting a broadly distributed P600 effect for the happy mood vs. absence of a P600 for the sad mood condition. Correlation analyses confirmed that changes in P600 in happy mood were accompanied by changes in emotional state. Given that semantic reversal anomalies are syntactically unambiguous, the P600 modulation by mood cannot be explained by syntactic factors. The semantic plausibility by mood interaction can be accounted for in terms of (1) heuristic processing (stronger reliance on a good enough representation of the input in happy mood than sad mood), and/or (2) more general factors like attention (e.g., more attention to semantic reversals in happy mood than sad mood). Copyright © 2013 Elsevier Ltd. All rights reserved.

  13. Confusing similar words: ERP correlates of lexical-semantic processing in first language attrition and late second language acquisition.

    Science.gov (United States)

    Kasparian, Kristina; Steinhauer, Karsten

    2016-12-01

    First language (L1) attrition is a socio-linguistic circumstance where second language (L2) learning coincides with changes in exposure and use of the native-L1. Attriters often report experiencing a decline in automaticity or proficiency in their L1 after a prolonged period in the L2 environment, while their L2 proficiency continues to strengthen. Investigating the neurocognitive correlates of attrition alongside those of late L2 acquisition addresses the question of whether the brain mechanisms underlying both L1 and L2 processing are strongly determined by proficiency, irrespective of whether the language was acquired from birth or in adulthood. Using event-related-potentials (ERPs), we examined lexical-semantic processing in Italian L1 attriters, compared to adult Italian L2 learners and to Italian monolingual native speakers. We contrasted the processing of classical lexical-semantic violations (Mismatch condition) with sentences that were equally semantically implausible but arguably trickier, as the target-noun was "swapped" with an orthographic neighbor that differed only in its final vowel and gender-marking morpheme (e.g., cappello (hat) vs. cappella (chapel)). Our aim was to determine whether sentences with such "confusable nouns" (Swap condition) would be processed as semantically correct by late L2 learners and L1 attriters, especially for those individuals with lower Italian proficiency scores. We found that lower-proficiency Italian speakers did not show significant N400 effects for Swap violations relative to correct sentences, regardless of whether Italian was the L1 or the L2. Crucially, N400 response profiles followed a continuum of "nativelikeness" predicted by Italian proficiency scores - high-proficiency attriters and high-proficiency Italian learners were indistinguishable from native controls, whereas attriters and L2 learners in the lower-proficiency range showed significantly reduced N400 effects for "Swap" errors. Importantly, attriters

  14. How does language cut a big semantic cake?

    Directory of Open Access Journals (Sweden)

    Dilparić Branislava

    2007-01-01

    Full Text Available The act of categorization is undertaken every time we use a word to refer to two or more different entities. Although different, these entities are regarded as the same. Yet the seeing of sameness in differences raises deep philosophical problems and leads to different conclusions on the role of language in this cognitive process. The paper gives a short overview of these conclusions as well as of the fundamental principles of the prototype theory of categorization, which, through extensive experimental research in the second half of the twentieth century, seriously challenged the foundations of the classical theory, long dominant in linguistics, and pointed to the need for a non-Aristotelian theory of categorization.

  15. Multimodal semantic quantity representations: further evidence from Korean Sign Language

    Directory of Open Access Journals (Sweden)

    Frank Domahs

    2012-01-01

    Full Text Available Korean deaf signers performed a number comparison task on pairs of Arabic digits. In their RT profiles, the expected magnitude effect was systematically modified by properties of number signs in Korean Sign Language in a culture-specific way (not observed in hearing and deaf Germans or hearing Chinese). We conclude that finger-based quantity representations are automatically activated even in simple tasks with symbolic input, although this may be irrelevant and even detrimental for task performance. These finger-based numerical representations are accessed in addition to another, more basic quantity system, which is evidenced by the magnitude effect. In sum, these results are inconsistent with models assuming a single amodal representation of numerical quantity.

  16. How Does the Linguistic Distance Between Spoken and Standard Language in Arabic Affect Recall and Recognition Performances During Verbal Memory Examination.

    Science.gov (United States)

    Taha, Haitham

    2017-06-01

    The current research examined how Arabic diglossia affects verbal learning memory. Thirty native Arab college students were tested using an auditory verbal memory test that was adapted from the Rey Auditory Verbal Learning Test and developed in three versions: a pure spoken language version (SL), a pure standard language version (SA), and a phonologically similar version (PS). The results showed that immediate free-recall performance was better in the SL and PS conditions than in the SA condition. However, for delayed recall and recognition, the results did not reveal any significant consistent effect of diglossia. Accordingly, it was suggested that diglossia has a significant effect on storage and short-term memory functions but not on long-term memory functions. The results were discussed in light of different approaches in the field of bilingual memory.

  17. Higher Language Ability is Related to Angular Gyrus Activation Increase During Semantic Processing, Independent of Sentence Incongruency

    Science.gov (United States)

    Van Ettinger-Veenstra, Helene; McAllister, Anita; Lundberg, Peter; Karlsson, Thomas; Engström, Maria

    2016-01-01

    This study investigates the relation between individual language ability and neural semantic processing abilities. Our aim was to explore whether high-level language ability would correlate to decreased activation in language-specific regions or rather increased activation in supporting language regions during processing of sentences. Moreover, we were interested if observed neural activation patterns are modulated by semantic incongruency similarly to previously observed changes upon syntactic congruency modulation. We investigated 27 healthy adults with a sentence reading task—which tapped language comprehension and inference, and modulated sentence congruency—employing functional magnetic resonance imaging (fMRI). We assessed the relation between neural activation, congruency modulation, and test performance on a high-level language ability assessment with multiple regression analysis. Our results showed increased activation in the left-hemispheric angular gyrus extending to the temporal lobe related to high language ability. This effect was independent of semantic congruency, and no significant relation between language ability and incongruency modulation was observed. Furthermore, there was a significant increase of activation in the inferior frontal gyrus (IFG) bilaterally when the sentences were incongruent, indicating that processing incongruent sentences was more demanding than processing congruent sentences and required increased activation in language regions. The correlation of high-level language ability with increased rather than decreased activation in the left angular gyrus, a region specific for language processing, is opposed to what the neural efficiency hypothesis would predict. We can conclude that no evidence is found for an interaction between semantic congruency related brain activation and high-level language performance, even though the semantic incongruent condition shows to be more demanding and evoking more neural activation.

  18. Higher language ability is related to angular gyrus activation increase during semantic processing, independent of sentence incongruency

    Directory of Open Access Journals (Sweden)

    Helene Van Ettinger-Veenstra

    2016-03-01

    Full Text Available This study investigates the relation between individual language ability and neural semantic processing abilities. Our aim was to explore whether high-level language ability would correlate to decreased activation in language-specific regions or rather increased activation in supporting language regions during processing of sentences. Moreover, we were interested if observed neural activation patterns are modulated by semantic incongruency similarly to previously observed changes upon syntactic congruency modulation. We investigated 27 healthy adults with a sentence reading task - which tapped language comprehension and inference, and modulated sentence congruency - employing functional magnetic resonance imaging. We assessed the relation between neural activation, congruency modulation, and test performance on a high-level language ability assessment with multiple regression analysis. Our results showed increased activation in the left-hemispheric angular gyrus extending to the temporal lobe related to high language ability. This effect was independent of semantic congruency, and no significant relation between language ability and incongruency modulation was observed. Furthermore, there was a significant increase of activation in the inferior frontal gyrus bilaterally when the sentences were incongruent, indicating that processing incongruent sentences was more demanding than processing congruent sentences and required increased activation in language regions. The correlation of high-level language ability with increased rather than decreased activation in the left angular gyrus, a region specific for language processing, is opposed to what the neural efficiency hypothesis would predict. We can conclude that no evidence is found for an interaction between semantic congruency related brain activation and high-level language performance, even though the semantic incongruent condition shows to be more demanding and evoking more neural activation.

  19. Evaluating Attributions of Delay and Confusion in Young Bilinguals: Special Insights from Infants Acquiring a Signed and a Spoken Language.

    Science.gov (United States)

    Petitto, Laura Ann; Holowka, Siobhan

    2002-01-01

    Examines whether early simultaneous bilingual language exposure causes children to be language delayed or confused. Cites research suggesting normal and parallel linguistic development occurs in each language in young children and young children's dual language developments are similar to monolingual language acquisition. Research on simultaneous…

  20. A semantic-based approach for querying linked data using natural language

    KAUST Repository

    Paredes-Valverde, Mario Andrés

    2016-01-11

    The Semantic Web aims to provide Web information with a well-defined meaning and make it understandable not only by humans but also by computers, thus allowing the automation, integration and reuse of high-quality information across different applications. However, current information retrieval mechanisms for semantic knowledge bases are intended to be used only by expert users. In this work, we propose a natural language interface that allows non-expert users to access this kind of information by formulating queries in natural language. The present approach uses a domain-independent ontology model to represent the question's structure and context. Also, this model allows determination of the answer type expected by the user based on a proposed question classification. To prove the effectiveness of our approach, we have conducted an evaluation in the music domain using LinkedBrainz, an effort to provide the MusicBrainz information as structured data on the Web by means of Semantic Web technologies. Our proposal obtained encouraging results based on the F-measure metric, ranging from 0.74 to 0.82 for a corpus of questions generated by a group of real-world end users. © The Author(s) 2015.

  1. A semantic-based approach for querying linked data using natural language

    KAUST Repository

    Paredes-Valverde, Mario Andrés; Valencia-García, Rafael; Rodriguez-Garcia, Miguel Angel; Colomo-Palacios, Ricardo; Alor-Hernández, Giner

    2016-01-01

    The Semantic Web aims to provide Web information with a well-defined meaning and make it understandable not only by humans but also by computers, thus allowing the automation, integration and reuse of high-quality information across different applications. However, current information retrieval mechanisms for semantic knowledge bases are intended to be used only by expert users. In this work, we propose a natural language interface that allows non-expert users to access this kind of information by formulating queries in natural language. The present approach uses a domain-independent ontology model to represent the question's structure and context. Also, this model allows determination of the answer type expected by the user based on a proposed question classification. To prove the effectiveness of our approach, we have conducted an evaluation in the music domain using LinkedBrainz, an effort to provide the MusicBrainz information as structured data on the Web by means of Semantic Web technologies. Our proposal obtained encouraging results based on the F-measure metric, ranging from 0.74 to 0.82 for a corpus of questions generated by a group of real-world end users. © The Author(s) 2015.
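
The pipeline this record describes, classify the question, derive the expected answer type, then build a structured query over linked data, can be sketched in a few lines of Python. The patterns, `ex:` property names and SPARQL templates below are hypothetical illustrations, not the actual LinkedBrainz schema or the authors' ontology model.

```python
import re

# Hypothetical question patterns -> (expected answer type, SPARQL template).
# Property URIs are illustrative placeholders, not the LinkedBrainz vocabulary.
PATTERNS = [
    (re.compile(r"^who (composed|wrote) (?P<work>.+)\?$", re.IGNORECASE),
     ("person",
      'SELECT ?artist WHERE {{ ?w rdfs:label "{work}" . ?w ex:composer ?artist . }}')),
    (re.compile(r"^when was (?P<album>.+) released\?$", re.IGNORECASE),
     ("date",
      'SELECT ?date WHERE {{ ?a rdfs:label "{album}" . ?a ex:releaseDate ?date . }}')),
]

def to_sparql(question):
    """Classify the question and instantiate a query template.

    Returns (answer_type, query), or (None, None) if no pattern matches.
    """
    for pattern, (answer_type, template) in PATTERNS:
        match = pattern.match(question.strip())
        if match:
            return answer_type, template.format(**match.groupdict())
    return None, None

answer_type, query = to_sparql("Who composed Clair de Lune?")
```

A real system would send `query` to a SPARQL endpoint; the point here is only the mapping from surface question to answer type and structured query.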

  2. Documentary languages and knowledge organization systems in the context of the semantic web

    Directory of Open Access Journals (Sweden)

    Marilda Lopes Ginez de Lara

    Full Text Available The aim of this study was to discuss the need for formal documentary languages as a condition for their functioning in the Semantic Web. Based on a bibliographic review, Linked Open Data is presented as an initial condition for the operationalization of the Semantic Web, similar to the Linked Open Vocabularies movement that aimed to promote interoperability among vocabularies. We highlight the Simple Knowledge Organization System (SKOS) format by analyzing its main characteristics and presenting the new standard ISO 25964-1/2:2011/2012 - Thesauri and interoperability with other vocabularies, which revises previous recommendations, adding requirements for the interoperability and mapping of vocabularies. We discuss conceptual problems in the formalization of vocabularies and the need to invest critically in their operationalization, suggesting alternatives to harness the mapping of vocabularies.
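
A minimal illustration of the SKOS format discussed above: the Python sketch below serializes a thesaurus concept as Turtle, using `skos:prefLabel` for multilingual labels and `skos:exactMatch` for the cross-vocabulary mappings that ISO 25964-2 addresses. All URIs and labels are invented examples.

```python
# Standard SKOS namespace prefix for the Turtle output.
PREFIX = "@prefix skos: <http://www.w3.org/2004/02/skos/core#> ."

def skos_concept(uri, labels, exact_matches=()):
    """Serialize one concept as Turtle.

    labels: {language tag: preferred label}
    exact_matches: URIs of equivalent concepts in other vocabularies
    (the kind of mapping ISO 25964-2 standardizes).
    """
    props = ['skos:prefLabel "%s"@%s' % (text, lang)
             for lang, text in sorted(labels.items())]
    props += ["skos:exactMatch <%s>" % m for m in exact_matches]
    body = " ;\n    ".join(["a skos:Concept"] + props)
    return "<%s> %s ." % (uri, body)

turtle = "\n\n".join([
    PREFIX,
    skos_concept("http://example.org/thesaurus/heart",
                 {"en": "heart", "pt": "coração"},
                 ["http://other.example.org/vocab/heart"]),
])
```

The `skos:exactMatch` link is what lets two independently maintained vocabularies interoperate without merging them.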

  3. The Relationship between Intrinsic Couplings of the Visual Word Form Area with Spoken Language Network and Reading Ability in Children and Adults

    Directory of Open Access Journals (Sweden)

    Yu Li

    2017-06-01

    Full Text Available Reading plays a key role in education and communication in modern society. Learning to read establishes the connections between the visual word form area (VWFA) and language areas responsible for speech processing. Using resting-state functional connectivity (RSFC) and Granger Causality Analysis (GCA) methods, the current developmental study aimed to identify the difference in the relationship between the connections of VWFA-language areas and reading performance in both adults and children. The results showed that: (1) the spontaneous connectivity between VWFA and the spoken language areas, i.e., the left inferior frontal gyrus/supramarginal gyrus (LIFG/LSMG), was stronger in adults compared with children; (2) the spontaneous functional patterns of connectivity between VWFA and language network were negatively correlated with reading ability in adults but not in children; (3) the causal influence from LIFG to VWFA was negatively correlated with reading ability in adults but not in children; (4) the RSFCs between left posterior middle frontal gyrus (LpMFG) and VWFA/LIFG were positively correlated with reading ability in both adults and children; and (5) the causal influence from LIFG to LSMG was positively correlated with reading ability in both groups. These findings provide insights into the relationship between VWFA and the language network for reading, and the role of the unique features of Chinese in the neural circuits of reading.

  4. Semantic and Conceptual Factors in Spanish-English Bilinguals' Processing of Lexical Categories in Their Two Languages

    Science.gov (United States)

    Gathercole, Virginia C. Mueller; Stadthagen-González, Hans; Pérez-Tattam, Rocío; Yavaş, Feryal

    2016-01-01

    This study examines possible semantic interaction in fully fluent adult simultaneous and early second language (L2) bilinguals. Monolingual and bilingual speakers of Spanish and English (n = 144) were tested for their understanding of lexical categories that differed in their two languages. Simultaneous bilinguals came from homes in which Spanish…

  5. Investigating language lateralization during phonological and semantic fluency tasks using functional transcranial Doppler sonography

    Science.gov (United States)

    Gutierrez-Sigut, Eva; Payne, Heather; MacSweeney, Mairéad

    2015-01-01

    Although there is consensus that the left hemisphere plays a critical role in language processing, some questions remain. Here we examine the influence of overt versus covert speech production on lateralization, the relationship between lateralization and behavioural measures of language performance and the strength of lateralization across the subcomponents of language. The present study used functional transcranial Doppler sonography (fTCD) to investigate lateralization of phonological and semantic fluency during both overt and covert word generation in right-handed adults. The laterality index (LI) was left lateralized in all conditions, and there was no difference in the strength of LI between overt and covert speech. This supports the validity of using overt speech in fTCD studies, another benefit of which is a reliable measure of speech production. PMID:24875468
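
The laterality index (LI) reported in such studies is conventionally a normalized left-right difference. A common form, used here as an illustration and not necessarily the exact event-related formula of this fTCD study, is:

```python
def laterality_index(left, right):
    """Normalized left-right difference: +1 means fully left-lateralized,
    -1 fully right-lateralized. Inputs are non-negative activation measures,
    e.g. task-related blood-flow velocity increases in the left and right
    middle cerebral arteries during word generation."""
    return (left - right) / (left + right)

# A left-dominant example: stronger signal on the left during the task.
li = laterality_index(6.0, 2.0)  # positive, i.e. left-lateralized
```

Published fTCD analyses typically also average the velocity difference over a window of maximal lateralization, which this sketch omits.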

  6. The Analysis of Figurative Language Used in the Lyric of Firework by Katy Perry (a Study of Semantic)

    OpenAIRE

    Hariyanto, Hariyanto

    2017-01-01

    Figurative language is a part of semantics. This research analyzed the figurative language used in the lyric of Firework by Katy Perry. The aims of this research are to find out the figurative language used in the lyric of Firework and to analyze the contextual meaning of the figurative language used in that song. It is expected that the result of this study will be useful for readers, especially in knowing what figurative language is and what its kinds are. The design of this res...

  7. The development of argument and improvement of semantic and pragmatic aspects of oral language by investigative

    Directory of Open Access Journals (Sweden)

    Wanessa H. Pickina Silva Suzuki

    2016-06-01

    Full Text Available Language is the mechanism that allows us to share all the knowledge acquired in the teaching and learning process; through it, scientific knowledge is organized. Language's communicative property is its most meaningful ability, allowing people to report, to reason, or to refute an idea. Teaching strategies that aim to develop scientific enculturation bring science closer to the school routine, introducing distinct conceptions that favour problem-based teaching. Investigative methodologies lead students to develop communication abilities, especially argumentation, within the perspective of scientific discourse. Given the difficulty that students, specifically seventh graders at a Brazilian state primary school, have in understanding certain parts of the content in Science classes, often lacking a concrete referential idea, the theme of microorganisms was chosen to substantiate the investigative activities proposed. Therefore, the goal of this paper is to identify and analyse the communicative abilities of language regarding argumentation and its development, as well as the improvement of semantic and pragmatic aspects of language. To organize this investigative activity, the approach of the National Research Council was used, as well as S. Toulmin's model of argument structure. The pragmatic and semantic abilities were analyzed through Speech Act theory. Analyzing the data obtained, it was possible to ascertain that some students benefited from this methodology and were able to absorb concepts that substantiated and contributed to the structural quality of their arguments. Besides that, it was observed that the improvement of semantic and pragmatic aspects supported efficient communication.

  8. Evaluation of semantic aspect of language in students of ordinary, integrated and special schools

    Directory of Open Access Journals (Sweden)

    Ali Ghorbani

    2012-06-01

    Full Text Available Background and Aim: Children with severe and profound hearing loss have difficulties in communicating with others and in their education at school. The effects of the learning environment on children's language skills have recently been a focus of attention, and educating these students in ordinary schools has been proposed. Accordingly, we compared perception of antonyms and synonyms, as a semantic aspect of language, in students of ordinary, integrated and special schools. Methods: It was an analytic cross-sectional study. Three groups of students were enrolled: normal-hearing students of ordinary schools, and hearing-loss students of integrated and special schools. Each group consisted of 25 students in the fifth grade of elementary schools in Tehran city. Two written tests were used; subjects wrote synonyms and antonyms for each word in the tests. Results: Results showed significant differences between scores of normal-hearing and hearing-loss students, and also between hearing-loss students of integrated schools and hearing-loss students of special schools (p<0.05). In all three groups of students, perception of antonyms was better than that of synonyms (p<0.001). Speech processing rate in normal-hearing students was higher than in both groups of hearing-loss students (p<0.001). Conclusion: The differences between normal-hearing and hearing-loss students show that, similar to other language skills, perception of synonyms and antonyms as a semantic aspect of speech is related to hearing condition and type of education. Moreover, the differences between the two groups of hearing-loss students indicate that speech stimulation and interaction with normal-hearing children could improve the semantic aspect of speech in hearing-loss students.

  9. Language production in a shared task: Cumulative semantic interference from self- and other-produced context words

    OpenAIRE

    Hoedemaker, R.; Ernst, J.; Meyer, A.; Belke, E.

    2017-01-01

    This study assessed the effects of semantic context in the form of self-produced and other-produced words on subsequent language production. Pairs of participants performed a joint picture naming task, taking turns while naming a continuous series of pictures. In the single-speaker version of this paradigm, naming latencies have been found to increase for successive presentations of exemplars from the same category, a phenomenon known as Cumulative Semantic Interference (CSI). As expected, th...

  10. How appropriate are the English language test requirements for non-UK-trained nurses? A qualitative study of spoken communication in UK hospitals.

    Science.gov (United States)

    Sedgwick, Carole; Garner, Mark

    2017-06-01

    Non-native speakers of English who hold nursing qualifications from outside the UK are required to provide evidence of English language competence by achieving a minimum overall score of Band 7 on the International English Language Testing System (IELTS) academic test. To describe the English language required to deal with the daily demands of nursing in the UK. To compare these abilities with the stipulated levels on the language test. A tracking study was conducted with 4 nurses, and focus groups with 11 further nurses. The transcripts of the interviews and focus groups were analysed thematically for recurrent themes. These findings were then compared with the requirements of the IELTS spoken test. The study was conducted outside the participants' working shifts in busy London hospitals. The participants in the tracking study were selected opportunistically; all were trained in non-English-speaking countries. Snowball sampling was used for the focus groups, of whom 4 were non-native and 7 native speakers of English. In the tracking study, each of the 4 nurses was interviewed on four occasions, outside the workplace, and as close to the end of a shift as possible. They were asked to recount their spoken interactions during the course of their shift. The participants in the focus groups were asked to describe their typical interactions with patients, family members, doctors, and nursing colleagues. They were prompted to recall specific instances of frequently-occurring communication problems. All interactions were audio-recorded, with the participants' permission, and transcribed. Nurses are at the centre of communication for patient care. They have to use appropriate registers to communicate with a range of health professionals, patients and their families. They must elicit information, calm and reassure, instruct, check procedures, ask for and give opinions, agree and disagree. Politeness strategies are needed to avoid threats to face. They participate in medical

  11. Ways of making-sense: Local gamma synchronization reveals differences between semantic processing induced by music and language.

    Science.gov (United States)

    Barraza, Paulo; Chavez, Mario; Rodríguez, Eugenio

    2016-01-01

    Similar to linguistic stimuli, music can also prime the meaning of a subsequent word. However, the brain dynamics underlying the semantic priming effect induced by music, and their relation to language, are so far unknown. To elucidate these issues, we compare the brain oscillatory response to visual words that have been semantically primed either by a musical excerpt or by an auditory sentence. We found that semantic violation between music-word pairs triggers a classical ERP N400 and induces a sustained increase of long-distance theta phase synchrony, along with a transient increase of local gamma activity. Similar results were observed after linguistic semantic violation, except for gamma activity, which increased after semantic congruence between sentence-word pairs. Our findings indicate that local gamma activity is a neural marker that signals different ways of semantic processing between music and language, revealing the dynamic and self-organized nature of semantic processing. Copyright © 2015 Elsevier Inc. All rights reserved.

  12. Student Perception Problems in Using Historical Language: Semantic/Phonetic Connotation and Concept Loss

    Directory of Open Access Journals (Sweden)

    Erhan METİN

    2012-05-01

    , literature is reviewed, and observation and interview techniques are used. In this study, students in secondary schools are observed in history classes to see how they use historical language. Moreover, the relationship between history education and language is analyzed from the student perspective; thus, perception problems which emerge while students use historical language are identified. The results about these perception problems, semantic connotation and phonetic connotation, which are identified and defined in this study, are illustrated. The study is based on observations of 168 ninth-grade students in four different schools. Student-centered language problems, which are identified according to the results of the data collected and discussed in detail in the study, are defined as semantic connotation, phonetic connotation and concept loss. The connotation problem in this study is not being able to associate definite or specific meanings with words, historical names and concepts exactly. The meanings of words differ according to the contexts they are used in and also to the contexts in which the speaker and the listener encounter them. When words are used, they evoke the previous contexts in which the listener used them, and these connotations are the possible meanings that the listener may understand. These results may explain secondary school students' language problems in history classes. However, it should never be forgotten that history education is a part of life; therefore, it contains some things from human life. We can see this in the students' use of historical language. In sum, this study emphasizes language problems in history education. Moreover, it reveals that history teachers play a significant role in developing students' perception by enhancing the number of language sources used. Thus, the aim is that students be able to analyze the past with all its richness and complexity. Students' perception problems in using historical

  13. Towards Compatible and Interderivable Semantic Specifications for the Scheme Programming Language, Part II: Reduction Semantics and Abstract Machines

    DEFF Research Database (Denmark)

    Biernacka, Malgorzata; Danvy, Olivier

    2009-01-01

    We present a context-sensitive reduction semantics for a lambda-calculus with explicit substitutions and we show that the functional implementation of this small-step semantics mechanically corresponds to that of the abstract machine for Core Scheme presented by Clinger at PLDI'98, including fir...

  14. Prediction signatures in the brain: Semantic pre-activation during language comprehension

    Directory of Open Access Journals (Sweden)

    Burkhard Maess

    2016-11-01

    Full Text Available There is broad agreement that context-based predictions facilitate lexical-semantic processing. A robust index of semantic prediction during language comprehension is an evoked response, known as the N400, whose amplitude is modulated as a function of semantic context. However, the underlying neural mechanisms that utilize relations between the prior context and the word embedded within it are largely unknown. We measured magnetoencephalography (MEG) data while participants were listening to simple German sentences in which the verbs were either highly predictive of the occurrence of a particular noun (i.e., provided context) or not. The identical set of nouns was presented in both conditions. Hence, differences in the evoked responses to the nouns can only be due to differences in the earlier context. We observed a reduction of the N400 response for highly predicted nouns. Interestingly, the opposite pattern was observed for the preceding verbs: highly predictive (that is, more informative) verbs yielded stronger neural responses than less predictive verbs. A negative correlation between the N400 effect of the verb and that of the noun was found in a distributed brain network, indicating an integral relation between the predictive power of the verb and the processing of the subsequent noun. This network consisted of left hemispheric superior and middle temporal areas and a subcortical area, the parahippocampus. Enhanced activity for highly predictive relative to less predictive verbs likely reflects the establishment of semantic features associated with the expected nouns, that is, a pre-activation of the expected nouns.

  15. L1 and L2 Spoken Word Processing: Evidence from Divided Attention Paradigm.

    Science.gov (United States)

    Shafiee Nahrkhalaji, Saeedeh; Lotfi, Ahmad Reza; Koosha, Mansour

    2016-10-01

    The present study aims to reveal some facts concerning first language (L1) and second language (L2) spoken-word processing in unbalanced proficient bilinguals using behavioral measures. The intention here is to examine the effects of auditory repetition word priming and semantic priming in the first and second languages of these bilinguals. The other goal is to explore the effects of attention manipulation on implicit retrieval of perceptual and conceptual properties of spoken L1 and L2 words. In so doing, the participants performed auditory word priming and semantic priming as memory tests in their L1 and L2. In half of the trials of each experiment, they carried out the memory test while simultaneously performing a secondary task in the visual modality. The results revealed that effects of auditory word priming and semantic priming were present when participants processed L1 and L2 words in the full-attention condition. Attention manipulation could reduce priming magnitude in both experiments in L2. Moreover, L2 word retrieval increases the reaction times and reduces accuracy on the simultaneous secondary task to protect its own accuracy and speed.

  16. How many kinds of reasoning? Inference, probability, and natural language semantics.

    Science.gov (United States)

    Lassiter, Daniel; Goodman, Noah D

    2015-03-01

    The "new paradigm" unifying deductive and inductive reasoning in a Bayesian framework (Oaksford & Chater, 2007; Over, 2009) has been claimed to be falsified by results which show sharp differences between reasoning about necessity vs. plausibility (Heit & Rotello, 2010; Rips, 2001; Rotello & Heit, 2009). We provide a probabilistic model of reasoning with modal expressions such as "necessary" and "plausible" informed by recent work in formal semantics of natural language, and show that it predicts the possibility of non-linear response patterns which have been claimed to be problematic. Our model also makes a strong monotonicity prediction, while two-dimensional theories predict the possibility of reversals in argument strength depending on the modal word chosen. Predictions were tested using a novel experimental paradigm that replicates the previously-reported response patterns with a minimal manipulation, changing only one word of the stimulus between conditions. We found a spectrum of reasoning "modes" corresponding to different modal words, and strong support for our model's monotonicity prediction. This indicates that probabilistic approaches to reasoning can account in a clear and parsimonious way for data previously argued to falsify them, as well as new, more fine-grained, data. It also illustrates the importance of careful attention to the semantics of language employed in reasoning experiments. Copyright © 2014 Elsevier B.V. All rights reserved.

  17. POPPER, a simple programming language for probabilistic semantic inference in medicine.

    Science.gov (United States)

    Robson, Barry

    2015-01-01

    Our previous reports described the use of the Hyperbolic Dirac Net (HDN) as a method for probabilistic inference from medical data, and a proposed probabilistic medical Semantic Web (SW) language Q-UEL to provide that data. Rather like a traditional Bayes Net, that HDN provided estimates of joint and conditional probabilities, and was static, with no need for evolution due to "reasoning". Use of the SW will require, however, (a) at least the semantic triple with more elaborate relations than conditional ones, as seen in use of most verbs and prepositions, and (b) rules for logical, grammatical, and definitional manipulation that can generate changes in the inference net. Here is described the simple POPPER language for medical inference. It can be automatically written by Q-UEL, or by hand. Based on studies with our medical students, it is believed that a tool like this may help in medical education and that a physician unfamiliar with SW science can understand it. It is here used to explore the considerable challenges of assigning probabilities, and not least what the meaning and utility of inference net evolution would be for a physician. Copyright © 2014 Elsevier Ltd. All rights reserved.
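    The joint-and-conditional estimation that a Bayes-Net-like structure such as the HDN provides can be illustrated with a minimal sketch. The patient counts and variable names below are hypothetical, and the code is plain Python rather than POPPER or Q-UEL syntax:

    ```python
    # Minimal sketch of joint/conditional probability estimation, in the spirit
    # of the Bayes-Net-like inference the abstract describes. The patient counts
    # below are hypothetical; this is not POPPER or Q-UEL syntax.
    from collections import Counter

    # Hypothetical patient records: (symptom present?, disease present?)
    records = [(True, True)] * 30 + [(True, False)] * 10 + \
              [(False, True)] * 5 + [(False, False)] * 55

    joint = Counter(records)
    total = sum(joint.values())

    def p(symptom=None, disease=None):
        """Marginal or joint probability estimated from the records."""
        hits = sum(n for (s, d), n in joint.items()
                   if (symptom is None or s == symptom)
                   and (disease is None or d == disease))
        return hits / total

    # Conditional probability P(disease | symptom) = P(both) / P(symptom)
    p_disease_given_symptom = p(symptom=True, disease=True) / p(symptom=True)
    print(round(p_disease_given_symptom, 2))  # 0.75
    ```

    A semantic-triple system adds richer relations than the conditional ones shown here; this sketch covers only the probability-table core.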

  18. Rapid modulation of spoken word recognition by visual primes.

    Science.gov (United States)

    Okano, Kana; Grainger, Jonathan; Holcomb, Phillip J

    2016-02-01

    In a masked cross-modal priming experiment with ERP recordings, spoken Japanese words were primed with words written in one of the two syllabary scripts of Japanese. An early priming effect, peaking at around 200 ms after onset of the spoken word target, was seen in left lateral electrode sites for Katakana primes, and later effects were seen for both Hiragana and Katakana primes on the N400 ERP component. The early effect is thought to reflect the efficiency with which words in Katakana script make contact with sublexical phonological representations involved in spoken language comprehension, due to the particular way this script is used by Japanese readers. This demonstrates fast-acting influences of visual primes on the processing of auditory target words, and suggests that briefly presented visual primes can influence sublexical processing of auditory target words. The later N400 priming effects, on the other hand, most likely reflect cross-modal influences on activity at the level of whole-word phonology and semantics.

  19. Towards Compatible and Interderivable Semantic Specifications for the Scheme Programming Language, Part II: Reduction Semantics and Abstract Machines

    DEFF Research Database (Denmark)

    Biernacka, Malgorzata; Danvy, Olivier

    2008-01-01

    We present a context-sensitive reduction semantics for a lambda-calculus with explicit substitutions and store and we show that the functional implementation of this small-step semantics mechanically corresponds to that of an abstract machine. This abstract machine is very close to the abstract machine for Core Scheme presented by Clinger at PLDI'98. This lambda-calculus with explicit substitutions and store therefore aptly accounts for Core Scheme.
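    The idea of a functional implementation of a small-step semantics can be illustrated with a toy CEK-style machine for the call-by-value lambda calculus. This is a generic textbook machine, not the Core Scheme machine of the paper; term encodings are illustrative:

    ```python
    # A minimal CEK-style abstract machine for the call-by-value lambda calculus,
    # illustrating the "functional implementation of a small-step semantics"
    # idea. This is an illustrative toy, not the Core Scheme machine of the paper.

    # Terms: ('var', name) | ('lam', name, body) | ('app', fun, arg) | ('lit', n)
    def run(term):
        state = (term, {}, ('halt',))      # control, environment, continuation
        while True:
            c, e, k = state
            tag = c[0]
            if tag == 'var':
                state = apply_k(k, e[c[1]])
            elif tag == 'lit':
                state = apply_k(k, c[1])
            elif tag == 'lam':
                state = apply_k(k, ('clo', c[1], c[2], e))
            elif tag == 'app':
                state = (c[1], e, ('arg', c[2], e, k))
            elif tag == 'done':
                return c[1]

    def apply_k(k, value):
        if k[0] == 'halt':
            return (('done', value), {}, k)
        if k[0] == 'arg':                  # evaluate the operand next
            _, arg, e, k2 = k
            return (arg, e, ('fun', value, k2))
        if k[0] == 'fun':                  # apply the closure
            _, clo, k2 = k
            _, name, body, env = clo
            return (body, {**env, name: value}, k2)

    # ((lambda x. x) 42) evaluates to 42
    identity_app = ('app', ('lam', 'x', ('var', 'x')), ('lit', 42))
    print(run(identity_app))  # 42
    ```

    Each loop iteration is one machine transition, which is what "mechanically corresponds to the small-step semantics" means in practice.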

  20. Cross-Language Plagiarism Detection System Using Latent Semantic Analysis and Learning Vector Quantization

    Directory of Open Access Journals (Sweden)

    Anak Agung Putri Ratna

    2017-06-01

    Full Text Available Computerized cross-language plagiarism detection has recently become essential. With the scarcity of scientific publications in Bahasa Indonesia, many Indonesian authors frequently consult publications in English in order to boost the quantity of scientific publications in Bahasa Indonesia (which is currently rising). Due to the syntax disparity between Bahasa Indonesia and English, most of the existing methods for automated cross-language plagiarism detection do not provide satisfactory results. This paper analyses the possibility of developing Latent Semantic Analysis (LSA) for a computerized cross-language plagiarism detector for two languages with different syntax. To improve performance, various alterations in LSA are suggested. By using a learning vector quantization (LVQ) classifier in the LSA and taking into account the Frobenius norm, output has reached up to 65.98% in accuracy. The results of the experiments showed that the best accuracy achieved is 87% with a document size of 6 words, and the document definition size must be kept below 10 words in order to maintain high accuracy. Additionally, based on experimental results, this paper suggests utilizing the frequency occurrence method as opposed to the binary method for the term–document matrix construction.
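    The LSA step of such a detector can be sketched as follows. The documents and latent rank are illustrative, and the paper's LVQ classification stage is omitted:

    ```python
    # Toy sketch of the LSA step: build a frequency term-document matrix,
    # reduce it with SVD, and compare documents by cosine similarity in the
    # latent space. Documents and rank are illustrative; the paper's LVQ
    # classification stage is omitted.
    import numpy as np

    docs = [
        "latent semantic analysis finds latent topics",
        "semantic analysis of documents",
        "cats chase mice",
    ]
    vocab = sorted({w for d in docs for w in d.split()})

    # Frequency-occurrence term-document matrix (terms x documents), as the
    # paper recommends over a binary matrix.
    A = np.array([[d.split().count(w) for d in docs] for w in vocab], float)

    # Rank-2 LSA: a truncated SVD projects each document into latent space.
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    doc_vecs = (np.diag(s[:2]) @ Vt[:2]).T        # one row per document

    def cosine(a, b):
        eps = 1e-12                 # guard: a doc may vanish after truncation
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + eps))

    # Overlapping documents end up closer than unrelated ones.
    print(cosine(doc_vecs[0], doc_vecs[1]) > cosine(doc_vecs[0], doc_vecs[2]))  # True
    ```

    A real cross-language detector would first map both languages into a shared vocabulary (e.g. via translation) before building the matrix.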

  1. All Together Now: Disentangling Semantics and Pragmatics with Together in Child and Adult Language.

    Science.gov (United States)

    Syrett, Kristen; Musolino, Julien

    The way in which an event is packaged linguistically can be informative about the number of participants in the event and the nature of their participation. At times, however, a sentence is ambiguous, and pragmatic information weighs in to favor one interpretation over another. Whereas adults may readily know how to pick up on such cues to meaning, children - who are generally naïve to such pragmatic nuances - may diverge and access a broader range of interpretations, or one disfavored by adults. A number of cases come to us from a now well-established body of research on scalar implicatures and scopal ambiguity. Here, we complement this previous work with a previously uninvestigated example of the semantic-pragmatic divide in language development arising from the interpretation of sentences with pluralities and together. Sentences such as Two boys lifted a block (together) allow for either a Collective or a Distributive interpretation (one joint lifting event vs. two spatiotemporally coordinated lifting events). We show experimentally that children allow both interpretations in sentences with together, whereas adults rule out the Distributive interpretation without further contextual motivation. However, children appear to be guided by their semantics in the readings they access, since they do not allow readings that are semantically barred. We argue that they are unaware of the pragmatic information adults have at their fingertips, such as the conversational implicatures arising from the presence of a modifier, the probability of its occurrence being used to signal a particular interpretation among a set of alternatives, and knowledge of the possible lexical alternatives.

  2. Language production in a shared task: Cumulative semantic interference from self- and other-produced context words

    NARCIS (Netherlands)

    Hoedemaker, R.S.; Ernst, J.; Meyer, A.S.; Belke, E.

    2017-01-01

    This study assessed the effects of semantic context in the form of self-produced and other-produced words on subsequent language production. Pairs of participants performed a joint picture naming task, taking turns while naming a continuous series of pictures. In the single-speaker version of this

  3. Quarterly Data for Spoken Language Preferences of Supplemental Security Income Blind and Disabled Applicants (2014-2015)

    Data.gov (United States)

    Social Security Administration — This data set provides quarterly volumes for language preferences at the national level of individuals filing claims for SSI Blind and Disabled benefits for fiscal...

  4. Social Security Administration - Quarterly Data for Spoken Language Preferences of Supplemental Security Income Blind and Disabled Applicants (2016-onwards)

    Data.gov (United States)

    Social Security Administration — This data set provides quarterly volumes for language preferences at the national level of individuals filing claims for SSI Blind and Disabled benefits from fiscal...

  5. A Mother Tongue Spoken Mainly by Fathers.

    Science.gov (United States)

    Corsetti, Renato

    1996-01-01

    Reviews what is known about Esperanto as a home language and first language. Recorded cases of Esperanto-speaking families are known since 1919, and in nearly all of the approximately 350 families documented, the language is spoken to the children by the father. The data suggests that this "artificial bilingualism" can be as successful…

  6. The semantics of Chemical Markup Language (CML) for computational chemistry: CompChem

    Directory of Open Access Journals (Sweden)

    Phadungsukanan Weerapong

    2012-08-01

    Full Text Available Abstract This paper introduces a subdomain chemistry format for storing computational chemistry data called CompChem. It has been developed based on the design, concepts and methodologies of Chemical Markup Language (CML) by adding computational chemistry semantics on top of the CML Schema. The format allows a wide range of ab initio quantum chemistry calculations of individual molecules to be stored. These calculations include, for example, single point energy calculation, molecular geometry optimization, and vibrational frequency analysis. The paper also describes the supporting infrastructure, such as processing software, dictionaries, validation tools and database repositories. In addition, some of the challenges and difficulties in developing common computational chemistry dictionaries are discussed. The uses of CompChem are illustrated by two practical applications.
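    Producing a CML-style molecule record can be sketched with the standard library. The element names follow common CML conventions (molecule, atomArray, atom), but the exact CompChem module structure and dictionary references are simplified here:

    ```python
    # Sketch of emitting a CML-style document for a single molecule (water)
    # using only the standard library. Element names follow common CML
    # conventions; the CompChem module structure and dictRef conventions
    # described in the paper are simplified away.
    import xml.etree.ElementTree as ET

    cml = ET.Element("cml", {"xmlns": "http://www.xml-cml.org/schema"})
    mol = ET.SubElement(cml, "molecule", {"id": "m1"})
    atoms = ET.SubElement(mol, "atomArray")
    for aid, elem, x, y, z in [("a1", "O", 0.0, 0.0, 0.0),
                               ("a2", "H", 0.757, 0.586, 0.0),
                               ("a3", "H", -0.757, 0.586, 0.0)]:
        ET.SubElement(atoms, "atom", {"id": aid, "elementType": elem,
                                      "x3": str(x), "y3": str(y), "z3": str(z)})

    xml_text = ET.tostring(cml, encoding="unicode")
    print(xml_text)
    ```

    A CompChem document would wrap such a molecule in job/initialization/finalization modules validated against the CML Schema and dictionaries.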

  7. Automatic Semantic Orientation of Adjectives for Indonesian Language Using PMI-IR and Clustering

    Science.gov (United States)

    Riyanti, Dewi; Arif Bijaksana, M.; Adiwijaya

    2018-03-01

    We present our work in the area of sentiment analysis for the Indonesian language. We focus on building automatic semantic orientation using available resources in Indonesian. In this research we used an Indonesian corpus containing 9 million words from kompas.txt and tempo.txt, manually tagged and annotated with a part-of-speech tagset. We then constructed a dataset by taking all the adjectives from the corpus and removing adjectives with no orientation. The set contained 923 adjective words. The system includes several steps, such as text pre-processing and clustering. The text pre-processing aims to increase the accuracy, and the clustering method classifies each word by its associated sentiment, positive or negative. With improvements to the text pre-processing, 72% accuracy can be achieved.
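    The PMI-IR approach to semantic orientation can be sketched as Turney-style SO-PMI: a word's orientation is its pointwise mutual information with a positive seed minus its PMI with a negative seed. The hit counts and Indonesian seed words (baik 'good', buruk 'bad') below are hypothetical:

    ```python
    # Toy PMI-IR computation: semantic orientation as PMI with a positive seed
    # minus PMI with a negative seed (Turney's SO-PMI). The co-occurrence
    # counts and seed words are hypothetical, not taken from the paper's corpus.
    import math

    hits = {                 # hit counts from a (hypothetical) corpus search
        "baik": 1000, "buruk": 800,
        ("indah", "baik"): 120, ("indah", "buruk"): 10,
        ("jelek", "baik"): 15,  ("jelek", "buruk"): 200,
    }

    def so_pmi(word, pos="baik", neg="buruk", smooth=0.01):
        """log2 ratio > 0 -> positive orientation, < 0 -> negative."""
        return math.log2(
            ((hits.get((word, pos), 0) + smooth) * hits[neg]) /
            ((hits.get((word, neg), 0) + smooth) * hits[pos])
        )

    # indah 'beautiful' comes out positive, jelek 'ugly' negative.
    print(so_pmi("indah") > 0, so_pmi("jelek") < 0)  # True True
    ```

    The clustering step in the paper would then group adjectives by such orientation scores rather than thresholding each word independently.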

  8. PREDICATE OF ‘MANGAN’ IN SASAK LANGUAGE: A STUDY OF NATURAL SEMANTIC METALANGUAGE

    Directory of Open Access Journals (Sweden)

    Sarwadi

    2016-11-01

    Full Text Available The aim of this study was to determine the semantic meaning of the predicates Ngajengan, Daharan, Ngelor, Mangan, Ngrodok (Eating), Kaken (Eating), Suap, Bejijit (Eating), Bekeruak (Eating), Ngerasak (Eating) and Nyangklok (Eating), and in addition to determine the lexical meaning of each word and its function in sentences, especially the meaning of eating in the Sasak language. The lexical meaning of Ngajengan, Daharan, Ngelor, Mangan, Ngrodok (Eating), Kaken (Eating), Suap, Bejijit (Eating), Bekeruak (Eating), Ngerasak (Eating) and Nyangklok (Eating) is doing something to eat, but the difference between these words lies in their usage in sentences. Moreover, word usage depends on the subject and object, and some predicates require a tool to express eating meals or food.

  9. The semantics of Chemical Markup Language (CML) for computational chemistry: CompChem.

    Science.gov (United States)

    Phadungsukanan, Weerapong; Kraft, Markus; Townsend, Joe A; Murray-Rust, Peter

    2012-08-07

    This paper introduces a subdomain chemistry format for storing computational chemistry data called CompChem. It has been developed based on the design, concepts and methodologies of Chemical Markup Language (CML) by adding computational chemistry semantics on top of the CML Schema. The format allows a wide range of ab initio quantum chemistry calculations of individual molecules to be stored. These calculations include, for example, single point energy calculation, molecular geometry optimization, and vibrational frequency analysis. The paper also describes the supporting infrastructure, such as processing software, dictionaries, validation tools and database repositories. In addition, some of the challenges and difficulties in developing common computational chemistry dictionaries are discussed. The uses of CompChem are illustrated by two practical applications.

  10. S3QL: A distributed domain specific language for controlled semantic integration of life sciences data

    Directory of Open Access Journals (Sweden)

    de Lencastre Hermínia

    2011-07-01

    Full Text Available Abstract Background The value and usefulness of data increases when it is explicitly interlinked with related data. This is the core principle of Linked Data. For life sciences researchers, harnessing the power of Linked Data to improve biological discovery is still challenged by a need to keep pace with rapidly evolving domains and requirements for collaboration and control as well as with the reference semantic web ontologies and standards. Knowledge organization systems (KOSs) can provide an abstraction for publishing biological discoveries as Linked Data without complicating transactions with contextual minutia such as provenance and access control. We have previously described the Simple Sloppy Semantic Database (S3DB) as an efficient model for creating knowledge organization systems using Linked Data best practices with explicit distinction between domain and instantiation and support for a permission control mechanism that automatically migrates between the two. In this report we present a domain specific language, the S3DB query language (S3QL), to operate on its underlying core model and facilitate management of Linked Data. Results Reflecting the data driven nature of our approach, S3QL has been implemented as an application programming interface for S3DB systems hosting biomedical data, and its syntax was subsequently generalized beyond the S3DB core model. This achievement is illustrated with the assembly of an S3QL query to manage entities from the Simple Knowledge Organization System. The illustrative use cases include gastrointestinal clinical trials, genomic characterization of cancer by The Cancer Genome Atlas (TCGA) and molecular epidemiology of infectious diseases. Conclusions S3QL was found to provide a convenient mechanism to represent context for interoperation between public and private datasets hosted at biomedical research institutions and linked data formalisms.
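    The kind of triple-pattern query that a Linked Data language such as S3QL operates on can be sketched as follows. The triples and the '?'-variable syntax are illustrative, not actual S3QL:

    ```python
    # Toy triple-pattern matching over Linked-Data-style statements, to
    # illustrate the kind of query a language like S3QL operates on. The
    # triples and '?'-variable convention here are illustrative, not S3QL.
    triples = {
        ("trial42", "studies", "gastric_cancer"),
        ("trial42", "hosted_by", "institute_A"),
        ("sample7", "derived_from", "trial42"),
    }

    def match(pattern):
        """Return variable bindings for a (subject, predicate, object) pattern;
        terms starting with '?' are variables, anything else must match."""
        results = []
        for t in triples:
            binding = {}
            if all(term == part or (term.startswith("?") and
                                    binding.setdefault(term, part) == part)
                   for term, part in zip(pattern, t)):
                results.append(binding)
        return results

    # Which subjects are hosted by institute_A?
    print(match(("?s", "hosted_by", "institute_A")))  # [{'?s': 'trial42'}]
    ```

    S3QL layers provenance and permission control on top of this core pattern-matching model; neither is shown here.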

  11. Dynamics of Semantic and Word-Formation Subsystems of the Russian Language: Historical Dynamics of the Word Family

    Directory of Open Access Journals (Sweden)

    Olga Ivanovna Dmitrieva

    2015-09-01

    Full Text Available The article provides comprehensive justification of the principles and methods of synchronic and diachronic research on word-formation subsystems of the Russian language. The authors also study ways of analyzing the historical dynamics of the word family as the main macro-unit of the word-formation system. The focus of the analysis is the family of words with the stem 'ход-' (meaning 'motion'), whose word-formation is investigated across different periods of the Russian literary language. The significance of motion verbs in forming a language picture of the world determined the character and structure of this word family as one of the biggest in the history of the Russian language. The article depicts the structural and semantic dynamics of the word family 'ход-'. The results of the study show that in the ancient period the prefixed verbal derivatives were formed, which became the apexes of the branched derivational paradigms existing in modern Russian. The Old Russian period of language development is characterized by the appearance of words with connotative meaning (with the suffixes -ishk-, -ichn-), as well as words with possessive semantics (with the suffixes -ev-, -sk-). In this period verbs with the postfix -sya also supplement the analyzed word family. The period of formation of the national Russian language was marked by the loss of a large number of abstract nouns and the appearance of neologisms from some Old Russian abstract nouns. In the modern Russian language the studied family is characterized by the following processes: the appearance of terms, active semantic derivation, the weakening of word-formation variability, the semantic differentiation of duplicate units, the development of a subsystem of words with connotative meanings, and the preservation of derivatives in all functional styles.

  12. Semantic Differences of Definitional skills between Persian Speaking Children with Specific Language Impairment and Normal Language Developing Children

    Directory of Open Access Journals (Sweden)

    Mehri Mohammadi

    2011-07-01

    Full Text Available Objective: Linguistic and metalinguistic knowledge are effective factors in definitional skills. This study investigated definitional skills, in both content and form, in children with specific language impairment. Materials and Method: The participants were 32 children in two groups of 16 SLI and 16 normal children, matched for age, sex and educational level. The SLI group was referred from Learning Difficulties Centers and the Zarei Rehabilitation Center in Tehran, while the control group was selected by randomized sampling from normal primary schools. The stimuli were 14 high-frequency nouns from seven different categories. Reliability was calculated by interjudge agreement and validity was assessed by content. Data were analyzed using an independent t-test. Results: There were significant differences between the mean scores of content and form of the definitional skills in the two groups. The mean and SD scores for the content of word definition were M=45.87, SD=12.22 in the control group and M=33.18, SD=17.60 in the SLI group, out of a possible 70 points (P=0.025). The mean and SD scores for the form of word definition were M=48.87, SD=9.49 in the control group and M=38.18, SD=12.85 in the SLI group, out of 70 points (P=0.012). Conclusion: Based on the results, it was concluded that the language problems of children with SLI may prevent them from forming the semantic representations needed to produce a complete word definition. Although this skill is inadequate in children with SLI, all the definitions given by them were consistent with the categories of content and form of word definition used in this study. Therefore, exact planning and intervention by a speech and language pathologist can be effective for this skill. Linguistic intervention, especially in semantic and grammatical aspects, not only improves the definition of familiar words but might also be useful for the definition of new words, consequently lead to educational and
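    The reported group comparison can be checked against the summary statistics in the abstract: a pooled-variance independent t-test on the "content" scores (assuming equal variances) gives t of about 2.37 with df = 30, consistent with the reported P = 0.025:

    ```python
    # Recomputing the reported independent t-test for the "content" scores
    # from the summary statistics in the abstract (control: M=45.87, SD=12.22;
    # SLI: M=33.18, SD=17.60; n=16 per group), assuming equal variances.
    import math

    def t_from_stats(m1, sd1, n1, m2, sd2, n2):
        """Pooled-variance independent-samples t statistic and its df."""
        pooled_var = ((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2)
        se = math.sqrt(pooled_var * (1 / n1 + 1 / n2))
        return (m1 - m2) / se, n1 + n2 - 2

    t, df = t_from_stats(45.87, 12.22, 16, 33.18, 17.60, 16)
    print(round(t, 2), df)  # 2.37 30
    ```

    Looking up t = 2.37 at df = 30 (two-tailed) gives a p-value just under 0.025, matching the abstract.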

  13. Montague semantics

    NARCIS (Netherlands)

    Janssen, T.M.V.

    2012-01-01

    Montague semantics is a theory of natural language semantics and of its relation with syntax. It was originally developed by the logician Richard Montague (1930-1971) and subsequently modified and extended by linguists, philosophers, and logicians. The most important features of the theory are its

  14. Comparative Study of the Passive Verb in Arabic and Persian Languages from the Perspective of Grammatical and Semantic

    Directory of Open Access Journals (Sweden)

    Mansooreh Zarkoob

    2012-11-01

    Full Text Available Abstract Verb is one of the important categories and main elements of the sentence, and it is sometimes divided into similar types in Arabic and Persian. One of the main types of verb that exists in both languages is the passive verb. Although this appellation is apparently common to both languages, which makes it seem that passive verbs are completely equivalent in them, the passive verb in Persian has been discussed from different aspects compared with Arabic. In this article we therefore seek answers to these questions: Can we find structures other than the passive structure that count as passive in meaning? What kind of relationship is there between the grammatical and semantic structures of the passive verb in both languages? What are the grammatical and semantic differences and similarities of the passive verb in Persian and Arabic? The results of this survey helped reduce students' translation errors. As one finding of this research, not only is an auxiliary verb in both languages investigated as a passive-maker, but there are also some planar verbs in both languages, and voice changes occur in addition to inflectional changes.

  15. Linguistic contributions to speech-on-speech masking for native and non-native listeners: Language familiarity and semantic content

    Science.gov (United States)

    Brouwer, Susanne; Van Engen, Kristin J.; Calandruccio, Lauren; Bradlow, Ann R.

    2012-01-01

    This study examined whether speech-on-speech masking is sensitive to variation in the degree of similarity between the target and the masker speech. Three experiments investigated whether speech-in-speech recognition varies across different background speech languages (English vs Dutch) for both English and Dutch targets, as well as across variation in the semantic content of the background speech (meaningful vs semantically anomalous sentences), and across variation in listener status vis-à-vis the target and masker languages (native, non-native, or unfamiliar). The results showed that the more similar the target speech is to the masker speech (e.g., same vs different language, same vs different levels of semantic content), the greater the interference on speech recognition accuracy. Moreover, the listener's knowledge of the target and the background language modulates the size of the release from masking. These factors had an especially strong effect on masking effectiveness in highly unfavorable listening conditions. Overall this research provided evidence that the degree of target-masker similarity plays a significant role in speech-in-speech recognition. The results also give insight into how listeners assign their resources differently depending on whether they are listening to their first or second language. PMID:22352516

  16. Lexical and semantic ability in groups of children with cochlear implants, language impairment and autism spectrum disorder.

    Science.gov (United States)

    Löfkvist, Ulrika; Almkvist, Ove; Lyxell, Björn; Tallberg, Ing-Mari

    2014-02-01

    Lexical-semantic ability was investigated among children aged 6-9 years with cochlear implants (CI) and compared to clinical groups of children with language impairment (LI) and autism spectrum disorder (ASD), as well as to age-matched children with normal hearing (NH). In addition, the influence of age at implantation on lexical-semantic ability was investigated among children with CI. 97 children participated, divided into four groups: CI (n=34), LI (n=12), ASD (n=12), and NH (n=39). A battery of tests, including picture naming, receptive vocabulary and knowledge of semantic features, was used for assessment. A semantic response analysis of the erroneous responses on the picture-naming test was also performed. The group of children with CI exhibited a naming ability comparable to that of the age-matched children with NH, and they also possessed relevant semantic knowledge of certain words that they were unable to name correctly. Children with CI had a significantly better understanding of words compared to the children with LI and ASD, but a worse understanding than those with NH. The significant differences between groups remained after controlling for age and non-verbal cognitive ability. The children with CI demonstrated lexical-semantic abilities comparable to age-matched children with NH, while children with LI and ASD had a more atypical lexical-semantic profile and smaller expressive and receptive vocabularies. Dissimilar causes of neurodevelopmental processes seemingly affected lexical-semantic abilities in different ways in the clinical groups. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  17. Spoken word recognition in young tone language learners: Age-dependent effects of segmental and suprasegmental variation.

    Science.gov (United States)

    Ma, Weiyi; Zhou, Peng; Singh, Leher; Gao, Liqun

    2017-02-01

    The majority of the world's languages rely on both segmental (vowels, consonants) and suprasegmental (lexical tones) information to contrast the meanings of individual words. However, research on early language development has mostly focused on the acquisition of vowel-consonant languages. Developmental research comparing sensitivity to segmental and suprasegmental features in young tone learners is extremely rare. This study examined 2- and 3-year-old monolingual tone learners' sensitivity to vowels and tones. Experiment 1a tested the influence of vowel and tone variation on novel word learning. Vowel and tone variation hindered word recognition efficiency in both age groups. However, tone variation hindered word recognition accuracy only in 2-year-olds, while 3-year-olds were insensitive to tone variation. Experiment 1b demonstrated that 3-year-olds could use tones to learn new words when additional support was provided, and additionally, that Tone 3 words were exceptionally difficult to learn. Experiment 2 confirmed a similar pattern of results when children were presented with familiar words. This study is the first to show that despite the importance of tones in tone languages, vowels maintain primacy over tones in young children's word recognition and that tone sensitivity in word learning and recognition changes between 2 and 3 years of age. The findings suggest that early lexical processes are more tightly constrained by variation in vowels than by tones. Copyright © 2016 Elsevier B.V. All rights reserved.

  18. A Pilot Study of Telepractice for Teaching Listening and Spoken Language to Mandarin-Speaking Children with Congenital Hearing Loss

    Science.gov (United States)

    Chen, Pei-Hua; Liu, Ting-Wei

    2017-01-01

    Telepractice provides an alternative form of auditory-verbal therapy (eAVT) intervention through videoconferencing; this can be of immense benefit for children with hearing loss, especially those living in rural or remote areas. The effectiveness of eAVT for the language development of Mandarin-speaking preschoolers with hearing loss was…

  19. Spoken language and everyday functioning in 5-year-old children using hearing aids or cochlear implants.

    Science.gov (United States)

    Cupples, Linda; Ching, Teresa Yc; Button, Laura; Seeto, Mark; Zhang, Vicky; Whitfield, Jessica; Gunnourie, Miriam; Martin, Louise; Marnane, Vivienne

    2017-09-12

    This study investigated the factors influencing 5-year language, speech and everyday functioning of children with congenital hearing loss. Standardised tests including PLS-4, PPVT-4 and DEAP were directly administered to children. Parent reports on language (CDI) and everyday functioning (PEACH) were collected. Regression analyses were conducted to examine the influence of a range of demographic variables on outcomes. Participants were 339 children enrolled in the Longitudinal Outcomes of Children with Hearing Impairment (LOCHI) study. Children's average receptive and expressive language scores were approximately 1 SD below the mean of typically developing children, and scores on speech production and everyday functioning were more than 1 SD below. Regression models accounted for between 23% and 70% of variance in scores across different tests. Earlier CI switch-on and higher non-verbal ability were associated with better outcomes in most domains. Earlier HA fitting and use of oral communication were associated with better outcomes on directly administered language assessments. Severity of hearing loss and maternal education influenced outcomes of children with HAs. The presence of additional disabilities affected outcomes of children with CIs. The findings provide strong evidence for the benefits of early HA fitting and early CI for improving children's outcomes.

  20. Morphological Cues for Lexical Semantics

    National Research Council Canada - National Science Library

    Light, Marc

    1996-01-01

    Most natural language processing tasks require lexical semantic information such as verbal argument structure and selectional restrictions, corresponding nominal semantic class, verbal aspectual class...

  1. Semantic and translation priming from a first language to a second and back: Making sense of the findings.

    Science.gov (United States)

    Schoonbaert, Sofie; Duyck, Wouter; Brysbaert, Marc; Hartsuiker, Robert J

    2009-07-01

    The present study investigated cross-language priming effects with unique noncognate translation pairs. Unbalanced Dutch (first language [L1])-English (second language [L2]) bilinguals performed a lexical decision task in a masked priming paradigm. The results of two experiments showed significant translation priming from L1 to L2 (meisje-girl) and from L2 to L1 (girl-meisje), using two different stimulus onset asynchronies (SOAs) (250 and 100 msec). Although translation priming from L1 to L2 was significantly stronger than priming from L2 to L1, the latter was significant as well. Two further experiments with the same word targets showed significant cross-language semantic priming in both directions (jongen [boy]-girl; boy-meisje [girl]) and for both SOAs. These data suggest that L1 and L2 are represented by means of a similar lexico-semantic architecture in which L2 words are also able to rapidly activate semantic information, although to a lesser extent than L1 words are able to. This is consistent with models assuming quantitative rather than qualitative differences between L1 and L2 representations.

  2. Lexical-Semantic Organization in Bilingually Developing Deaf Children with ASL-Dominant Language Exposure: Evidence from a Repeated Meaning Association Task

    Science.gov (United States)

    Mann, Wolfgang; Sheng, Li; Morgan, Gary

    2016-01-01

    This study compared the lexical-semantic organization skills of bilingually developing deaf children in American Sign Language (ASL) and English with those of a monolingual hearing group. A repeated meaning-association paradigm was used to assess retrieval of semantic relations in deaf 6-10-year-olds exposed to ASL from birth by their deaf…

  3. The use of web ontology languages and other semantic web tools in drug discovery.

    Science.gov (United States)

    Chen, Huajun; Xie, Guotong

    2010-05-01

    To optimize drug development processes, pharmaceutical companies require principled approaches to integrate disparate data on a unified infrastructure, such as the web. The semantic web, built on web technology, provides a common, open framework capable of harmonizing diversified resources to enable networked and collaborative drug discovery. We survey the state of the art in utilizing web ontologies and other semantic web technologies to interlink both data and people to support integrated drug discovery across domains and multiple disciplines. In particular, the survey covers three major application categories: i) semantic integration and open data linking; ii) semantic web services and scientific collaboration; and iii) semantic data mining and integrative network analysis. The reader will gain: i) basic knowledge of semantic web technologies; ii) an overview of the web ontology landscape for drug discovery; and iii) a basic understanding of the values and benefits of utilizing web ontologies in drug discovery. The key takeaways are: i) the semantic web enables a network effect for linking open data for integrated drug discovery; ii) semantic web service technology can support instant ad hoc collaboration to improve pipeline productivity; and iii) the semantic web encourages publishing data in a semantic form such as resource description framework attributes, and thus helps move away from a reliance on pure textual content analysis toward more efficient semantic data mining.
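    The survey's central idea, linking heterogeneous drug-discovery datasets as subject-predicate-object triples so that queries can traverse the links, can be illustrated with a minimal sketch. This toy example uses plain Python tuples rather than a real RDF library, and all identifiers (ex:aspirin, ex:COX-1, ex:PTGS1, the predicates) are hypothetical examples, not drawn from any actual ontology.

```python
# Toy triple store illustrating RDF-style open data linking.
# All names (ex:aspirin, ex:inhibits, ...) are hypothetical examples.
triples = set()

def add(subject, predicate, obj):
    triples.add((subject, predicate, obj))

# Three "datasets" contributed independently, linked by shared identifiers.
add("ex:aspirin", "ex:inhibits", "ex:COX-1")    # pharmacology dataset
add("ex:aspirin", "ex:hasFormula", "C9H8O4")    # chemistry dataset
add("ex:COX-1", "ex:encodedBy", "ex:PTGS1")     # genomics dataset

def query(subject=None, predicate=None, obj=None):
    """Match triples against an (s, p, o) pattern; None is a wildcard."""
    return [t for t in triples
            if (subject is None or t[0] == subject)
            and (predicate is None or t[1] == predicate)
            and (obj is None or t[2] == obj)]

# The "network effect": a query traverses links across datasets,
# from a compound to its targets to the genes encoding them.
targets = [o for _, _, o in query("ex:aspirin", "ex:inhibits")]
genes = [o for t in targets for _, _, o in query(t, "ex:encodedBy")]
print(genes)  # ['ex:PTGS1']
```

    A real deployment would use IRIs, a standard RDF store and SPARQL rather than this in-memory set, but the integration pattern is the same: shared identifiers make independently published facts joinable.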

  4. Changes in N400 Topography Following Intensive Speech Language Therapy for Individuals with Aphasia

    Science.gov (United States)

    Wilson, K. Ryan; O'Rourke, Heather; Wozniak, Linda A.; Kostopoulos, Ellina; Marchand, Yannick; Newman, Aaron J.

    2012-01-01

    Our goal was to characterize the effects of intensive aphasia therapy on the N400, an electrophysiological index of lexical-semantic processing. Immediately before and after 4 weeks of intensive speech-language therapy, people with aphasia performed a task in which they had to determine whether spoken words were a "match" or a "mismatch" to…

  5. Czech spoken in Bohemia and Moravia

    NARCIS (Netherlands)

    Šimáčková, Š.; Podlipský, V.J.; Chládková, K.

    2012-01-01

    As a western Slavic language of the Indo-European family, Czech is closest to Slovak and Polish. It is spoken as a native language by nearly 10 million people in the Czech Republic (Czech Statistical Office n.d.). About two million people living abroad, mostly in the USA, Canada, Austria, Germany,

  6. DEMONIC programming: a computational language for single-particle equilibrium thermodynamics, and its formal semantics.

    Directory of Open Access Journals (Sweden)

    Samson Abramsky

    2015-11-01

    Full Text Available Maxwell's Demon, 'a being whose faculties are so sharpened that he can follow every molecule in its course', has been the centre of much debate about its abilities to violate the second law of thermodynamics. Landauer's hypothesis, that the Demon must erase its memory and incur a thermodynamic cost, has become the standard response to Maxwell's dilemma, and its implications for the thermodynamics of computation reach into many areas of quantum and classical computing. It remains, however, still a hypothesis. Debate has often centred around simple toy models of a single particle in a box. Despite their simplicity, the ability of these systems to accurately represent thermodynamics (specifically, to satisfy the second law), and whether or not they display Landauer Erasure, has been a matter of ongoing argument. The recent Norton-Ladyman controversy is one such example. In this paper we introduce a programming language to describe these simple thermodynamic processes, and give a formal operational semantics and program logic as a basis for formal reasoning about thermodynamic systems. We formalise the basic single-particle operations as statements in the language, and then show that the second law must be satisfied by any composition of these basic operations. This is done by finding a computational invariant of the system. We show, furthermore, that this invariant requires an erasure cost to exist within the system, equal to kT ln 2 for a bit of information: Landauer Erasure becomes a theorem of the formal system. The Norton-Ladyman controversy can therefore be resolved in a rigorous fashion, and moreover the formalism we introduce gives a set of reasoning tools for further analysis of Landauer erasure, which are provably consistent with the second law of thermodynamics.
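    The kT ln 2 erasure cost that the paper derives as a theorem is easy to evaluate numerically. A quick sketch, using the exact SI value of the Boltzmann constant and an assumed room temperature of 300 K:

```python
import math

# Landauer bound: minimum heat dissipated when erasing one bit
# of information at temperature T, given by E = k_B * T * ln 2.
k_B = 1.380649e-23   # Boltzmann constant, J/K (exact, 2019 SI)
T = 300.0            # temperature in kelvin (room temperature, assumed)

erasure_cost = k_B * T * math.log(2)
print(f"Landauer bound at {T} K: {erasure_cost:.3e} J")
```

    At room temperature this comes to roughly 3e-21 J per bit, which is why the cost is negligible for today's electronics yet fundamental for single-particle models like the one the paper formalises.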

  7. LANGUAGE POLICIES PURSUED IN THE AXIS OF OTHERING AND IN THE PROCESS OF CONVERTING SPOKEN LANGUAGE OF TURKS LIVING IN RUSSIA INTO THEIR WRITTEN LANGUAGE / RUSYA'DA YASAYAN TÜRKLERİN KONUSMA DİLLERİNİN YAZI DİLİNE DÖNÜSTÜRÜLME SÜRECİ VE ÖTEKİLESTİRME EKSENİNDE İZLENEN DİL POLİTİKALARI

    Directory of Open Access Journals (Sweden)

    Süleyman Kaan YALÇIN (M.A.H.

    2008-12-01

    Full Text Available Language is an object realized in two ways: spoken language and written language. Each language can have the characteristics of a spoken language; however, not every language can have the characteristics of a written language, since there are some requirements for a language to be deemed a written language. These requirements are selection, coding, standardization and becoming widespread. It is necessary for a language to meet these requirements, in either a natural or an artificial way, to be deemed a written language (standard language). Turkish, which developed as a single written language until the 13th century, was divided into West Turkish and North-East Turkish, each meeting the requirements of a written language in a natural way. Following this separation, and through a natural process, it showed some internal differences; however, the policy of converting the spoken language of each Turkish clan into its own written language (a policy pursued by Russia in a planned way) turned Turkish, which entered the 20th century as a few written languages, into 20 different written languages. The implementation of discriminatory language policies suggested to the Russian government by missionaries such as Slinky and Ostramov, the forcible imposition of a Cyrillic alphabet full of different and unnecessary signs on each Turkish clan, and the othering activities of the Soviet boarding schools all had considerable effects on this process. This study aims to explain that the conversion of the spoken languages of Turkish societies in Russia into written languages did not result from a natural process; to trace the historical development of Turkish, which was shaped into 20 separate written languages only because of the pressure exerted by political will; and to show how Russia subjected the concept of language, the memory of a nation, to an artificial process.

  8. Structural-Semantic Peculiarities of Derogatory Marked Ethnonyms of the Canadian, Australian and New Zealand English Language

    Directory of Open Access Journals (Sweden)

    Tsebrovskaya Tatyana Alexandrovna

    2016-06-01

    Full Text Available The article studies the word formation of derogatory marked ethnonyms (DME) in Canadian, Australian and New Zealand English. DME are classified according to the method of word formation, the type of semantic transfer and deliberate phonetic distortion. The analyzed units were selected from lexicographical sources such as the online dictionary of colloquial vocabulary Urban Dictionary, the online Oxford English Dictionary, Merriam-Webster Online: Dictionary and Thesaurus, ABBYY Lingvo, The Free Dictionary, Dictionary.com, the electronic databases The Racial slur Database and Hatebase, lists of ethnonyms from the online resources canadaka.com and fact-index.com, and other sources of factual material. The relevance of the article lies in the lack of scientific study of the ways DME are formed, particularly the units of Canadian, Australian and New Zealand English; establishing criteria for their description and division into groups is therefore important. The aim is to justify the linguistic phenomenon of DME by determining their structural and semantic characteristics in Canadian, Australian and New Zealand English. Achieving this aim requires solving the following problems: 1) to identify the structural and semantic parameters of formation of DME; 2) to improve the structural and semantic classification of A.I. Hryshchenko for Canadian, Australian and New Zealand English.

  9. Reviewing the design of DAML+OIL : An ontology language for the Semantic Web

    NARCIS (Netherlands)

    Horrocks, Ian; Patel-Schneider, Peter F.; Van Harmelen, Frank

    2002-01-01

    In the current "Syntactic Web", uninterpreted syntactic constructs are given meaning only by private off-line agreements that are inaccessible to computers. In the Semantic Web vision, this is replaced by a web where both data and its semantic definition are accessible and manipulable by computer

  10. From data to analysis: linking NWChem and Avogadro with the syntax and semantics of Chemical Markup Language

    Energy Technology Data Exchange (ETDEWEB)

    De Jong, Wibe A.; Walker, Andrew M.; Hanwell, Marcus D.

    2013-05-24

    Background: Multidisciplinary integrated research requires the ability to couple the diverse sets of data obtained from a range of complex experiments and computer simulations. Integrating data requires semantically rich information. In this paper the generation of semantically rich data from the NWChem computational chemistry software is discussed within the Chemical Markup Language (CML) framework. Results: The NWChem computational chemistry software has been modified and coupled to the FoX library to write CML compliant XML data files. The FoX library was expanded to represent the lexical input files used by the computational chemistry software. Conclusions: The production of CML compliant XML files for the computational chemistry software NWChem can be relatively easily accomplished using the FoX library. A unified computational chemistry or CompChem convention and dictionary needs to be developed through a community-based effort. The long-term goal is to enable a researcher to do Google-style chemistry and physics searches.
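    As a rough illustration of what "CML compliant XML" looks like, the sketch below emits a minimal molecule element using Python's standard library. The namespace URI is CML's published schema namespace, but the molecule, coordinates, and attribute choices here are a simplified, assumed example; real NWChem/FoX output is far richer and follows the CompChem convention discussed in the abstract.

```python
import xml.etree.ElementTree as ET

# CML's published schema namespace; element/attribute usage below is a
# simplified illustration, not a validated CompChem document.
CML_NS = "http://www.xml-cml.org/schema"
ET.register_namespace("cml", CML_NS)

# Minimal CML-style molecule: a water molecule with illustrative coordinates.
molecule = ET.Element(f"{{{CML_NS}}}molecule", id="m1")
atoms = ET.SubElement(molecule, f"{{{CML_NS}}}atomArray")
for atom_id, elem, x, y, z in [
    ("a1", "O", 0.000, 0.000, 0.117),
    ("a2", "H", 0.000, 0.757, -0.469),
    ("a3", "H", 0.000, -0.757, -0.469),
]:
    ET.SubElement(atoms, f"{{{CML_NS}}}atom", id=atom_id,
                  elementType=elem, x3=str(x), y3=str(y), z3=str(z))

xml_text = ET.tostring(molecule, encoding="unicode")
print(xml_text)
```

    The value of such markup is that every element and attribute carries a namespace-qualified, dictionary-defined meaning, which is what lets downstream tools like Avogadro read the data without format-specific parsers.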

  11. Orthographic Facilitation in Chinese Spoken Word Recognition: An ERP Study

    Science.gov (United States)

    Zou, Lijuan; Desroches, Amy S.; Liu, Youyi; Xia, Zhichao; Shu, Hua

    2012-01-01

    Orthographic influences in spoken word recognition have been previously examined in alphabetic languages. However, it is unknown whether orthographic information affects spoken word recognition in Chinese, which has a clean dissociation between orthography (O) and phonology (P). The present study investigated orthographic effects using event…

  12. Novel Spoken Word Learning in Adults with Developmental Dyslexia

    Science.gov (United States)

    Conner, Peggy S.

    2013-01-01

    A high percentage of individuals with dyslexia struggle to learn unfamiliar spoken words, creating a significant obstacle to foreign language learning after early childhood. The origin of spoken-word learning difficulties in this population, generally thought to be related to the underlying literacy deficit, is not well defined (e.g., Di Betta…

  13. [Language observation protocol for teachers in pre-school education. Effectiveness in the detection of semantic and morphosyntactic difficulties].

    Science.gov (United States)

    Ygual-Fernández, Amparo; Cervera-Merida, José F; Baixauli-Fortea, Inmaculada; Meliá-De Alba, Amanda

    2011-03-01

    A number of studies have shown that teachers are capable of recognising pupils with language difficulties if they have suitable guidelines or guidance. To determine the effectiveness of an observation-based protocol for pre-school education teachers in the detection of phonetic-phonological, semantic and morphosyntactic difficulties. The sample consisted of 175 children from public and state-subsidised schools in Valencia and its surrounding province, together with their teachers. The children were aged between 3 years and 6 months and 5 years and 11 months. The protocol that was used asks for information about pronunciation skills (intelligibility, articulation), conversational skills (with adults, with peers), literal understanding of sentences, grammatical precision, expression through discourse, lexical knowledge and semantics. There was a significant correlation between the teachers' observations and the criterion scores on intelligibility, literal understanding of sentences, grammatical expression and lexical richness, but not in the observations concerning articulation and verbal reasoning, which were more difficult for the teachers to judge. In general, the observation protocol proved to be effective, it guided the teachers in their observations and it asked them suitable questions about linguistic data that were relevant to the determination of difficulties in language development. The use of this protocol can be an effective strategy for collecting information for use by speech therapists and school psychologists in the early detection of children with language development problems.

  14. Use of Automated Scoring in Spoken Language Assessments for Test Takers with Speech Impairments. Research Report. ETS RR-17-42

    Science.gov (United States)

    Loukina, Anastassia; Buzick, Heather

    2017-01-01

    This study is an evaluation of the performance of automated speech scoring for speakers with documented or suspected speech impairments. Given that the use of automated scoring of open-ended spoken responses is relatively nascent and there is little research to date that includes test takers with disabilities, this small exploratory study focuses…

  15. Towards Universal Semantic Tagging

    NARCIS (Netherlands)

    Abzianidze, Lasha; Bos, Johan

    2017-01-01

    The paper proposes the task of universal semantic tagging---tagging word tokens with language-neutral, semantically informative tags. We argue that the task, with its independent nature, contributes to better semantic analysis for wide-coverage multilingual text. We present the initial version of

  16. Ontology Language to Support Description of Experiment Control System Semantics, Collaborative Knowledge-Base Design and Ontology Reuse

    International Nuclear Information System (INIS)

    Gyurjyan, Vardan; Abbott, D.; Heyes, G.; Jastrzembski, E.; Moffit, B.; Timmer, C.; Wolin, E.

    2009-01-01

    In this paper we discuss the control domain specific ontology that is built on top of the domain-neutral Resource Definition Framework (RDF). Specifically, we will discuss the relevant set of ontology concepts along with the relationships among them in order to describe experiment control components and generic event-based state machines. Control Oriented Ontology Language (COOL) is a meta-data modeling language that provides generic means for representation of physics experiment control processes and components, and their relationships, rules and axioms. It provides a semantic reference frame that is useful for automating the communication of information for configuration, deployment and operation. COOL has been successfully used to develop a complete and dynamic knowledge-base for experiment control systems, developed using the AFECS framework.
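    COOL itself is an RDF-based ontology language, but the "generic event-based state machines" it describes for control components can be sketched with a plain transition table. Everything below, including the state and event names, is a hypothetical illustration and not the AFECS or COOL API.

```python
# Minimal event-driven finite state machine for an experiment-control
# component. States, events, and transitions are illustrative only.
TRANSITIONS = {
    ("booted",     "configure"): "configured",
    ("configured", "download"):  "downloaded",
    ("downloaded", "go"):        "running",
    ("running",    "end"):       "configured",
}

class Component:
    def __init__(self, name, state="booted"):
        self.name = name
        self.state = state

    def handle(self, event):
        """Apply an event; unknown (state, event) pairs are rejected."""
        key = (self.state, event)
        if key not in TRANSITIONS:
            raise ValueError(f"{self.name}: no transition for {key}")
        self.state = TRANSITIONS[key]
        return self.state

daq = Component("daq1")
for event in ["configure", "download", "go"]:
    daq.handle(event)
print(daq.state)  # running
```

    An ontology like COOL captures the same information declaratively, as relationships, rules and axioms over components and transitions, so that configuration and deployment tools can reason about the machine rather than hard-coding it.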

  17. Spoken Grammar for Chinese Learners

    Institute of Scientific and Technical Information of China (English)

    徐晓敏

    2013-01-01

    Currently, the concept of spoken grammar has been mentioned among Chinese teachers. However, teachers in China still have a vague idea of spoken grammar. Therefore this dissertation examines what spoken grammar is and argues that native speakers' model of spoken grammar needs to be highlighted in classroom teaching.

  18. Forms of encoded pragmatic meaning: semantic prosody. A lexicographic perspective.

    Directory of Open Access Journals (Sweden)

    Mojca Šorli

    2014-03-01

    Full Text Available Abstract – The present paper focuses on ways in which the pragmatic (functional) meaning that arises from various contextual features, known in corpus linguistics as semantic prosody (Sinclair 1996, 2004; Louw 1993, etc.), can become an integral part of lexicographic descriptions. This is especially important for the treatment of phraseology and idiomatics. The workings of semantic prosody are a good example of the ways pragmatic meaning exploits linguistic means to be codified in the text. We thus investigate the meaning that can only be studied in context, as it is completely dependent on collocation, i.e., syntagmatic relations, and therefore cannot be attributed solely to a concrete word form. Corpus analysis has yielded significant results in areas such as the lexicographic treatment of semantic prosody. We believe that in order to improve teaching pragmatics in all its complexity, it is necessary to recognise and assess various aspects of pragmatic meaning both in written and spoken language. Second/foreign language teaching/learning in particular has been strongly dependent on the inclusion of relevant information in dictionaries, in which, traditionally, pragmatic aspects of meaning have been largely neglected. Language technologies have enabled us both to study the subtleties of pragmatic meaning and to design accurate and more user-friendly (pedagogical) dictionaries. We will attempt to demonstrate the value of explicit description of functional pragmatic meaning, i.e. semantic prosody, as implemented in the Slovene Lexical Database (2008-2012). A brief overview of the theoretical background is first provided, after which we describe the definition strategies employed to include pragmatics, as well as presenting a case study and arguing that explicating semantic prosody is crucial in developing pragmatic competence in (young) foreign language learners. Keywords: semantic prosody; pragmatics; lexicographic description; dictionary; lexical

  19. The tug of war between phonological, semantic and shape information in language-mediated visual search

    NARCIS (Netherlands)

    Hüttig, F.; McQueen, J.M.

    2007-01-01

    Experiments 1 and 2 examined the time-course of retrieval of phonological, visual-shape and semantic knowledge as Dutch participants listened to sentences and looked at displays of four pictures. Given a sentence with beker, 'beaker' for example, the display contained phonological (a beaver, bever),

  20. The Tug of War between Phonological, Semantic and Shape Information in Language-Mediated Visual Search

    Science.gov (United States)

    Huettig, Falk; McQueen, James M.

    2007-01-01

    Experiments 1 and 2 examined the time-course of retrieval of phonological, visual-shape and semantic knowledge as Dutch participants listened to sentences and looked at displays of four pictures. Given a sentence with "beker," "beaker," for example, the display contained phonological (a beaver, "bever"), shape (a…

  1. Une Progression dans la Strategie Pedagogique pour assurer la Construction de Langage Oral à l'Ecole Maternelle [A Progression in Teaching Strategies to Ensure Oral Language Building in Nursery School].

    Science.gov (United States)

    Durand, C.

    1997-01-01

    Summarizes progressions between 2 and 6 years of age in children's power of concentration, ability to express ideas, build logical relationships, structure spoken words, and play with the semantic, phonetic, syntactical, and morphological aspects of oral language. Notes that the progression depends on the educator's interaction with the child.…

  2. Language production in a shared task: Cumulative Semantic Interference from self- and other-produced context words.

    Science.gov (United States)

    Hoedemaker, Renske S; Ernst, Jessica; Meyer, Antje S; Belke, Eva

    2017-01-01

    This study assessed the effects of semantic context in the form of self-produced and other-produced words on subsequent language production. Pairs of participants performed a joint picture naming task, taking turns while naming a continuous series of pictures. In the single-speaker version of this paradigm, naming latencies have been found to increase for successive presentations of exemplars from the same category, a phenomenon known as Cumulative Semantic Interference (CSI). As expected, the joint-naming task showed a within-speaker CSI effect, such that naming latencies increased as a function of the number of category exemplars named previously by the participant (self-produced items). Crucially, we also observed an across-speaker CSI effect, such that naming latencies slowed as a function of the number of category members named by the participant's task partner (other-produced items). The magnitude of the across-speaker CSI effect did not vary as a function of whether or not the listening participant could see the pictures their partner was naming. The observation of across-speaker CSI suggests that the effect originates at the conceptual level of the language system, as proposed by Belke's (2013) Conceptual Accumulation account. Whereas self-produced and other-produced words both resulted in a CSI effect on naming latencies, post-experiment free recall rates were higher for self-produced than other-produced items. Together, these results suggest that both speaking and listening result in implicit learning at the conceptual level of the language system but that these effects are independent of explicit learning as indicated by item recall. Copyright © 2016 Elsevier B.V. All rights reserved.
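    Both the within- and across-speaker CSI effects described above amount to naming latencies growing with the number of category members named so far, regardless of who produced them, which is what a conceptual-level account predicts. A toy sketch of that shared counter (the baseline latency and per-exemplar slope are invented values, not the paper's estimates):

```python
from collections import defaultdict

BASE_LATENCY_MS = 700   # hypothetical baseline naming latency
CSI_SLOPE_MS = 25       # hypothetical slowdown per prior category member

def simulate_joint_naming(trials):
    """trials: list of (speaker, semantic_category) in naming order.
    Returns predicted latencies under a conceptual-level CSI account:
    the count of prior exemplars is shared across both speakers."""
    named_so_far = defaultdict(int)  # category -> exemplars named by anyone
    latencies = []
    for speaker, category in trials:
        latencies.append(BASE_LATENCY_MS + CSI_SLOPE_MS * named_so_far[category])
        named_so_far[category] += 1
    return latencies

# A's second "animal" naming is slowed by B's intervening exemplar too:
trials = [("A", "animal"), ("B", "animal"), ("A", "tool"), ("A", "animal")]
print(simulate_joint_naming(trials))  # [700, 725, 700, 750]
```

    A purely lexical account would instead keep one counter per speaker, predicting no across-speaker slowdown; the study's data favour the shared-counter pattern sketched here.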

  3. Language-mediated visual orienting behavior in low and high literates

    Directory of Open Access Journals (Sweden)

    Falk Huettig

    2011-10-01

    Full Text Available The influence of formal literacy on spoken language-mediated visual orienting was investigated by using a simple look-and-listen task (cf. Huettig & Altmann, 2005) which resembles everyday behavior. In Experiment 1, high and low literates listened to spoken sentences containing a target word (e.g., 'magar', crocodile) while at the same time looking at a visual display of four objects (a phonological competitor of the target word, e.g., 'matar', peas; a semantic competitor, e.g., 'kachuwa', turtle; and two unrelated distractors). In Experiment 2 the semantic competitor was replaced with another unrelated distractor. Both groups of participants shifted their eye gaze to the semantic competitors (Experiment 1). In both experiments high literates shifted their eye gaze towards phonological competitors as soon as phonological information became available and moved their eyes away as soon as the acoustic information mismatched. Low literates, in contrast, only used phonological information when semantic matches between spoken word and visual referent were impossible (Experiment 2), but in contrast to high literates these phonologically mediated shifts in eye gaze were not closely time-locked to the speech input. We conclude that in high literates language-mediated shifts in overt attention are co-determined by the type of information in the visual environment, the timing of cascaded processing in the word- and object-recognition systems, and the temporal unfolding of the spoken language. Our findings indicate that low literates exhibit a similar cognitive behavior but, instead of participating in a tug-of-war among multiple types of cognitive representations, word-object mapping is achieved primarily at the semantic level. If forced, for instance by a situation in which semantic matches are not present (Experiment 2), low literates may on occasion have to rely on phonological information but do so in a much less proficient manner than their highly literate

  4. Effectiveness of Semantic Therapy for Word-Finding Difficulties in Pupils with Persistent Language Impairments: A Randomized Control Trial

    Science.gov (United States)

    Ebbels, Susan H.; Nicoll, Hilary; Clark, Becky; Eachus, Beth; Gallagher, Aoife L.; Horniman, Karen; Jennings, Mary; McEvoy, Kate; Nimmo, Liz; Turner, Gail

    2012-01-01

    Background: Word-finding difficulties (WFDs) in children have been hypothesized to be caused at least partly by poor semantic knowledge. Therefore, improving semantic knowledge should decrease word-finding errors. Previous studies of semantic therapy for WFDs are inconclusive. Aims: To investigate the effectiveness of semantic therapy for…

  5. From data to analysis: linking NWChem and Avogadro with the syntax and semantics of Chemical Markup Language.

    Science.gov (United States)

    de Jong, Wibe A; Walker, Andrew M; Hanwell, Marcus D

    2013-05-24

    Multidisciplinary integrated research requires the ability to couple the diverse sets of data obtained from a range of complex experiments and computer simulations. Integrating data requires semantically rich information. In this paper an end-to-end use of semantically rich data in computational chemistry is demonstrated utilizing the Chemical Markup Language (CML) framework. Semantically rich data is generated by the NWChem computational chemistry software with the FoX library and utilized by the Avogadro molecular editor for analysis and visualization. The NWChem computational chemistry software has been modified and coupled to the FoX library to write CML compliant XML data files. The FoX library was expanded to represent the lexical input files and molecular orbitals used by the computational chemistry software. Draft dictionary entries and a format for molecular orbitals within CML CompChem were developed. The Avogadro application was extended to read in CML data, and display molecular geometry and electronic structure in the GUI allowing for an end-to-end solution where Avogadro can create input structures, generate input files, NWChem can run the calculation and Avogadro can then read in and analyse the CML output produced. The developments outlined in this paper will be made available in future releases of NWChem, FoX, and Avogadro. The production of CML compliant XML files for computational chemistry software such as NWChem can be accomplished relatively easily using the FoX library. The CML data can be read in by a newly developed reader in Avogadro and analysed or visualized in various ways. A community-based effort is needed to further develop the CML CompChem convention and dictionary. This will enable the long-term goal of allowing a researcher to run simple "Google-style" searches of chemistry and physics and have the results of computational calculations returned in a comprehensible form alongside articles from the published literature.

  6. Complications of Translating the Meanings of the Holy Qur'an at Word Level in the English Language in Relation to Frame Semantic Theory

    Science.gov (United States)

    Balla, Asjad Ahmed Saeed; Siddiek, Ahmed Gumaa

    2017-01-01

    The present study is an attempt to investigate the problems resulting from the lexical choice in the translation of the Holy Qur'an to emphasize the importance of the theory of "Frame Semantics" in the translation process. It has been conducted with the aim of measuring the difference in concept between the two languages Arabic and…

  7. Natural Language Processing (NLP), Machine Learning (ML), and Semantics in Polar Science

    Science.gov (United States)

    Duerr, R.; Ramdeen, S.

    2017-12-01

    One of the interesting features of Polar Science is that it historically has been extremely interdisciplinary, encompassing all of the physical and social sciences. Given the ubiquity of specialized terminology in each field, enabling researchers to find, understand, and use all of the heterogeneous data needed for polar research continues to be a bottleneck. Within the informatics community, semantics has been broadly accepted as a solution to these problems, yet progress in developing reusable semantic resources has been slow. The NSF-funded ClearEarth project has been adapting the methods and tools of other communities, such as biomedicine, to the Earth sciences with the goal of enhancing progress and the rate at which the needed semantic resources can be created. One of the outcomes of the project has been a better understanding of the differences in the way linguists and physical scientists understand disciplinary text. One example of these differences is the tendency for each discipline, and often for disciplinary subfields, to expend effort in creating discipline-specific glossaries in which individual terms often comprise more than one word (e.g., first-year sea ice). Often each term in a glossary is imbued with substantial contextual or physical meaning: meanings which are rarely explicitly called out within disciplinary texts, which are therefore not immediately accessible to those outside that discipline or subfield, and which can often be represented semantically. Here we show how recognition of these differences and the use of glossaries can speed up the annotation processes endemic to NLP and enable inter-community recognition and possible reconciliation of terminology differences. A number of processes and tools will be described, as will progress towards semi-automated generation of ontology structures.
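    One concrete way such glossaries can speed up annotation is to pre-mark known multiword terms in text before human annotators see it. The minimal longest-match sketch below illustrates the idea; the glossary entries echo the abstract's "first-year sea ice" example, but the matcher itself is an assumption, not ClearEarth's actual pipeline.

```python
def find_glossary_terms(text, glossary):
    """Greedy longest-match of glossary terms over a lowercase token stream.
    Returns (start_token, end_token, term) spans; longer terms win, so
    "first-year sea ice" is preferred over its substring "sea ice"."""
    tokens = text.lower().split()
    max_len = max(len(term.split()) for term in glossary)
    spans, i = [], 0
    while i < len(tokens):
        # Try the longest candidate window first, then shrink.
        for n in range(min(max_len, len(tokens) - i), 0, -1):
            candidate = " ".join(tokens[i:i + n])
            if candidate in glossary:
                spans.append((i, i + n, candidate))
                i += n
                break
        else:
            i += 1
    return spans

glossary = {"first-year sea ice", "sea ice", "permafrost"}
text = "Melt ponds form on first-year sea ice above degrading permafrost"
print(find_glossary_terms(text, glossary))
```

    Even this naive matcher shows why glossaries help: the multiword term is treated as one semantic unit rather than three unrelated tokens, which is exactly the disciplinary knowledge an outside annotator would otherwise lack.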

  8. Single-Word Predictions of Upcoming Language During Comprehension: Evidence from the Cumulative Semantic Interference Task

    Science.gov (United States)

    Kleinman, Daniel; Runnqvist, Elin; Ferreira, Victor S.

    2015-01-01

    Comprehenders predict upcoming speech and text on the basis of linguistic input. How many predictions do comprehenders make for an upcoming word? If a listener strongly expects to hear the word “sock”, is the word “shirt” partially expected as well, is it actively inhibited, or is it ignored? The present research addressed these questions by measuring the “downstream” effects of prediction on the processing of subsequently presented stimuli using the cumulative semantic interference paradigm. In three experiments, subjects named pictures (sock) that were presented either in isolation or after strongly constraining sentence frames (“After doing his laundry, Mark always seemed to be missing one…”). Naming sock slowed the subsequent naming of the picture shirt – the standard cumulative semantic interference effect. However, although picture naming was much faster after sentence frames, the interference effect was not modulated by the context (bare vs. sentence) in which either picture was presented. According to the only model of cumulative semantic interference that can account for such a pattern of data, this indicates that comprehenders pre-activated and maintained the pre-activation of best sentence completions (sock) but did not maintain the pre-activation of less likely completions (shirt). Thus, comprehenders predicted only the most probable completion for each sentence. PMID:25917550

  9. Mapping lexical-semantic networks and determining hemispheric language dominance: Do task design, sex, age, and language performance make a difference?

    Science.gov (United States)

    Chang, Yu-Hsuan A; Javadi, Sogol S; Bahrami, Naeim; Uttarwar, Vedang S; Reyes, Anny; McDonald, Carrie R

    2018-04-01

    Blocked and event-related fMRI designs are both commonly used to localize language networks and determine hemispheric dominance in research and clinical settings. We compared activation profiles on a semantic monitoring task using one of the two designs in a total of 43 healthy individuals to determine whether task design or subject-specific factors (i.e., age, sex, or language performance) influence activation patterns. We found high concordance between the two designs within core language regions, including the inferior frontal, posterior temporal, and basal temporal regions. However, differences emerged within inferior parietal cortex. Subject-specific factors did not influence activation patterns, nor did they interact with task design. These results suggest that despite high concordance within perisylvian regions that are robust to subject-specific factors, methodological differences between blocked and event-related designs may contribute to parietal activations. These findings provide important information for researchers incorporating fMRI results into meta-analytic studies, as well as for clinicians using fMRI to guide pre-surgical planning. Copyright © 2018 Elsevier Inc. All rights reserved.

  10. Automatic Item Generation via Frame Semantics: Natural Language Generation of Math Word Problems.

    Science.gov (United States)

    Deane, Paul; Sheehan, Kathleen

    This paper is an exploration of the conceptual issues that have arisen in the course of building a natural language generation (NLG) system for automatic test item generation. While natural language processing techniques are applicable to general verbal items, mathematics word problems are particularly tractable targets for natural language…
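The record above concerns natural language generation of math word problems from semantic frames. A hedged sketch of the general template-based idea (frame fields and template are invented for illustration, not taken from the paper):

```python
def generate_item(frame):
    """Realize a simple subtraction frame (agent, start, given, obj) as a word problem."""
    template = ("{agent} had {start} {obj}. {agent} gave away {given} {obj}. "
                "How many {obj} does {agent} have left?")
    answer = frame["start"] - frame["given"]
    item = template.format(agent=frame["agent"], start=frame["start"],
                           given=frame["given"], obj=frame["obj"])
    return item, answer

item, answer = generate_item({"agent": "Maria", "start": 12, "given": 5, "obj": "apples"})
print(item)
print(answer)  # 7
```

Varying the frame fields yields a family of isomorphic items with a known answer key, which is the practical appeal of frame-driven item generation.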

  11. Foundations of semantic web technologies

    CERN Document Server

    Hitzler, Pascal; Rudolph, Sebastian

    2009-01-01

    The Quest for Semantics; Building Models; Calculating with Knowledge; Exchanging Information; Semantic Web Technologies. RESOURCE DESCRIPTION FRAMEWORK (RDF): Simple Ontologies in RDF and RDF Schema; Introduction to RDF; Syntax for RDF; Advanced Features; Simple Ontologies in RDF Schema; Encoding of Special Data Structures; An Example. RDF Formal Semantics: Why Semantics?; Model-Theoretic Semantics for RDF(S); Syntactic Reasoning with Deduction Rules; The Semantic Limits of RDF(S). WEB ONTOLOGY LANGUAGE (OWL): Ontologies in OWL; OWL Syntax and Intuitive Semantics; OWL Species; The Forthcoming OWL 2 Standard; OWL Formal Sem...
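The outline above mentions syntactic reasoning with deduction rules over RDF(S). As a rough illustration of that style of reasoning, here is a forward-chaining sketch of one RDFS rule, (x type C) and (C subClassOf D) entails (x type D), using toy triples rather than anything from the book:

```python
def rdfs_closure(triples):
    """Forward-chain the rdfs:subClassOf typing rule to a fixed point."""
    triples = set(triples)
    changed = True
    while changed:
        changed = False
        inferred = {(x, "type", d)
                    for (x, p1, c) in triples if p1 == "type"
                    for (c2, p2, d) in triples if p2 == "subClassOf" and c2 == c}
        new = inferred - triples
        if new:
            triples |= new
            changed = True
    return triples

facts = {("ex:Rex", "type", "ex:Dog"), ("ex:Dog", "subClassOf", "ex:Animal")}
closure = rdfs_closure(facts)
print(("ex:Rex", "type", "ex:Animal") in closure)  # True
```

The fixed-point loop also handles chains of subClassOf links, since each pass can feed newly derived type triples into the next.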

  12. Spoken Sentence Production in College Students with Dyslexia: Working Memory and Vocabulary Effects

    Science.gov (United States)

    Wiseheart, Rebecca; Altmann, Lori J. P.

    2018-01-01

    Background: Individuals with dyslexia demonstrate syntactic difficulties on tasks of language comprehension, yet little is known about spoken language production in this population. Aims: To investigate whether spoken sentence production in college students with dyslexia is less proficient than in typical readers, and to determine whether group…

  13. Code-switched English pronunciation modeling for Swahili spoken term detection

    CSIR Research Space (South Africa)

    Kleynhans, N

    2016-05-01

    Full Text Available Computer Science 81 (2016) 128-135. 5th Workshop on Spoken Language Technology for Under-resourced Languages, SLTU 2016, 9-12 May 2016, Yogyakarta, Indonesia. Code-switched English Pronunciation Modeling for Swahili Spoken Term Detection. Neil...

  14. Phonological Analysis of University Students’ Spoken Discourse

    Directory of Open Access Journals (Sweden)

    Clara Herlina

    2011-04-01

    Full Text Available Discourse analysis is the study of language in actual use. In this article, the writer investigates the phonological features, both segmental and supra-segmental, in the spoken discourse of Indonesian university students. The data were taken from recordings of 15 conversations by 30 students of Bina Nusantara University who are taking the English Entrant subject (TOEFL-iBT). The writer concludes that the students are still influenced by their first language in their spoken discourse, which results in English with an Indonesian accent. Even though this does not cause misunderstanding at the moment, it may become problematic if they have to communicate in the real world.

  15. When novel sentences spoken or heard for the first time in the history of the universe are not enough: toward a dual-process model of language.

    Science.gov (United States)

    Van Lancker Sidtis, Diana

    2004-01-01

    Although interest in the language sciences was previously focused on newly created sentences, more recently much attention has turned to the importance of formulaic expressions in normal and disordered communication. Also referred to as formulaic expressions and made up of speech formulas, idioms, expletives, serial and memorized speech, slang, sayings, clichés, and conventional expressions, non-propositional language forms a large proportion of every speaker's competence, and may be differentially disturbed in neurological disorders. This review aims to examine non-propositional speech with respect to linguistic descriptions, psycholinguistic experiments, sociolinguistic studies, child language development, clinical language disorders, and neurological studies. Evidence from numerous sources reveals differentiated and specialized roles for novel and formulaic verbal functions, and suggests that generation of novel sentences and management of prefabricated expressions represent two legitimate and separable processes in language behaviour. A preliminary model of language behaviour that encompasses unitary and compositional properties and their integration in everyday language use is proposed. Integration and synchronizing of two disparate processes in language behaviour, formulaic and novel, characterizes normal communicative function and contributes to creativity in language. This dichotomy is supported by studies arising from other disciplines in neurology and psychology. Further studies are necessary to determine in what ways the various categories of formulaic expressions are related, and how these categories are processed by the brain. Better understanding of how non-propositional categories of speech are stored and processed in the brain can lead to better informed treatment strategies in language disorders.

  16. UML 2 Semantics and Applications

    CERN Document Server

    Lano, Kevin

    2009-01-01

    A coherent and integrated account of the leading UML 2 semantics work and the practical applications of UML semantics development With contributions from leading experts in the field, the book begins with an introduction to UML and goes on to offer in-depth and up-to-date coverage of: The role of semantics Considerations and rationale for a UML system model Definition of the UML system model UML descriptive semantics Axiomatic semantics of UML class diagrams The object constraint language Axiomatic semantics of state machines A coalgebraic semantic framework for reasoning about interaction des

  17. Enforced generative patterns for the specification of the syntax and semantics of visual languages

    OpenAIRE

    Bottoni, Paolo; Guerra, Esther; Lara, Juan de

    2008-01-01

    This is the author’s version of a work that was accepted for publication in Journal of Visual Languages and Computing. Changes resulting from the publishing process, such as peer review, editing, corrections, structural formatting, and other quality control mechanisms may not be reflected in this document. Changes may have been made to this work since it was submitted for publication. A definitive version was subsequently published in Journal of Visual Languages and Computing, 19, 4 (2008), DOI:...

  18. The role of domain-general frontal systems in language comprehension: evidence from dual-task interference and semantic ambiguity.

    Science.gov (United States)

    Rodd, Jennifer M; Johnsrude, Ingrid S; Davis, Matthew H

    2010-12-01

    Neuroimaging studies have shown that the left inferior frontal gyrus (LIFG) plays a critical role in semantic and syntactic aspects of speech comprehension. It appears to be recruited when listeners are required to select the appropriate meaning or syntactic role for words within a sentence. However, this region is also recruited during tasks not involving sentence materials, suggesting that the systems involved in processing ambiguous words within sentences are also recruited for more domain-general tasks that involve the selection of task-relevant information. We use a novel dual-task methodology to assess whether the cognitive system(s) that are engaged in selecting word meanings are also involved in non-sentential tasks. In Experiment 1, listeners were slower to decide whether a visually presented letter is in upper or lower case when the sentence that they are simultaneously listening to contains words with multiple meanings (homophones), compared to closely matched sentences without homophones. Experiment 2 indicates that this interference effect is not tied to the occurrence of the homophone itself, but rather occurs when listeners must reinterpret a sentence that was initially misparsed. These results suggest some overlap between the cognitive system involved in semantic disambiguation and the domain-general process of response selection required for the case-judgement task. This cognitive overlap may reflect neural overlap in the networks supporting these processes, and is consistent with the proposal that domain-general selection processes in inferior frontal regions are critical for language comprehension. Copyright © 2010 Elsevier Inc. All rights reserved.

  19. Cross-Cultural Differences in Beliefs and Practices that Affect the Language Spoken to Children: Mothers with Indian and Western Heritage

    Science.gov (United States)

    Simmons, Noreen; Johnston, Judith

    2007-01-01

    Background: Speech-language pathologists often advise families about interaction patterns that will facilitate language learning. This advice is typically based on research with North American families of European heritage and may not be culturally suited for non-Western families. Aims: The goal of the project was to identify differences in the…

  20. The interplay between mood and language comprehension: Evidence from P600 to semantic reversal anomalies

    NARCIS (Netherlands)

    Vissers, C.T.W.M.; Chwilla, U.G.; Egger, J.I.M.; Chwilla, D.J.

    2013-01-01

    Little is known about the relationship between language and emotion. Vissers et al. (2010) investigated the effects of mood on the processing of syntactic violations, as indexed by P600. An interaction was observed between mood and syntactic correctness for which three explanations were offered: one

  1. Semantic and stylistic peculiarities of Slavicisms in the language of modern newspapers

    Directory of Open Access Journals (Sweden)

    Жанар Кабдыляшымовна Киынова

    2012-12-01

    Full Text Available The article considers the functioning of Slavicisms in the language of modern Kazakhstani and Russian newspapers. On the basis of examples excerpted from modern newspapers, it gives an informative picture of the tendencies and regularities of modern word usage in the mass media.

  2. Using a foundational ontology to investigate the semantics behind the concepts of the i* language

    NARCIS (Netherlands)

    Guizzardi-Silva Souza, R.; Franch, Xavier; Guizzardi, G.; Wieringa, Roelf J.; Castro, J.; Horkhoff, J.; Maiden, N.; Yu, E.

    In the past few years, the community that develops i* has become aware of the problem of having so many variants, since this makes it difficult for newcomers to learn how to use the language and even for experts to efficiently exchange knowledge and disseminate their proposals. Moreover, this problem

  3. Semantic Business Process Modeling

    OpenAIRE

    Markovic, Ivan

    2010-01-01

    This book presents a process-oriented business modeling framework based on semantic technologies. The framework consists of modeling languages, methods, and tools that allow for semantic modeling of business motivation, business policies and rules, and business processes. Quality of the proposed modeling framework is evaluated based on the modeling content of SAP Solution Composer and several real-world business scenarios.

  4. Semantic Web Primer

    NARCIS (Netherlands)

    Antoniou, Grigoris; Harmelen, Frank van

    2004-01-01

    The development of the Semantic Web, with machine-readable content, has the potential to revolutionize the World Wide Web and its use. A Semantic Web Primer provides an introduction and guide to this still emerging field, describing its key ideas, languages, and technologies. Suitable for use as a

  5. Pragmatics for formal semantics

    DEFF Research Database (Denmark)

    Danvy, Olivier

    2011-01-01

    This tech talk describes how to write and how to inter-derive formal semantics for sequential programming languages. The progress reported here is (1) concrete guidelines to write each formal semantics to alleviate their proof obligations, and (2) simple calculational tools to obtain a formal...

  6. Sign language: an international handbook

    NARCIS (Netherlands)

    Pfau, R.; Steinbach, M.; Woll, B.

    2012-01-01

    Sign language linguists show here that all the questions relevant to the linguistic investigation of spoken languages can be asked about sign languages. Conversely, questions that sign language linguists consider - even if spoken language researchers have not asked them yet - should also be asked of

  7. Neural networks involved in learning lexical-semantic and syntactic information in a second language.

    Science.gov (United States)

    Mueller, Jutta L; Rueschemeyer, Shirley-Ann; Ono, Kentaro; Sugiura, Motoaki; Sadato, Norihiro; Nakamura, Akinori

    2014-01-01

    The present study used functional magnetic resonance imaging (fMRI) to investigate the neural correlates of language acquisition in a realistic learning environment. Japanese native speakers were trained in a miniature version of German prior to fMRI scanning. During scanning they listened to (1) familiar sentences, (2) sentences including a novel sentence structure, and (3) sentences containing a novel word while visual context provided referential information. Learning-related decreases of brain activation over time were found in a mainly left-hemispheric network comprising classical frontal and temporal language areas as well as parietal and subcortical regions and were largely overlapping for novel words and the novel sentence structure in initial stages of learning. Differences occurred at later stages of learning during which content-specific activation patterns in prefrontal, parietal and temporal cortices emerged. The results are taken as evidence for a domain-general network supporting the initial stages of language learning which dynamically adapts as learners become proficient.

  8. Semantic Search of Web Services

    Science.gov (United States)

    Hao, Ke

    2013-01-01

    This dissertation addresses semantic search of Web services using natural language processing. We first survey various existing approaches, focusing on the fact that the expensive costs of current semantic annotation frameworks result in limited use of semantic search for large scale applications. We then propose a vector space model based service…

  9. Working memory predicts semantic comprehension in dichotic listening in older adults.

    Science.gov (United States)

    James, Philip J; Krishnan, Saloni; Aydelott, Jennifer

    2014-10-01

    Older adults have difficulty understanding spoken language in the presence of competing voices. Everyday social situations involving multiple simultaneous talkers may become increasingly challenging in later life due to changes in the ability to focus attention. This study examined whether individual differences in cognitive function predict older adults' ability to access sentence-level meanings in competing speech using a dichotic priming paradigm. Older listeners showed faster responses to words that matched the meaning of spoken sentences presented to the left or right ear, relative to a neutral baseline. However, older adults were more vulnerable than younger adults to interference from competing speech when the competing signal was presented to the right ear. This pattern of performance was strongly correlated with a non-auditory working memory measure, suggesting that cognitive factors play a key role in semantic comprehension in competing speech in healthy aging. Copyright © 2014 Elsevier B.V. All rights reserved.

  10. Cross-cultural differences in beliefs and practices that affect the language spoken to children: mothers with Indian and Western heritage.

    Science.gov (United States)

    Simmons, Noreen; Johnston, Judith

    2007-01-01

    Speech-language pathologists often advise families about interaction patterns that will facilitate language learning. This advice is typically based on research with North American families of European heritage and may not be culturally suited for non-Western families. The goal of the project was to identify differences in the beliefs and practices of Indian and Euro-Canadian mothers that would affect patterns of talk to children. A total of 47 Indian mothers and 51 Euro-Canadian mothers of preschool age children completed a written survey concerning child-rearing practices and beliefs, especially those about talk to children. Discriminant analyses indicated clear cross-cultural differences and produced functions that could predict group membership with a 96% accuracy rate. Items contributing most to these functions concerned the importance of family, perceptions of language learning, children's use of language in family and society, and interactions surrounding text. Speech-language pathologists who wish to adapt their services for families of Indian heritage should remember the centrality of the family, the likelihood that there will be less emphasis on early independence and achievement, and the preference for direct instruction.

  11. Informatics in radiology: RADTF: a semantic search-enabled, natural language processor-generated radiology teaching file.

    Science.gov (United States)

    Do, Bao H; Wu, Andrew; Biswal, Sandip; Kamaya, Aya; Rubin, Daniel L

    2010-11-01

    Storing and retrieving radiology cases is an important activity for education and clinical research, but this process can be time-consuming. In the process of structuring reports and images into organized teaching files, incidental pathologic conditions not pertinent to the primary teaching point can be omitted, as when a user saves images of an aortic dissection case but disregards the incidental osteoid osteoma. An alternate strategy for identifying teaching cases is text search of reports in radiology information systems (RIS), but retrieved reports are unstructured, teaching-related content is not highlighted, and patient identifying information is not removed. Furthermore, searching unstructured reports requires sophisticated retrieval methods to achieve useful results. An open-source, RadLex(®)-compatible teaching file solution called RADTF, which uses natural language processing (NLP) methods to process radiology reports, was developed to create a searchable teaching resource from the RIS and the picture archiving and communication system (PACS). The NLP system extracts and de-identifies teaching-relevant statements from full reports to generate a stand-alone database, thus converting existing RIS archives into an on-demand source of teaching material. Using RADTF, the authors generated a semantic search-enabled, Web-based radiology archive containing over 700,000 cases with millions of images. RADTF combines a compact representation of the teaching-relevant content in radiology reports and a versatile search engine with the scale of the entire RIS-PACS collection of case material. ©RSNA, 2010
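The record above describes an NLP pipeline that extracts and de-identifies teaching-relevant statements from radiology reports. A minimal sketch of the de-identification step, assuming simple regex scrubbing of dates and record numbers (the patterns are illustrative only, not those used by RADTF):

```python
import re

# Illustrative scrubbing patterns; a production system would need far more.
PATTERNS = [
    (re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"), "[DATE]"),
    (re.compile(r"\bMRN[:#]?\s*\d+\b", re.IGNORECASE), "[ID]"),
]

def deidentify(report):
    """Replace date and record-number mentions with placeholder tokens."""
    for pattern, token in PATTERNS:
        report = pattern.sub(token, report)
    return report

text = "Exam 3/14/2009, MRN: 123456. Findings: aortic dissection."
print(deidentify(text))  # Exam [DATE], [ID]. Findings: aortic dissection.
```

Keeping the findings text intact while replacing identifiers with typed placeholders is what lets the scrubbed reports remain searchable as teaching material.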

  12. Age-related changes in ERP components of semantic and syntactic processing in a verb final language

    Directory of Open Access Journals (Sweden)

    Jee Eun Sung

    2014-04-01

    Both syntactic and semantic violations elicited negativity effects in the 300-500 ms time window, and these effects were slightly attenuated in the elderly group. The results suggested that Korean speakers may process the syntactic component of a case marker under semantic frame integration, eliciting the negativity effects associated with semantic violations. Elderly adults showed attenuated effects compared with the young group, indicating that age-related changes emerge during real-time sentence processing.

  13. Flow Logics and Operational Semantics

    DEFF Research Database (Denmark)

    Nielson, Flemming; Nielson, Hanne Riis

    1998-01-01

    Flow logic is a “fast prototyping” approach to program analysis that shows great promise of being able to deal with a wide variety of languages and calculi for computation. However, seemingly innocent choices in the flow logic as well as in the operational semantics may inhibit proving the analysis correct. Our main conclusion is that environment-based semantics is more flexible than either substitution-based semantics or semantics making use of structural congruences (like alpha-renaming).

  14. Changes of right-hemispheric activation after constraint-induced, intensive language action therapy in chronic aphasia: fMRI evidence from auditory semantic processing1

    Science.gov (United States)

    Mohr, Bettina; Difrancesco, Stephanie; Harrington, Karen; Evans, Samuel; Pulvermüller, Friedemann

    2014-01-01

    The role of the two hemispheres in the neurorehabilitation of language is still under dispute. This study explored the changes in language-evoked brain activation over a 2-week treatment interval with intensive constraint induced aphasia therapy (CIAT), which is also called intensive language action therapy (ILAT). Functional magnetic resonance imaging (fMRI) was used to assess brain activation in perilesional left hemispheric and in homotopic right hemispheric areas during passive listening to high and low-ambiguity sentences and non-speech control stimuli in chronic non-fluent aphasia patients. All patients demonstrated significant clinical improvements of language functions after therapy. In an event-related fMRI experiment, a significant increase of BOLD signal was manifest in right inferior frontal and temporal areas. This activation increase was stronger for highly ambiguous sentences than for unambiguous ones. These results suggest that the known language improvements brought about by intensive constraint-induced language action therapy at least in part relies on circuits within the right-hemispheric homologs of left-perisylvian language areas, which are most strongly activated in the processing of semantically complex language. PMID:25452721

  15. The semantics of verbs in the dissolution and development of language.

    Science.gov (United States)

    Lahey, M; Feier, C D

    1982-03-01

    Evidence of the dissolution (DL) of verbs was examined in the written logs kept daily for 4 1/2 years by a woman (Mrs. W) who suffered from cerebral atrophy of unknown origin. Results were compared with similar analyses of written samples obtained from elementary school children (CWL), from normal adults (AWL) and from the literature on early oral language development (COL). The major finding of this study was that the sequence of the dissolution of verbs, in terms of the meanings expressed, mirrored the sequence of early acquisition. In the DL data reported here, Mrs. W continued to write about dynamic events after she ceased writing about stative events; in COL, children talk about dynamic events before stative events. Based on the AWL and CWL data, frequency of use is rejected as an explanation for the dominance and stability of dynamic relations in DL. Rather, it is suggested that the expression of dynamic relations may be less complex than the expression of stative relations due to possible differences in imagery and implication, but particularly due to the linguistic contexts in which each can be expressed.

  16. The duplication of the number of hands in Sign Language, and its semantic effects

    Directory of Open Access Journals (Sweden)

    André Nogueira Xavier

    2015-07-01

    Full Text Available According to Xavier (2006), there are signs in Brazilian Sign Language (Libras) that are typically produced with one hand, while others are made with both hands. However, recent studies document the production, with both hands, of signs that usually use only one hand, and vice versa (XAVIER, 2011; XAVIER, 2013; BARBOSA, 2013). This study discusses 27 Libras signs that are typically made with one hand and that, when articulated with both hands, change their meanings. The data discussed here, although originally collected from observations of spontaneous signing by different Libras users, were elicited from two deaf participants in distinct sessions. After being shown the two forms of the selected signs (made with one and with two hands), the participants were asked to create examples of use for each sign. The results showed that the duplication of hands, at least for the same sign in some cases, may occur due to different factors (such as plurality, aspect and intensity).

  17. Word frequencies in written and spoken English based on the British National Corpus

    CERN Document Server

    Leech, Geoffrey; Wilson, Andrew (All Of Lancaster University)

    2014-01-01

    Word Frequencies in Written and Spoken English is a landmark volume in the development of vocabulary frequency studies. Whereas previous books have in general given frequency information about the written language only, this book provides information on both speech and writing. It not only gives information about the language as a whole, but also about the differences between spoken and written English, and between different spoken and written varieties of the language. The frequencies are derived from a wide-ranging and up-to-date corpus of English: the British National Corpus.
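The book described above tabulates word frequencies separately for spoken and written English. A toy sketch of that kind of computation, assuming invented corpus snippets and a standard per-million-words normalization (not the BNC data or the authors' methodology):

```python
from collections import Counter

def per_million(tokens):
    """Frequency of each word, normalized to occurrences per million tokens."""
    counts = Counter(tokens)
    total = len(tokens)
    return {word: count * 1_000_000 / total for word, count in counts.items()}

# Invented samples standing in for spoken vs written corpus sections.
spoken = "well i mean you know i really think so".split()
written = "the committee considered the report in detail".split()

print(round(per_million(spoken)["i"]))    # 222222
print(round(per_million(written)["the"])) # 285714
```

Normalizing per million words is what makes frequencies comparable across corpus sections of different sizes, the core of any spoken-vs-written comparison.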

  18. Spoken word recognition without a TRACE

    Science.gov (United States)

    Hannagan, Thomas; Magnuson, James S.; Grainger, Jonathan

    2013-01-01

    How do we map the rapid input of spoken language onto phonological and lexical representations over time? Attempts at psychologically-tractable computational models of spoken word recognition tend either to ignore time or to transform the temporal input into a spatial representation. TRACE, a connectionist model with broad and deep coverage of speech perception and spoken word recognition phenomena, takes the latter approach, using exclusively time-specific units at every level of representation. TRACE reduplicates featural, phonemic, and lexical inputs at every time step in a large memory trace, with rich interconnections (excitatory forward and backward connections between levels and inhibitory links within levels). As the length of the memory trace is increased, or as the phoneme and lexical inventory of the model is increased to a realistic size, this reduplication of time- (temporal position) specific units leads to a dramatic proliferation of units and connections, begging the question of whether a more efficient approach is possible. Our starting point is the observation that models of visual object recognition—including visual word recognition—have grappled with the problem of spatial invariance, and arrived at solutions other than a fully-reduplicative strategy like that of TRACE. This inspires a new model of spoken word recognition that combines time-specific phoneme representations similar to those in TRACE with higher-level representations based on string kernels: temporally independent (time invariant) diphone and lexical units. This reduces the number of necessary units and connections by several orders of magnitude relative to TRACE. Critically, we compare the new model to TRACE on a set of key phenomena, demonstrating that the new model inherits much of the behavior of TRACE and that the drastic computational savings do not come at the cost of explanatory power. PMID:24058349
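The abstract above describes replacing TRACE's time-specific units with time-invariant string-kernel representations built from diphone units. As a rough sketch of the underlying coding idea, a word can be represented by its set of ordered phoneme pairs (open diphones, including non-adjacent ones), and two words compared by set overlap. The Jaccard score here is my simplification for illustration, not the authors' model:

```python
from itertools import combinations

def open_diphones(phonemes):
    """All ordered pairs (p_i, p_j) with i < j, including non-adjacent pairs."""
    return {(a, b) for a, b in combinations(phonemes, 2)}

def similarity(word1, word2):
    """Jaccard overlap of two open-diphone sets: a simple string-kernel-style score."""
    d1, d2 = open_diphones(word1), open_diphones(word2)
    return len(d1 & d2) / len(d1 | d2)

# /kat/ ("cat") vs /kab/ ("cab"): they share only the diphone (k, a).
print(similarity(("k", "a", "t"), ("k", "a", "b")))  # 0.2
```

Because the representation records only relative order, not absolute temporal position, the same units serve a word wherever it occurs in the input, which is the source of the computational savings the abstract describes.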

  19. Evaluating the spoken English proficiency of graduates of foreign medical schools.

    Science.gov (United States)

    Boulet, J R; van Zanten, M; McKinley, D W; Gary, N E

    2001-08-01

    The purpose of this study was to gather additional evidence for the validity and reliability of spoken English proficiency ratings provided by trained standardized patients (SPs) in high-stakes clinical skills examination. Over 2500 candidates who took the Educational Commission for Foreign Medical Graduates' (ECFMG) Clinical Skills Assessment (CSA) were studied. The CSA consists of 10 or 11 timed clinical encounters. Standardized patients evaluate spoken English proficiency and interpersonal skills in every encounter. Generalizability theory was used to estimate the consistency of spoken English ratings. Validity coefficients were calculated by correlating summary English ratings with CSA scores and other external criterion measures. Mean spoken English ratings were also compared by various candidate background variables. The reliability of the spoken English ratings, based on 10 independent evaluations, was high. The magnitudes of the associated variance components indicated that the evaluation of a candidate's spoken English proficiency is unlikely to be affected by the choice of cases or SPs used in a given assessment. Proficiency in spoken English was related to native language (English versus other) and scores from the Test of English as a Foreign Language (TOEFL). The pattern of the relationships, both within assessment components and with external criterion measures, suggests that valid measures of spoken English proficiency are obtained. This result, combined with the high reproducibility of the ratings over encounters and SPs, supports the use of trained SPs to measure spoken English skills in a simulated medical environment.

  20. Universal brain signature of proficient reading: Evidence from four contrasting languages.

    Science.gov (United States)

    Rueckl, Jay G; Paz-Alonso, Pedro M; Molfese, Peter J; Kuo, Wen-Jui; Bick, Atira; Frost, Stephen J; Hancock, Roeland; Wu, Denise H; Mencl, William Einar; Duñabeitia, Jon Andoni; Lee, Jun-Ren; Oliver, Myriam; Zevin, Jason D; Hoeft, Fumiko; Carreiras, Manuel; Tzeng, Ovid J L; Pugh, Kenneth R; Frost, Ram

    2015-12-15

    We propose and test a theoretical perspective in which a universal hallmark of successful literacy acquisition is the convergence of the speech and orthographic processing systems onto a common network of neural structures, regardless of how spoken words are represented orthographically in a writing system. During functional MRI, skilled adult readers of four distinct and highly contrasting languages (Spanish, English, Hebrew, and Chinese) performed an identical semantic categorization task on spoken and written words. Results from three complementary analytic approaches demonstrate limited language variation, with speech-print convergence emerging as a common brain signature of reading proficiency across the wide spectrum of selected languages, whether their writing system is alphabetic or logographic, whether it is opaque or transparent, and regardless of the phonological and morphological structure it represents.

  1. The effectiveness of semantic aspect of language on reading comprehension in a 4-year-old child with autistic spectrum disorder and hyperlexia

    Directory of Open Access Journals (Sweden)

    Atusa Rabiee

    2012-12-01

    Background: Hyperlexia is an exceptional ability demonstrated by a very specific group of individuals with developmental disorders. The term describes children with a high ability in word recognition but low reading comprehension skills, despite problems in language, cognitive and social skills. The purpose of this study was to assess the effectiveness of improving the semantic aspect of language (an increase in understanding and expression of vocabulary) on reading comprehension in an autistic child with hyperlexia. Case: The child studied in this research was an autistic child with hyperlexia. At the beginning of the study he was 3 years and 11 months old. He could read, but his reading comprehension was low. Over a period of 12 therapy sessions, the understanding and expression of 160 words were taught to the child. During this period, the written form of the words was withheld. After these sessions, reading comprehension was re-assessed for the words the child could understand and express. Conclusion: Improving the semantic aspect of language (understanding and expression of vocabulary) increased the reading comprehension of written words.

  2. Syntactic priming in American Sign Language.

    Science.gov (United States)

    Hall, Matthew L; Ferreira, Victor S; Mayberry, Rachel I

    2015-01-01

    Psycholinguistic studies of sign language processing provide valuable opportunities to assess whether language phenomena, which are primarily studied in spoken language, are fundamentally shaped by peripheral biology. For example, we know that when given a choice between two syntactically permissible ways to express the same proposition, speakers tend to choose structures that were recently used, a phenomenon known as syntactic priming. Here, we report two experiments testing syntactic priming of a noun phrase construction in American Sign Language (ASL). Experiment 1 shows that second language (L2) signers with normal hearing exhibit syntactic priming in ASL and that priming is stronger when the head noun is repeated between prime and target (the lexical boost effect). Experiment 2 shows that syntactic priming is equally strong among deaf native L1 signers, deaf late L1 learners, and hearing L2 signers. Experiment 2 also tested for, but did not find evidence of, phonological or semantic boosts to syntactic priming in ASL. These results show that despite the profound differences between spoken and signed languages in terms of how they are produced and perceived, the psychological representation of sentence structure (as assessed by syntactic priming) operates similarly in sign and speech.

  3. Syntactic priming in American Sign Language.

    Directory of Open Access Journals (Sweden)

    Matthew L Hall

    Psycholinguistic studies of sign language processing provide valuable opportunities to assess whether language phenomena, which are primarily studied in spoken language, are fundamentally shaped by peripheral biology. For example, we know that when given a choice between two syntactically permissible ways to express the same proposition, speakers tend to choose structures that were recently used, a phenomenon known as syntactic priming. Here, we report two experiments testing syntactic priming of a noun phrase construction in American Sign Language (ASL). Experiment 1 shows that second language (L2) signers with normal hearing exhibit syntactic priming in ASL and that priming is stronger when the head noun is repeated between prime and target (the lexical boost effect). Experiment 2 shows that syntactic priming is equally strong among deaf native L1 signers, deaf late L1 learners, and hearing L2 signers. Experiment 2 also tested for, but did not find evidence of, phonological or semantic boosts to syntactic priming in ASL. These results show that despite the profound differences between spoken and signed languages in terms of how they are produced and perceived, the psychological representation of sentence structure (as assessed by syntactic priming) operates similarly in sign and speech.

  4. Jigsaw Semantics

    Directory of Open Access Journals (Sweden)

    Paul J. E. Dekker

    2010-12-01

    In the last decade the enterprise of formal semantics has been under attack from several philosophical and linguistic perspectives, and it has certainly suffered from its own scattered state, which hosts quite a variety of paradigms that may seem incompatible. It will not do to try to answer the arguments of the critics, because the arguments are often well taken. The negative conclusions, however, I believe are not. The only adequate reply seems to be a constructive one, which puts several pieces of formal semantics, in particular dynamic semantics, together again. In this paper I try to sketch an overview of tasks, techniques, and results, which serves to at least suggest that it is possible to develop a coherent overall picture of undeniably important and structural phenomena in the interpretation of natural language. The idea is that the concept of meanings as truth conditions after all provides an excellent start for an integrated study of the meaning and use of natural language, and that an extended notion of goal-directed pragmatics naturally complements this picture. None of the results reported here are really new, but we think it is important to re-collect them.

  5. Neural dynamics of morphological processing in spoken word comprehension: Laterality and automaticity

    Directory of Open Access Journals (Sweden)

    Caroline M. Whiting

    2013-11-01

    Rapid and automatic processing of grammatical complexity is argued to take place during speech comprehension, engaging a left-lateralised fronto-temporal language network. Here we address how neural activity in these regions is modulated by the grammatical properties of spoken words. We used combined magneto- and electroencephalography (MEG/EEG) to delineate the spatiotemporal patterns of activity that support the recognition of morphologically complex words in English with inflectional (-s) and derivational (-er) affixes (e.g. bakes, baker). The mismatch negativity (MMN), an index of linguistic memory traces elicited in a passive listening paradigm, was used to examine the neural dynamics elicited by morphologically complex words. Results revealed an initial peak 130-180 ms after the deviation point with a major source in left superior temporal cortex. The localisation of this early activation showed a sensitivity to two grammatical properties of the stimuli: (1) the presence of morphological complexity, with affixed words showing increased left-laterality compared to non-affixed words; and (2) the grammatical category, with affixed verbs showing greater left-lateralisation in inferior frontal gyrus compared to affixed nouns (bakes vs. beaks). This automatic brain response was additionally sensitive to semantic coherence (the meaning of the stem vs. the meaning of the whole form) in fronto-temporal regions. These results demonstrate that the spatiotemporal pattern of neural activity in spoken word processing is modulated by the presence of morphological structure, predominantly engaging the left hemisphere’s fronto-temporal language network, and does not require focused attention on the linguistic input.

  6. Spoken grammar awareness raising: Does it affect the listening ability of Iranian EFL learners?

    Directory of Open Access Journals (Sweden)

    Mojgan Rashtchi

    2011-12-01

    Advances in spoken corpora analysis have brought about new insights into language pedagogy and have led to an awareness of the characteristics of spoken language. Current findings have shown that the grammar of spoken language differs from that of written language. However, most listening and speaking materials are constructed on the basis of written grammar and lack core spoken language features. The aim of the present study was to explore whether awareness of spoken grammar features could affect learners’ comprehension of real-life conversations. To this end, 45 university students in two intact classes participated in a listening course employing corpus-based materials. The spoken grammar features were taught to the experimental group overtly through awareness-raising tasks, whereas the control group, though exposed to the same materials, was not provided with such tasks for learning the features. The results of independent samples t tests revealed that the learners in the experimental group comprehended everyday conversations much better than those in the control group. Additionally, the highly positive views of spoken grammar held by the learners, elicited by means of a retrospective questionnaire, were generally comparable to those reported in the literature.

  7. A Comparison between Written and Spoken Narratives in Aphasia

    Science.gov (United States)

    Behrns, Ingrid; Wengelin, Asa; Broberg, Malin; Hartelius, Lena

    2009-01-01

    The aim of the present study was to explore how a personal narrative told by a group of eight persons with aphasia differed between written and spoken language, and to compare this with findings from 10 participants in a reference group. The stories were analysed through holistic assessments made by 60 participants without experience of aphasia…

  8. Lexicon Optimization for Dutch Speech Recognition in Spoken Document Retrieval

    NARCIS (Netherlands)

    Ordelman, Roeland J.F.; van Hessen, Adrianus J.; de Jong, Franciska M.G.

    In this paper, ongoing work concerning the language modelling and lexicon optimization of a Dutch speech recognition system for Spoken Document Retrieval is described: the collection and normalization of a training data set and the optimization of our recognition lexicon. Effects on lexical coverage

  9. Lexicon optimization for Dutch speech recognition in spoken document retrieval

    NARCIS (Netherlands)

    Ordelman, Roeland J.F.; van Hessen, Adrianus J.; de Jong, Franciska M.G.; Dalsgaard, P.; Lindberg, B.; Benner, H.

    2001-01-01

    In this paper, ongoing work concerning the language modelling and lexicon optimization of a Dutch speech recognition system for Spoken Document Retrieval is described: the collection and normalization of a training data set and the optimization of our recognition lexicon. Effects on lexical coverage

  10. Automated Scoring of L2 Spoken English with Random Forests

    Science.gov (United States)

    Kobayashi, Yuichiro; Abe, Mariko

    2016-01-01

    The purpose of the present study is to assess second language (L2) spoken English using automated scoring techniques. Automated scoring aims to classify a large set of learners' oral performance data into a small number of discrete oral proficiency levels. In automated scoring, objectively measurable features such as the frequencies of lexical and…
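
Automated scoring with random forests, as described above, can be sketched in miniature: an ensemble of decision trees (here reduced to one-feature stumps) is trained on bootstrap resamples with random feature choices, and a proficiency level is assigned by majority vote. The feature vectors (e.g. frequencies of lexical items) and level labels below are hypothetical, and a real scoring system would use a full random-forest library rather than this toy.

```python
import random

def majority(labels):
    """Most common label in a non-empty list."""
    return max(set(labels), key=labels.count)

def train_forest(X, y, n_trees=25, seed=0):
    """Bagged one-feature decision stumps: each tree sees a bootstrap
    resample and a randomly chosen feature, splitting at the sample mean."""
    rng = random.Random(seed)
    n, d = len(X), len(X[0])
    forest = []
    for _ in range(n_trees):
        sample = [rng.randrange(n) for _ in range(n)]   # bootstrap resample
        f = rng.randrange(d)                            # random feature choice
        thr = sum(X[i][f] for i in sample) / n          # split threshold
        left = [y[i] for i in sample if X[i][f] <= thr] or [y[sample[0]]]
        right = [y[i] for i in sample if X[i][f] > thr] or [y[sample[0]]]
        forest.append((f, thr, majority(left), majority(right)))
    return forest

def predict(forest, x):
    """Each stump votes for a proficiency level; the majority wins."""
    votes = [lo if x[f] <= thr else hi for f, thr, lo, hi in forest]
    return majority(votes)
```

With well-separated feature values the ensemble recovers the level boundary; production scoring engines additionally use deep trees, per-split feature bagging, and score calibration.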

  11. Error detection in spoken human-machine interaction

    NARCIS (Netherlands)

    Krahmer, E.J.; Swerts, M.G.J.; Theune, M.; Weegels, M.F.

    2001-01-01

    Given the state of the art of current language and speech technology, errors are unavoidable in present-day spoken dialogue systems. Therefore, one of the main concerns in dialogue design is how to decide whether or not the system has understood the user correctly. In human-human communication,

  12. Error detection in spoken human-machine interaction

    NARCIS (Netherlands)

    Krahmer, E.; Swerts, M.; Theune, Mariet; Weegels, M.

    Given the state of the art of current language and speech technology, errors are unavoidable in present-day spoken dialogue systems. Therefore, one of the main concerns in dialogue design is how to decide whether or not the system has understood the user correctly. In human-human communication,

  13. Semantic Advertising

    OpenAIRE

    Zamanzadeh, Ben; Ashish, Naveen; Ramakrishnan, Cartic; Zimmerman, John

    2013-01-01

    We present the concept of Semantic Advertising which we see as the future of online advertising. Semantic Advertising is online advertising powered by semantic technology which essentially enables us to represent and reason with concepts and the meaning of things. This paper aims to 1) Define semantic advertising, 2) Place it in the context of broader and more widely used concepts such as the Semantic Web and Semantic Search, 3) Provide a survey of work in related areas such as context matchi...

  14. From spoken narratives to domain knowledge: mining linguistic data for medical image understanding.

    Science.gov (United States)

    Guo, Xuan; Yu, Qi; Alm, Cecilia Ovesdotter; Calvelli, Cara; Pelz, Jeff B; Shi, Pengcheng; Haake, Anne R

    2014-10-01

    Extracting useful visual clues from medical images allowing accurate diagnoses requires physicians' domain knowledge acquired through years of systematic study and clinical training. This is especially true in the dermatology domain, a medical specialty that requires physicians to have image inspection experience. Automating or at least aiding such efforts requires understanding physicians' reasoning processes and their use of domain knowledge. Mining physicians' references to medical concepts in narratives during image-based diagnosis of a disease is an interesting research topic that can help reveal experts' reasoning processes. It can also be a useful resource to assist with design of information technologies for image use and for image case-based medical education systems. We collected data for analyzing physicians' diagnostic reasoning processes by conducting an experiment that recorded their spoken descriptions during inspection of dermatology images. In this paper we focus on the benefit of physicians' spoken descriptions and provide a general workflow for mining medical domain knowledge based on linguistic data from these narratives. The challenge of a medical image case can influence the accuracy of the diagnosis as well as how physicians pursue the diagnostic process. Accordingly, we define two lexical metrics for physicians' narratives--lexical consensus score and top N relatedness score--and evaluate their usefulness by assessing the diagnostic challenge levels of corresponding medical images. We also report on clustering medical images based on anchor concepts obtained from physicians' medical term usage. These analyses are based on physicians' spoken narratives that have been preprocessed by incorporating the Unified Medical Language System for detecting medical concepts. The image rankings based on lexical consensus score and on top 1 relatedness score are well correlated with those based on challenge levels (Spearman correlation>0.5 and Kendall
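
The rank agreement reported above (Spearman correlation > 0.5 between metric-based and challenge-based image rankings) is the Pearson correlation of rank vectors. A minimal sketch, assuming hypothetical per-image scores; a real analysis would use a statistics library:

```python
def ranks(values):
    """Average ranks (1-based); tied values share the mean of their ranks."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1                      # extend the tie group
        avg = (i + j) / 2 + 1           # mean rank of the tie group
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman's rho: Pearson correlation of the two rank vectors."""
    rx, ry = ranks(x), ranks(y)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)
```

Because only ranks enter the computation, the coefficient is insensitive to the scale of the underlying lexical metrics, which is why it suits comparisons between consensus scores and ordinal challenge levels.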

  15. Benchmarking semantic web technology

    CERN Document Server

    García-Castro, R

    2009-01-01

    This book addresses the problem of benchmarking Semantic Web Technologies; first, from a methodological point of view, proposing a general methodology to follow in benchmarking activities over Semantic Web Technologies and, second, from a practical point of view, presenting two international benchmarking activities that involved benchmarking the interoperability of Semantic Web technologies using RDF(S) as the interchange language in one activity and OWL in the other. The book presents in detail how the different resources needed for these interoperability benchmarking activities were defined:

  16. How semantic deficits in schizotypy help understand language and thought disorders in schizophrenia: a systematic and integrative review

    Directory of Open Access Journals (Sweden)

    Hélio Anderson Tonelli

    2014-04-01

    Introduction: Disorders of thought are psychopathological phenomena commonly present in schizophrenia and seem to result from deficits of semantic processing. Schizotypal personality traits consist of tendencies to think and behave in ways qualitatively similar to schizophrenia, with greater vulnerability to that disorder. This study reviewed the literature on semantic processing deficits in samples of individuals with schizotypal traits and discussed the impact of current knowledge on the comprehension of schizophrenic thought disorders. Studies of the cognitive performance of healthy individuals with schizotypal traits help elucidate the semantic deficits underlying psychotic thought disorders, with the advantage of avoiding confounding factors usually found in samples of individuals with schizophrenia, such as the use of antipsychotics and hospitalizations. Methods: A search for articles published in Portuguese or English within the last 10 years was conducted on the databases MEDLINE, Web of Science, PsycInfo, LILACS and Biological Abstracts, using the keywords semantic processing, schizotypy and schizotypal personality disorder. Results: The search retrieved 44 manuscripts, of which 11 were initially selected. Seven manuscripts were additionally included after reading these papers. Conclusion: The great majority of the included studies showed that schizotypal subjects may exhibit semantic processing deficits. Such studies help clarify the interfaces between the cognitive, neurophysiological and neurochemical mechanisms underlying not only thought disorders, but also the creativity of the healthy human mind.

  17. TEACHING TURKISH AS SPOKEN IN TURKEY TO TURKIC SPEAKERS - TÜRK DİLLİLERE TÜRKİYE TÜRKÇESİ ÖĞRETİMİ NASIL OLMALIDIR?

    Directory of Open Access Journals (Sweden)

    Ali TAŞTEKİN

    2015-12-01

    associations between the dialects would help Turkic speakers learn the Turkish language more easily; when neglected, the result is the opposite. Building the first texts in books written for Teaching Turkish as Spoken in Turkey to Turkic Speakers around words common to the Turkic dialects would facilitate language learning. A systematic study is needed of every aspect of Teaching Turkish as Spoken in Turkey to Turkic Speakers, such as the title, the common alphabet, the books and materials to be produced, and the teaching methods to be used. The characteristics of the Turkish dialect spoken in Turkey and of the other Turkic dialects are still not considered in activities conducted on the subject, and the activity is treated as foreign language teaching rather than as the teaching of another dialect of the Turkish language. Turkish as spoken in Turkey should be the common communicative language of the Turkic world, and for this reason it is crucial to teach it to speakers of the other Turkic dialects. First of all, common and similar semantic, phonological and structural units in the dialects must be identified, and texts and methods that emphasize these similarities must be developed. The measures to be taken in Teaching Turkish as Spoken in Turkey to Turkic Speakers can be listed as follows: “1. In order to obtain efficiency in Turkish language teaching, it should be classified as native language teaching, teaching Turkish to bilinguals, teaching Turkish to foreigners, and teaching Turkish as spoken in Turkey to Turkic speakers. 2. Teaching Turkish as spoken in Turkey to Turkic speakers should be considered separately from general Turkish language teaching, and methods specific to this field should be identified. 3. Instead of methods that have been used for years with no significant results, more authentic methods such as creative writing should be used in teaching Turkish as spoken in Turkey to Turkic speakers. 4. 
In line with the idea

  18. On the accessibility of phonological, orthographic, and semantic aspects of second language vocabulary learning and their relationship with spatial and linguistic intelligences

    Directory of Open Access Journals (Sweden)

    Abbas Ali Zarei

    2015-01-01

    The present study investigated differences in the accessibility of the phonological, semantic, and orthographic aspects of words in L2 vocabulary learning. For this purpose, a sample of 119 Iranian intermediate-level EFL students at a private language institute in Karaj was selected. All of the participants received the same instructional treatment. At the end of the experimental period, three tests were administered based on the previously taught words. A subset of Gardner’s (1983) Multiple Intelligences questionnaire was also used for data collection. A repeated measures one-way ANOVA procedure was used to analyze the obtained data. The results showed significant differences in the accessibility of the phonological, semantic, and orthographic aspects of words in second language vocabulary learning. Moreover, to investigate the relationships between spatial and linguistic intelligences and the afore-mentioned aspects of lexical knowledge, a correlational analysis was used. No significant relationships were found between spatial and linguistic intelligences and the three aspects of lexical knowledge. These findings may have theoretical and pedagogical implications for researchers, teachers, and learners.

  19. Preexisting semantic representation improves working memory performance in the visuospatial domain.

    Science.gov (United States)

    Rudner, Mary; Orfanidou, Eleni; Cardin, Velia; Capek, Cheryl M; Woll, Bencie; Rönnberg, Jerker

    2016-05-01

    Working memory (WM) for spoken language improves when the to-be-remembered items correspond to preexisting representations in long-term memory. We investigated whether this effect generalizes to the visuospatial domain by administering a visual n-back WM task to deaf signers and hearing signers, as well as to hearing nonsigners. Four different kinds of stimuli were presented: British Sign Language (BSL; familiar to the signers), Swedish Sign Language (SSL; unfamiliar), nonsigns, and nonlinguistic manual actions. The hearing signers performed better with BSL than with SSL, demonstrating a facilitatory effect of preexisting semantic representation. The deaf signers also performed better with BSL than with SSL, but only when WM load was high. No effect of preexisting phonological representation was detected. The deaf signers performed better than the hearing nonsigners with all sign-based materials, but this effect did not generalize to nonlinguistic manual actions. We argue that deaf signers, who are highly reliant on visual information for communication, develop expertise in processing sign-based items, even when those items do not have preexisting semantic or phonological representations. Preexisting semantic representation, however, enhances the quality of the gesture-based representations temporarily maintained in WM by this group, thereby releasing WM resources to deal with increased load. Hearing signers, on the other hand, may make strategic use of their speech-based representations for mnemonic purposes. The overall pattern of results is in line with flexible-resource models of WM.

  20. Semantator: annotating clinical narratives with semantic web ontologies.

    Science.gov (United States)

    Song, Dezhao; Chute, Christopher G; Tao, Cui

    2012-01-01

    To facilitate clinical research, clinical data need to be stored in a machine-processable and understandable way. Manually annotating clinical data is time-consuming. Automatic approaches (e.g., Natural Language Processing systems) have been adopted to convert such data into structured formats; however, the quality of such automatically extracted data may not always be satisfactory. In this paper, we propose Semantator, a semi-automatic tool for document annotation with Semantic Web ontologies. Given a loaded free-text document and an ontology, Semantator supports the creation/deletion of ontology instances for any document fragment and the linking/disconnecting of instances with the properties in the ontology, and also enables automatic annotation by connecting to the NCBO annotator and cTAKES. By representing annotations in Semantic Web standards, Semantator supports reasoning based upon the underlying semantics of the owl:disjointWith and owl:equivalentClass predicates. We present a discussion based on user experiences of using Semantator.

  1. Semantic organization in children with Cochlear Implants: Computational analysis of verbal fluency

    Directory of Open Access Journals (Sweden)

    Yoed Nissan Kenett

    2013-09-01

    Purpose: Cochlear implants (CIs) enable children with severe and profound hearing impairments to perceive the sensation of sound sufficiently to permit oral language acquisition. So far, studies have focused mainly on technological improvements and general outcomes of implantation for speech perception and spoken language development. This study quantitatively explored the semantic networks of children with CIs in comparison to those of age-matched normal-hearing (NH) peers. Method: Twenty-seven children with CIs and twenty-seven age- and IQ-matched NH children ages 7-10 were tested on a timed animal verbal fluency task (“Name as many animals as you can”). The responses were analyzed using correlation and network methodologies. The structures of the animal-category semantic networks for both groups were extracted and compared. Results: Children with CIs appeared to have a less developed semantic lexicon structure than age-matched NH peers. The average shortest path length and the network diameter measures were larger for the NH group than for the CI group. This difference was consistent for the analysis of networks derived from animal names generated by each group (sample-matched correlation networks) and for the networks derived from the common animal names generated by both groups (word-matched correlation networks). Conclusions: The main difference between the semantic networks of children with CIs and NH children lies in the network structure. The semantic network of children with CIs is under-developed compared to that of age-matched NH children. We discuss the practical and clinical implications of our findings.
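
The two network measures compared above, average shortest path length and network diameter, can be computed by breadth-first search over an unweighted semantic network. A minimal sketch; the toy adjacency structure is illustrative, not data from the study:

```python
from collections import deque

def bfs_distances(adj, src):
    """Hop counts from src in an undirected graph given as {node: neighbor_set}."""
    dist = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def path_stats(adj):
    """(average shortest path length, diameter) over connected node pairs."""
    lengths = []
    for src in adj:
        d = bfs_distances(adj, src)
        lengths.extend(hops for node, hops in d.items() if node != src)
    return sum(lengths) / len(lengths), max(lengths)
```

For a three-node chain the average path length is 4/3 and the diameter is 2; a larger, more spread-out lexicon yields larger values on both measures, the pattern the study reports for the NH group.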

  2. User-Centred Design for Chinese-Oriented Spoken English Learning System

    Science.gov (United States)

    Yu, Ping; Pan, Yingxin; Li, Chen; Zhang, Zengxiu; Shi, Qin; Chu, Wenpei; Liu, Mingzhuo; Zhu, Zhiting

    2016-01-01

    Oral production is an important part in English learning. Lack of a language environment with efficient instruction and feedback is a big issue for non-native speakers' English spoken skill improvement. A computer-assisted language learning system can provide many potential benefits to language learners. It allows adequate instructions and instant…

  3. Survey of semantic modeling techniques

    Energy Technology Data Exchange (ETDEWEB)

    Smith, C.L.

    1975-07-01

    The analysis of the semantics of programming languages has been attempted with numerous modeling techniques. By providing a brief survey of these techniques together with an analysis of their applicability for answering semantic issues, this report attempts to illuminate the state of the art in this area. The intent is to be illustrative rather than thorough in the coverage of semantic models. A bibliography is included for the reader who is interested in pursuing this area of research in more detail.

  4. Vývoj sociální kognice českých neslyšících dětí — uživatelů českého znakového jazyka a uživatelů mluvené češtiny: adaptace testové baterie : Development of Social Cognition in Czech Deaf Children — Czech Sign Language Users and Czech Spoken Language Users: Adaptation of a Test Battery

    Directory of Open Access Journals (Sweden)

    Andrea Hudáková

    2017-11-01

    The present paper describes the process of adapting a set of tasks for testing theory-of-mind competencies, the Theory of Mind Task Battery, for use with the population of Czech Deaf children — both users of Czech Sign Language and those using spoken Czech.

  5. Can monitoring in language comprehension in Autism Spectrum Disorder be modulated? Evidence From P600 to Semantic

    NARCIS (Netherlands)

    Koolen, S.; Vissers, C.T.W.M.; Egger, J.I.M.; Verhoeven, L.T.W.

    2013-01-01

    Objective: Individuals with Autism Spectrum Disorder (ASD) generally show impairments in language comprehension. It is often assumed that these difficulties reflect a linguistic deficit. We propose, however, that language difficulties result from atypical cognitive control processes. Recent

  6. On the Semantics of Focus

    Science.gov (United States)

    Kess, Joseph F.

    1975-01-01

    This article discusses the semantics of the notion of focus, insofar as it relates to Filipino languages. The evolution of this notion is reviewed, and an alternative explanation of it is given, stressing the fact that grammar and semantics should be kept separate in a discussion of focus. (CLK)

  7. Developing a corpus of spoken language variability

    Science.gov (United States)

    Carmichael, Lesley; Wright, Richard; Wassink, Alicia Beckford

    2003-10-01

    We are developing a novel, searchable corpus as a research tool for investigating phonetic and phonological phenomena across various speech styles. Five speech styles have been well studied independently in previous work: reduced (casual), careful (hyperarticulated), citation (reading), Lombard effect (speech in noise), and "motherese" (child-directed speech). Few studies to date have collected a wide range of styles from a single set of speakers, and fewer yet have provided publicly available corpora. The pilot corpus includes recordings of (1) a set of speakers participating in a variety of tasks designed to elicit the five speech styles, and (2) casual peer conversations and wordlists to illustrate regional vowels. The data include high-quality recordings and time-aligned transcriptions linked to text files that can be queried. Initial measures drawn from the database provide comparison across speech styles along the following acoustic dimensions: MLU (changes in unit duration); relative intra-speaker intensity changes (mean and dynamic range); and intra-speaker pitch values (minimum, maximum, mean, range). The corpus design will allow for a variety of analyses requiring control of demographic and style factors, including hyperarticulation variety, disfluencies, intonation, discourse analysis, and detailed spectral measures.

  8. Task modulation of disyllabic spoken word recognition in Mandarin Chinese: a unimodal ERP study.

    Science.gov (United States)

    Huang, Xianjun; Yang, Jin-Chen; Chang, Ruohan; Guo, Chunyan

    2016-05-16

    Using unimodal auditory tasks of word-matching and meaning-matching, this study investigated how the phonological and semantic processes in Chinese disyllabic spoken word recognition are modulated by top-down mechanism induced by experimental tasks. Both semantic similarity and word-initial phonological similarity between the primes and targets were manipulated. Results showed that at early stage of recognition (~150-250 ms), an enhanced P2 was elicited by the word-initial phonological mismatch in both tasks. In ~300-500 ms, a fronto-central negative component was elicited by word-initial phonological similarities in the word-matching task, while a parietal negativity was elicited by semantically unrelated primes in the meaning-matching task, indicating that both the semantic and phonological processes can be involved in this time window, depending on the task requirements. In the late stage (~500-700 ms), a centro-parietal Late N400 was elicited in both tasks, but with a larger effect in the meaning-matching task than in the word-matching task. This finding suggests that the semantic representation of the spoken words can be activated automatically in the late stage of recognition, even when semantic processing is not required. However, the magnitude of the semantic activation is modulated by task requirements.

  9. Brain correlates of constituent structure in sign language comprehension.

    Science.gov (United States)

    Moreno, Antonio; Limousin, Fanny; Dehaene, Stanislas; Pallier, Christophe

    2018-02-15

    During sentence processing, areas of the left superior temporal sulcus, inferior frontal gyrus and left basal ganglia exhibit a systematic increase in brain activity as a function of constituent size, suggesting their involvement in the computation of syntactic and semantic structures. Here, we asked whether these areas play a universal role in language and therefore contribute to the processing of non-spoken sign language. Congenitally deaf adults who acquired French sign language as a first language and written French as a second language were scanned while watching sequences of signs in which the size of syntactic constituents was manipulated. An effect of constituent size was found in the basal ganglia, including the head of the caudate and the putamen. A smaller effect was also detected in temporal and frontal regions previously shown to be sensitive to constituent size in written language in hearing French subjects (Pallier et al., 2011). When the deaf participants read sentences versus word lists, the same network of language areas was observed. While reading and sign language processing yielded identical effects of linguistic structure in the basal ganglia, the effect of structure was stronger in all cortical language areas for written language relative to sign language. Furthermore, cortical activity was partially modulated by age of acquisition and reading proficiency. Our results stress the important role of the basal ganglia, within the language network, in the representation of the constituent structure of language, regardless of the input modality. Copyright © 2017 Elsevier Inc. All rights reserved.

  10. Towards Adaptive Spoken Dialog Systems

    CERN Document Server

    Schmitt, Alexander

    2013-01-01

In Monitoring Adaptive Spoken Dialog Systems, authors Alexander Schmitt and Wolfgang Minker investigate statistical approaches that allow for the recognition of negative dialog patterns in Spoken Dialog Systems (SDS). The stochastic methods presented allow for flexible, portable and accurate use. Beginning with the foundations of machine learning and pattern recognition, this monograph examines how frequently users show negative emotions in spoken dialog systems and develops novel approaches to speech-based emotion recognition, using a hybrid approach to model emotions. The authors make use of statistical methods based on acoustic, linguistic and contextual features to examine the relationship between the interaction flow and the occurrence of emotions, using non-acted recordings of several thousand real users from commercial and non-commercial SDS. Additionally, the authors present novel statistical methods that spot problems within a dialog based on interaction patterns. The approaches enable future SDS to offer m...

  11. V2 word order in subordinate clauses in spoken Danish

    DEFF Research Database (Denmark)

    Jensen, Torben Juel; Christensen, Tanya Karoli

    are asymmetrically distributed, we argue that the word order difference should rather be seen as a signal of (subtle) semantic differences. In main clauses, V3 is highly marked in comparison to V2, and occurs in what may be called emotives. In subordinate clauses, V2 is marked and signals what has been called...... ”assertiveness”, but is rather a question of foregrounding (cf. Simons 2007: Main Point of Utterance). The paper presents the results of a study of word order in subordinate clauses in contemporary spoken Danish and focuses on how to include the proposed semantic difference as a factor influencing the choice...... studies of two age cohorts of speakers in Copenhagen, recorded in the 1980s and again in 2005-07, and on recent recordings with two age cohorts of speakers from the western part of Jutland. This makes it possible to study variation and change with respect to word order in subordinate clauses in both real...

  12. Defunctionalized Interpreters for Programming Languages

    DEFF Research Database (Denmark)

    Danvy, Olivier

    2008-01-01

    by Reynolds in ``Definitional Interpreters for Higher-Order Programming Languages'' for functional implementations of denotational semantics, natural semantics, and big-step abstract machines using closure conversion, CPS transformation, and defunctionalization. Over the last few years, the author and his...... operational semantics can be expressed as a reduction semantics: for deterministic languages, a reduction semantics is a structural operational semantics in continuation style, where the reduction context is a defunctionalized continuation. As the defunctionalized counterpart of the continuation of a one...
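    Danvy's correspondence can be illustrated with a toy example (a sketch in Python, not code from the paper, whose own setting is ML-family functional languages): the anonymous continuations of a CPS evaluator for a tiny expression language are defunctionalized into tagged tuples plus an `apply_k` dispatcher, so the continuation becomes a first-order data structure that plays the role of a reduction context.

```python
# CPS evaluator: expressions are ('lit', n) or ('add', e1, e2); k is a function.
def eval_cps(expr, k):
    tag = expr[0]
    if tag == 'lit':
        return k(expr[1])
    elif tag == 'add':
        return eval_cps(expr[1], lambda v1:
               eval_cps(expr[2], lambda v2: k(v1 + v2)))

# Defunctionalized version: each lambda above becomes a tagged tuple, and
# apply_k dispatches on the tag -- the continuation is now concrete data.
def eval_defun(expr, k):
    tag = expr[0]
    if tag == 'lit':
        return apply_k(k, expr[1])
    elif tag == 'add':
        return eval_defun(expr[1], ('add1', expr[2], k))

def apply_k(k, v):
    if k[0] == 'halt':
        return v
    elif k[0] == 'add1':            # still waiting on the right operand
        return eval_defun(k[1], ('add2', v, k[2]))
    elif k[0] == 'add2':            # both operands known: add and continue
        return apply_k(k[2], k[1] + v)

e = ('add', ('lit', 1), ('add', ('lit', 2), ('lit', 3)))
print(eval_cps(e, lambda v: v))     # 6
print(eval_defun(e, ('halt',)))     # 6
```

    The stack of `('add1', ...)` and `('add2', ...)` frames is exactly the "hole-to-the-root" context of a reduction semantics, which is the correspondence the abstract describes.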

  13. Selective verbal recognition memory impairments are associated with atrophy of the language network in non-semantic variants of primary progressive aphasia.

    Science.gov (United States)

    Nilakantan, Aneesha S; Voss, Joel L; Weintraub, Sandra; Mesulam, M-Marsel; Rogalski, Emily J

    2017-06-01

Primary progressive aphasia (PPA) is clinically defined by an initial loss of language function and preservation of other cognitive abilities, including episodic memory. While PPA primarily affects the left-lateralized perisylvian language network, some clinical neuropsychological tests suggest concurrent initial memory loss. The goal of this study was to test recognition memory of objects and words in the visual and auditory modality to separate language-processing impairments from retentive memory in PPA. Individuals with non-semantic PPA had longer reaction times and higher false alarms for auditory word stimuli compared to visual object stimuli. Moreover, false alarms for auditory word recognition memory were related to cortical thickness within the left inferior frontal gyrus and left temporal pole, while false alarms for visual object recognition memory were related to cortical thickness within the right temporal pole. This pattern of results suggests that specific vulnerability in processing verbal stimuli can hinder episodic memory in PPA, and provides evidence for differential contributions of the left and right temporal poles in word and object recognition memory. Copyright © 2017 Elsevier Ltd. All rights reserved.

  14. Semantic Representatives of the Concept

    Directory of Open Access Journals (Sweden)

    Elena N. Tsay

    2013-01-01

In the article, concept, one of the principal notions of cognitive linguistics, is investigated. Considering concept as a cultural phenomenon with language realization and ethnocultural peculiarities, a description of the concept “happiness” is presented. The lexical and semantic paradigm of the concept of happiness correlates with a great number of lexical and semantic variants. The work reveals semantic representatives of the concept of happiness covering supreme spiritual values, and gives a semantic interpretation of their functioning in Biblical discourse.

  15. System semantics of explanatory dictionaries

    Directory of Open Access Journals (Sweden)

    Volodymyr Shyrokov

    2015-11-01

Some semantic properties of the language that follow from the structure of the lexicographical systems of big explanatory dictionaries are considered. Hyperchains and hypercycles are defined as a particular kind of automorphism of the lexicographical system of an explanatory dictionary. Some semantic consequences following from the principles of lexicographic closure and lexicographic completeness are investigated using the hyperchain and hypercycle formalism. The connection between the hypercycle properties of lexicographical system semantics and Gödel's incompleteness theorem is discussed.

  16. Semantic and translation priming from a first language to a second and back: Making sense of the findings

    OpenAIRE

    Schoonbaert, Sofie; Duyck, Wouter; Brysbaert, Marc; Hartsuiker, Robert

    2009-01-01

The present study investigated cross-language priming effects with unique noncognate translation pairs. Unbalanced Dutch (first language [L1])-English (second language [L2]) bilinguals performed a lexical decision task in a masked priming paradigm. The results of two experiments showed significant translation priming from L1 to L2 (meisje-GIRL) and from L2 to L1 (girl-MEISJE), using two different stimulus onset asynchronies (SOAs) (250 and 100 msec). Although translation priming from L1 to...

  17. Semantic Web Services with Web Ontology Language (OWL-S) - Specification of Agent-Services for DARPA Agent Markup Language (DAML)

    Science.gov (United States)

    2006-08-01

Sycara, and T. Nishimura, "Towards a Semantic Web Ecommerce," in Proceedings of the 6th Conference on Business Information Systems (BIS2003), Colorado...the ontology used is the fictitious ontology http://fly.com/Onto. The advantage of using concepts from Web-addressable ontologies, rather than XML...the advantage of the OWL-S approach compared with other approaches, namely BPEL4WS and WS-CDL, is that OWL-S allows the flexibility to change the

  18. SPOKEN BAHASA INDONESIA BY GERMAN STUDENTS

    Directory of Open Access Journals (Sweden)

    I Nengah Sudipa

    2014-11-01

This article investigates the spoken ability of German students in Bahasa Indonesia (BI). They had studied it for six weeks in the IBSN Program at Udayana University, Bali, Indonesia. The data were collected when the students sat for the mid-term oral test and were further analyzed with reference to the standard usage of BI. The result suggests that most students managed to express several concepts related to (1) LOCATION; (2) TIME; (3) TRANSPORT; (4) PURPOSE; (5) TRANSACTION; (6) IMPRESSION; (7) REASON; (8) FOOD AND BEVERAGE; and (9) NUMBER AND PERSON. The only problem a few students encountered is interference, i.e. influence from their own language system, especially in word order.

  19. Semantic Keys and Reading

    Directory of Open Access Journals (Sweden)

    Zev bar-Lev

    2016-12-01

Semantic Keys are elements (word-parts) of written language that give an iconic, general representation of the whole word’s meaning. In written Sino-Japanese the “radical” or semantic components play this role. For example, the character meaning ‘woman, female’ is the Semantic Key of the character for Ma ‘Mama’ (alongside the phonetic component Ma, which means ‘horse’ as a separate character). The theory of Semantic Keys in both graphic and phonemic aspects is called qTheory or nanosemantics. The most innovative aspect of the present article is the hypothesis that, in languages using alphabetic writing systems, the role of Semantic Key is played by consonants, more specifically the first consonant. Thus, L meaning ‘LIFT’ is the Semantic Key of English Lift, Ladle, Lofty, aLps, eLevator, oLympus; Spanish Leva, Levantarse, aLto, Lengua; Arabic aLLah; and Hebrew ªeL-ºaL ‘upto-above’ (the Israeli airline), Polish Lot ‘flight’ (the Polish airline); Hebrew ªeL, ªeLohim ‘God’, and haLLeluyah ‘praise-ye God’ (using Parallels, ‘Lift up God’). Evidence for the universality of the theory is shown by many examples drawn from various languages, including Indo-European, Semitic, Chinese and Japanese. The theory reveals hundreds of relationships within and between languages, related and unrelated, that have been “Hiding in Plain Sight”. To mention just one example: the Parallel between Spanish Pan ‘bread’ and Mandarin Fan ‘rice’.

  20. Semantic Multimedia

    NARCIS (Netherlands)

    S. Staab; A. Scherp; R. Arndt; R. Troncy (Raphael); M. Grzegorzek; C. Saathoff; S. Schenk; L. Hardman (Lynda)

    2008-01-01

Multimedia constitutes an interesting field of application for Semantic Web and Semantic Web reasoning, as the access and management of multimedia content and context depend strongly on the semantic descriptions of both. At the same time, multimedia resources constitute complex objects,

  1. Generative Semantics.

    Science.gov (United States)

    King, Margaret

    The first section of this paper deals with the attempts within the framework of transformational grammar to make semantics a systematic part of linguistic description, and outlines the characteristics of the generative semantics position. The second section takes a critical look at generative semantics in its later manifestations, and makes a case…

  2. Recognizing Young Readers' Spoken Questions

    Science.gov (United States)

    Chen, Wei; Mostow, Jack; Aist, Gregory

    2013-01-01

    Free-form spoken input would be the easiest and most natural way for young children to communicate to an intelligent tutoring system. However, achieving such a capability poses a challenge both to instruction design and to automatic speech recognition. To address the difficulties of accepting such input, we adopt the framework of predictable…

  3. Semantic Coherence Facilitates Distributional Learning.

    Science.gov (United States)

    Ouyang, Long; Boroditsky, Lera; Frank, Michael C

    2017-04-01

    Computational models have shown that purely statistical knowledge about words' linguistic contexts is sufficient to learn many properties of words, including syntactic and semantic category. For example, models can infer that "postman" and "mailman" are semantically similar because they have quantitatively similar patterns of association with other words (e.g., they both tend to occur with words like "deliver," "truck," "package"). In contrast to these computational results, artificial language learning experiments suggest that distributional statistics alone do not facilitate learning of linguistic categories. However, experiments in this paradigm expose participants to entirely novel words, whereas real language learners encounter input that contains some known words that are semantically organized. In three experiments, we show that (a) the presence of familiar semantic reference points facilitates distributional learning and (b) this effect crucially depends both on the presence of known words and the adherence of these known words to some semantic organization. Copyright © 2016 Cognitive Science Society, Inc.
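    The distributional idea referenced above (that "postman" and "mailman" pattern alike with words such as "deliver", "truck", "package") can be sketched with co-occurrence vectors and cosine similarity; the miniature corpus and window size below are invented purely for illustration.

```python
from collections import Counter
from math import sqrt

# Toy corpus (illustrative only): words sharing contexts end up similar.
corpus = ("the postman will deliver the package by truck "
          "the mailman will deliver the letter by truck "
          "the cat sat on the mat").split()

def context_vector(word, window=2):
    """Count words occurring within `window` positions of `word`."""
    counts = Counter()
    for i, w in enumerate(corpus):
        if w == word:
            lo, hi = max(0, i - window), min(len(corpus), i + window + 1)
            counts.update(corpus[lo:i] + corpus[i + 1:hi])
    return counts

def cosine(a, b):
    dot = sum(a[k] * b[k] for k in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

print(cosine(context_vector("postman"), context_vector("mailman")))  # higher
print(cosine(context_vector("postman"), context_vector("cat")))      # lower
```

    The experiments in the abstract concern human learners rather than such models, but this is the statistical signal whose usability the known-word "reference points" are claimed to unlock.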

  4. Ragnar Rommetveit's Approach to Everyday Spoken Dialogue from Within.

    Science.gov (United States)

    Kowal, Sabine; O'Connell, Daniel C

    2016-04-01

    The following article presents basic concepts and methods of Ragnar Rommetveit's (born 1924) hermeneutic-dialogical approach to everyday spoken dialogue with a focus on both shared consciousness and linguistically mediated meaning. He developed this approach originally in his engagement of mainstream linguistic and psycholinguistic research of the 1960s and 1970s. He criticized this research tradition for its individualistic orientation and its adherence to experimental methodology which did not allow the engagement of interactively established meaning and understanding in everyday spoken dialogue. As a social psychologist influenced by phenomenological philosophy, Rommetveit opted for an alternative conceptualization of such dialogue as a contextualized, partially private world, temporarily co-established by interlocutors on the basis of shared consciousness. He argued that everyday spoken dialogue should be investigated from within, i.e., from the perspectives of the interlocutors and from a psychology of the second person. Hence, he developed his approach with an emphasis on intersubjectivity, perspectivity and perspectival relativity, meaning potential of utterances, and epistemic responsibility of interlocutors. In his methods, he limited himself for the most part to casuistic analyses, i.e., logical analyses of fictitious examples to argue for the plausibility of his approach. After many years of experimental research on language, he pursued his phenomenologically oriented research on dialogue in English-language publications from the late 1980s up to 2003. During that period, he engaged psycholinguistic research on spoken dialogue carried out by Anglo-American colleagues only occasionally. Although his work remained unfinished and open to development, it provides both a challenging alternative and supplement to current Anglo-American research on spoken dialogue and some overlap therewith.

  5. Criteria for the segmentation of spoken input into individual utterances

    OpenAIRE

    Mast, Marion; Maier, Elisabeth; Schmitz, Birte

    1995-01-01

    This report describes how spoken language turns are segmented into utterances in the framework of the verbmobil project. The problem of segmenting turns is directly related to the task of annotating a discourse with dialogue act information: an utterance can be characterized as a stretch of dialogue that is attributed one dialogue act. Unfortunately, this rule in many cases is insufficient and many doubtful cases remain. We tried to at least reduce the number of unclear cases by providing a n...

  6. The time course of spoken word recognition in Mandarin Chinese: a unimodal ERP study.

    Science.gov (United States)

    Huang, Xianjun; Yang, Jin-Chen; Zhang, Qin; Guo, Chunyan

    2014-10-01

    In the present study, two experiments were carried out to investigate the time course of spoken word recognition in Mandarin Chinese using both event-related potentials (ERPs) and behavioral measures. To address the hypothesis that there is an early phonological processing stage independent of semantics during spoken word recognition, a unimodal word-matching paradigm was employed, in which both prime and target words were presented auditorily. Experiment 1 manipulated the phonological relations between disyllabic primes and targets, and found an enhanced P2 (200-270 ms post-target onset) as well as a smaller early N400 to word-initial phonological mismatches over fronto-central scalp sites. Experiment 2 manipulated both phonological and semantic relations between monosyllabic primes and targets, and replicated the phonological mismatch-associated P2, which was not modulated by semantic relations. Overall, these results suggest that P2 is a sensitive electrophysiological index of early phonological processing independent of semantics in Mandarin Chinese spoken word recognition. Copyright © 2014 Elsevier Ltd. All rights reserved.

  7. Morphological Family Size Effects in Young First and Second Language Learners: Evidence of Cross-Language Semantic Activation in Visual Word Recognition

    Science.gov (United States)

    de Zeeuw, Marlies; Verhoeven, Ludo; Schreuder, Robert

    2012-01-01

    This study examined to what extent young second language (L2) learners showed morphological family size effects in L2 word recognition and whether the effects were grade-level related. Turkish-Dutch bilingual children (L2) and Dutch (first language, L1) children from second, fourth, and sixth grade performed a Dutch lexical decision task on words…

  8. Episodic Memory, Semantic Memory, and Fluency.

    Science.gov (United States)

    Schaefer, Carl F.

    1980-01-01

    Suggests that creating a second-language semantic network can be conceived as developing a plan for retrieving second-language word forms. Characteristics of linguistic performance which will promote fluency are discussed in light of the distinction between episodic and semantic memory. (AMH)

  9. A Prerequisite to L1 Homophone Effects in L2 Spoken-Word Recognition

    Science.gov (United States)

    Nakai, Satsuki; Lindsay, Shane; Ota, Mitsuhiko

    2015-01-01

    When both members of a phonemic contrast in L2 (second language) are perceptually mapped to a single phoneme in one's L1 (first language), L2 words containing a member of that contrast can spuriously activate L2 words in spoken-word recognition. For example, upon hearing cattle, Dutch speakers of English are reported to experience activation…

  10. SEMANTIC DERIVATION OF BORROWINGS

    Directory of Open Access Journals (Sweden)

    Shigapova, F.F.

    2017-09-01

The author carried out a contrastive analysis of the word спикер, borrowed into Russian from English, and the English word speaker. The findings of the analysis confirm (1) different derivational abilities and functions of the borrowed word and the native word; (2) distinctive features in the definitions, i.e. semantic structures, registered in monolingual non-abridged dictionaries; (3) heterogeneous frequency parameters recorded in the National Corpus of the Russian Language and the British National Corpus; and (4) the absence of bilingual equivalent collocations with the words спикер and speaker. The collocations with the words studied revealed new lexical and connotative senses in the meaning of the word. The relevance of the study is justified by the new facts it reveals about the semantic adaptation of a borrowed word in the system of the Russian language and its paradigmatic and syntagmatic connections in the system of the recipient language.

  11. Serbian heritage language schools in the Netherlands through the eyes of the parents

    NARCIS (Netherlands)

    Palmen, Andrej

    It is difficult to find the exact number of other languages spoken besides Dutch in the Netherlands. A study showed that a total of 96 other languages are spoken by students attending Dutch primary and secondary schools. The variety of languages spoken shows the growth of linguistic diversity in the

  12. The effect of occlusion on the semantics of projective spatial terms: a case study in grounding language in perception.

    Science.gov (United States)

    Kelleher, John D; Ross, Robert J; Sloan, Colm; Mac Namee, Brian

    2011-02-01

    Although data-driven spatial template models provide a practical and cognitively motivated mechanism for characterizing spatial term meaning, the influence of perceptual rather than solely geometric and functional properties has yet to be systematically investigated. In the light of this, in this paper, we investigate the effects of the perceptual phenomenon of object occlusion on the semantics of projective terms. We did this by conducting a study to test whether object occlusion had a noticeable effect on the acceptance values assigned to projective terms with respect to a 2.5-dimensional visual stimulus. Based on the data collected, a regression model was constructed and presented. Subsequent analysis showed that the regression model that included the occlusion factor outperformed an adaptation of Regier & Carlson's well-regarded AVS model for that same spatial configuration.

  13. ODMSummary: A Tool for Automatic Structured Comparison of Multiple Medical Forms Based on Semantic Annotation with the Unified Medical Language System.

    Science.gov (United States)

    Storck, Michael; Krumm, Rainer; Dugas, Martin

    2016-01-01

    Medical documentation is applied in various settings including patient care and clinical research. Since procedures of medical documentation are heterogeneous and developed further, secondary use of medical data is complicated. Development of medical forms, merging of data from different sources and meta-analyses of different data sets are currently a predominantly manual process and therefore difficult and cumbersome. Available applications to automate these processes are limited. In particular, tools to compare multiple documentation forms are missing. The objective of this work is to design, implement and evaluate the new system ODMSummary for comparison of multiple forms with a high number of semantically annotated data elements and a high level of usability. System requirements are the capability to summarize and compare a set of forms, enable to estimate the documentation effort, track changes in different versions of forms and find comparable items in different forms. Forms are provided in Operational Data Model format with semantic annotations from the Unified Medical Language System. 12 medical experts were invited to participate in a 3-phase evaluation of the tool regarding usability. ODMSummary (available at https://odmtoolbox.uni-muenster.de/summary/summary.html) provides a structured overview of multiple forms and their documentation fields. This comparison enables medical experts to assess multiple forms or whole datasets for secondary use. System usability was optimized based on expert feedback. The evaluation demonstrates that feedback from domain experts is needed to identify usability issues. In conclusion, this work shows that automatic comparison of multiple forms is feasible and the results are usable for medical experts.
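    The core mechanism such a tool relies on, matching data elements across forms through shared semantic annotations, can be sketched as follows. This is not ODMSummary's own code, and the field names and UMLS-style concept codes are illustrative assumptions only.

```python
# Two hypothetical forms: field name -> semantic annotation (concept code).
form_a = {"heart_rate": "C0018810", "systolic_bp": "C0871470", "weight": "C0043100"}
form_b = {"pulse": "C0018810", "weight_kg": "C0043100", "height": "C0005890"}

codes_a, codes_b = set(form_a.values()), set(form_b.values())
shared = codes_a & codes_b
jaccard = len(shared) / len(codes_a | codes_b)   # overlap of the two forms

# Items with the same concept code are comparable even though names differ.
comparable = [(a, b) for a, ca in form_a.items()
                     for b, cb in form_b.items() if ca == cb]

print(sorted(shared))     # ['C0018810', 'C0043100']
print(jaccard)            # 0.5
print(comparable)         # [('heart_rate', 'pulse'), ('weight', 'weight_kg')]
```

    Annotation-level matching like this is what makes the comparison automatic: free-text field names ("pulse" vs. "heart_rate") would otherwise require manual review.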

  14. Decision table languages and systems

    CERN Document Server

    Metzner, John R

    1977-01-01

    ACM Monograph Series: Decision Table Languages and Systems focuses on linguistic examination of decision tables and survey of the features of existing decision table languages and systems. The book first offers information on semiotics, programming language features, and generalization. Discussions focus on semantic broadening, outer language enrichments, generalization of syntax, limitations, implementation improvements, syntactic and semantic features, decision table syntax, semantics of decision table languages, and decision table programming languages. The text then elaborates on design im

  15. Semantic metrics

    OpenAIRE

    Hu, Bo; Kalfoglou, Yannis; Dupplaw, David; Alani, Harith; Lewis, Paul; Shadbolt, Nigel

    2006-01-01

    In the context of the Semantic Web, many ontology-related operations, e.g. ontology ranking, segmentation, alignment, articulation, reuse, evaluation, can be boiled down to one fundamental operation: computing the similarity and/or dissimilarity among ontological entities, and in some cases among ontologies themselves. In this paper, we review standard metrics for computing distance measures and we propose a series of semantic metrics. We give a formal account of semantic metrics drawn from a...

  16. SELECTION OF ONTOLOGY FOR WEB SERVICE DESCRIPTION LANGUAGE TO ONTOLOGY WEB LANGUAGE CONVERSION

    OpenAIRE

    J. Mannar Mannan; M. Sundarambal; S. Raghul

    2014-01-01

The Semantic Web aims to extend the current human-readable Web by encoding some of the semantics of resources in a machine-processable form. As a Semantic Web component, Semantic Web Services (SWS) use mark-up that represents the data in a detailed and sophisticated machine-readable way. One such language is the Ontology Web Language (OWL). An existing conventional web service annotation can be changed to a semantic web service by mapping the Web Service Description Language (WSDL) with the semantic annotation of O...

  17. Early Childhood Stuttering and Electrophysiological Indices of Language Processing

    Science.gov (United States)

    Weber-Fox, Christine; Wray, Amanda Hampton; Arnold, Hayley

    2013-01-01

We examined neural activity mediating semantic and syntactic processing in 27 preschool-age children who stutter (CWS) and 27 preschool-age children who do not stutter (CWNS) matched for age, nonverbal IQ and language abilities. All participants displayed language abilities and nonverbal IQ within the normal range. Event-related brain potentials (ERPs) were elicited while participants watched a cartoon video and heard naturally spoken sentences that were either correct or contained semantic or syntactic (phrase structure) violations. ERPs in CWS, compared to CWNS, were characterized by longer N400 peak latencies elicited by semantic processing. In the CWS, syntactic violations elicited greater negative amplitudes for the early time window (150–350 ms) over medial sites compared to CWNS. Additionally, the amplitude of the P600 elicited by syntactic violations relative to control words was significant over the left hemisphere for the CWNS but showed the reverse pattern in CWS, a robust effect only over the right hemisphere. Both groups of preschool-age children demonstrated marked and differential effects for neural processes elicited by semantic and phrase structure violations; however, a significant proportion of young CWS exhibit differences in the neural functions mediating language processing compared to CWNS despite normal language abilities. These results are the first to show that differences in event-related brain potentials reflecting language processing occur as early as the preschool years in CWS and provide the first evidence that the atypical lateralization of hemispheric speech/language functions previously observed in the brains of adults who stutter begins to emerge near the onset of developmental stuttering. PMID:23773672

  18. Language and ToM Development in Autism versus Asperger Syndrome: Contrasting Influences of Syntactic versus Lexical/Semantic Maturity

    Science.gov (United States)

    Paynter, Jessica; Peterson, Candida

    2010-01-01

    Theory of mind (ToM) development by a sample of 63 children aged 5-12 years (24 with Asperger syndrome, 19 with high-functioning autism, and 20 age-matched typical developers) was assessed with a five-task false-belief battery in relation to both lexical (vocabulary) and syntactic (grammar) language skills. Contrary to some previous research, no…

  19. On the Parallel Deterioration of Lexico-Semantic Processes in the Bilinguals' Two Languages: Evidence from Alzheimer's Disease

    Science.gov (United States)

    Costa, Albert; Calabria, Marco; Marne, Paula; Hernandez, Mireia; Juncadella, Montserrat; Gascon-Bayarri, Jordi; Lleo, Alberto; Ortiz-Gil, Jordi; Ugas, Lidia; Blesa, Rafael; Rene, Ramon

    2012-01-01

    In this article we aimed to assess how Alzheimer's disease (AD), which is neurodegenerative, affects the linguistic performance of early, high-proficient bilinguals in their two languages. To this end, we compared the Picture Naming and Word Translation performances of two groups of AD patients varying in disease progression (Mild and Moderate)…

  20. Semantic content-based recommendations using semantic graphs.

    Science.gov (United States)

    Guo, Weisen; Kraines, Steven B

    2010-01-01

    Recommender systems (RSs) can be useful for suggesting items that might be of interest to specific users. Most existing content-based recommendation (CBR) systems are designed to recommend items based on text content, and the items in these systems are usually described with keywords. However, similarity evaluations based on keywords suffer from the ambiguity of natural languages. We present a semantic CBR method that uses Semantic Web technologies to recommend items that are more similar semantically with the items that the user prefers. We use semantic graphs to represent the items and we calculate the similarity scores for each pair of semantic graphs using an inverse graph frequency algorithm. The items having higher similarity scores to the items that are known to be preferred by the user are recommended.
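    One plausible reading of the inverse-graph-frequency weighting described above (not necessarily the authors' exact algorithm; the documents and triples below are invented) treats each item's semantic graph as a set of triples and, by analogy with IDF for terms, weights shared triples by how rarely they occur across the collection.

```python
from math import log

# Each item is a semantic graph, here a set of (subject, predicate, object) triples.
graphs = {
    "doc1": {("aspirin", "isa", "nsaid"), ("nsaid", "treats", "pain")},
    "doc2": {("ibuprofen", "isa", "nsaid"), ("nsaid", "treats", "pain")},
    "doc3": {("python", "isa", "language")},
}

# Graph frequency: in how many items does each triple occur?
N = len(graphs)
df = {}
for triples in graphs.values():
    for t in triples:
        df[t] = df.get(t, 0) + 1

def igf(t):
    # Inverse graph frequency: rare triples count more, like IDF for terms.
    return log(1 + N / df[t])

def similarity(g1, g2):
    shared = graphs[g1] & graphs[g2]
    total = sum(igf(t) for t in graphs[g1] | graphs[g2])
    return sum(igf(t) for t in shared) / total if total else 0.0

print(similarity("doc1", "doc2"))   # positive: one shared triple
print(similarity("doc1", "doc3"))   # 0.0: nothing in common
```

    Items whose graphs share rare, discriminative triples with the user's preferred items would then be ranked highest for recommendation.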

  1. Introducing Spoken Dialogue Systems into Intelligent Environments

    CERN Document Server

    Heinroth, Tobias

    2013-01-01

    Introducing Spoken Dialogue Systems into Intelligent Environments outlines the formalisms of a novel knowledge-driven framework for spoken dialogue management and presents the implementation of a model-based Adaptive Spoken Dialogue Manager(ASDM) called OwlSpeak. The authors have identified three stakeholders that potentially influence the behavior of the ASDM: the user, the SDS, and a complex Intelligent Environment (IE) consisting of various devices, services, and task descriptions. The theoretical foundation of a working ontology-based spoken dialogue description framework, the prototype implementation of the ASDM, and the evaluation activities that are presented as part of this book contribute to the ongoing spoken dialogue research by establishing the fertile ground of model-based adaptive spoken dialogue management. This monograph is ideal for advanced undergraduate students, PhD students, and postdocs as well as academic and industrial researchers and developers in speech and multimodal interactive ...

  2. Action representation: crosstalk between semantics and pragmatics.

    Science.gov (United States)

    Prinz, Wolfgang

    2014-03-01

    Marc Jeannerod pioneered a representational approach to movement and action. In his approach, motor representations provide both declarative knowledge about action and procedural knowledge for action (action semantics and action pragmatics, respectively). Recent evidence from language comprehension and action simulation supports the claim that action pragmatics and action semantics draw on common representational resources, thus challenging the traditional divide between declarative and procedural action knowledge. To account for these observations, three kinds of theoretical frameworks are discussed: (i) semantics is grounded in pragmatics, (ii) pragmatics is anchored in semantics, and (iii) pragmatics is part and parcel of semantics. © 2013 Elsevier Ltd. All rights reserved.

  3. Higher-order semantic structures in an African Grey parrot's vocalizations: evidence from the hyperspace analog to language (HAL) model.

    Science.gov (United States)

    Kaufman, Allison B; Colbert-White, Erin N; Burgess, Curt

    2013-09-01

    Previous research has described the significant role that social interaction plays in both the acquisition and use of speech by parrots. The current study analyzed the speech of one home-raised African Grey parrot (Psittacus erithacus erithacus) across three different social contexts: owner interacting with parrot in the same room, owner and parrot interacting out of view in adjacent rooms, and parrot home alone. The purpose was to determine the extent to which the subject's speech reflected an understanding of the contextual substitutability (e.g., the word street can be substituted in context for the word road) of the vocalizations that comprised the units in her repertoire (i.e., global co-occurrence of repertoire units; Burgess in Behav Res Methods Instrum Comput 30:188-198, 1998; Lund and Burgess in Behav Res Methods Instrum Comput 28:203-208, 1996). This was accomplished via the human language model hyperspace analog to language (HAL). HAL is contextually driven and bootstraps language "rules" from input without human intervention. Because HAL does not require human tutelage, it provided an objective measure to empirically examine the parrot's vocalizations. Results indicated that the subject's vocalization patterns did contain global co-occurrence. The presence of this quality in this nonhuman's speech may be strongly indicative of higher-order cognitive skills.
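
    HAL builds a word-by-word co-occurrence matrix with a distance-weighted sliding window, so contextually substitutable words (e.g. street/road) end up with similar context vectors. The sketch below illustrates that construction; the window size and weighting scheme are simplifying assumptions, not the published HAL parameters, and all names are ours.

```python
from collections import defaultdict

def hal_matrix(tokens, window=5):
    """Build a HAL-style co-occurrence matrix: each word accumulates
    weighted counts of the words preceding it, with closer neighbours
    weighted more heavily (weight = window - distance + 1)."""
    m = defaultdict(lambda: defaultdict(float))
    for i, target in enumerate(tokens):
        for d in range(1, window + 1):
            j = i - d
            if j < 0:
                break
            m[target][tokens[j]] += window - d + 1
    return m

def similarity(m, w1, w2):
    """Cosine similarity of two words' context vectors; a high value
    indicates contextual substitutability ('street' vs 'road')."""
    ctx = set(m[w1]) | set(m[w2])
    dot = sum(m[w1][c] * m[w2][c] for c in ctx)
    n1 = sum(v * v for v in m[w1].values()) ** 0.5
    n2 = sum(v * v for v in m[w2].values()) ** 0.5
    return dot / (n1 * n2) if n1 and n2 else 0.0
```

    Because the matrix is bootstrapped from raw input alone, the same procedure can be run over a transcribed vocalization corpus without any human tutelage, which is what makes it usable as an objective measure here.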

  4. The semantics of prosody: acoustic and perceptual evidence of prosodic correlates to word meaning.

    Science.gov (United States)

    Nygaard, Lynne C; Herold, Debora S; Namy, Laura L

    2009-01-01

    This investigation examined whether speakers produce reliable prosodic correlates to meaning across semantic domains and whether listeners use these cues to derive word meaning from novel words. Speakers were asked to produce phrases in infant-directed speech in which novel words were used to convey one of two meanings from a set of antonym pairs (e.g., big/small). Acoustic analyses revealed that some acoustic features were correlated with overall valence of the meaning. However, each word meaning also displayed a unique acoustic signature, and semantically related meanings elicited similar acoustic profiles. In two perceptual tests, listeners either attempted to identify the novel words with a matching meaning dimension (picture pair) or with mismatched meaning dimensions. Listeners inferred the meaning of the novel words significantly more often when prosody matched the word meaning choices than when prosody mismatched. These findings suggest that speech contains reliable prosodic markers to word meaning and that listeners use these prosodic cues to differentiate meanings. That prosody is semantic suggests a reconceptualization of traditional distinctions between linguistic and nonlinguistic properties of spoken language. Copyright © 2009 Cognitive Science Society, Inc.

  5. The time course of morphological processing during spoken word recognition in Chinese.

    Science.gov (United States)

    Shen, Wei; Qu, Qingqing; Ni, Aiping; Zhou, Junyi; Li, Xingshan

    2017-12-01

    We investigated the time course of morphological processing during spoken word recognition using the printed-word paradigm. Chinese participants were asked to listen to a spoken disyllabic compound word while simultaneously viewing a printed-word display. Each visual display consisted of three printed words: a semantic associate of the first constituent of the compound word (morphemic competitor), a semantic associate of the whole compound word (whole-word competitor), and an unrelated word (distractor). Participants were directed to detect whether the spoken target word was on the visual display. Results indicated that both the morphemic and whole-word competitors attracted more fixations than the distractor. More importantly, the morphemic competitor began to diverge from the distractor immediately at the acoustic offset of the first constituent, which was earlier than the whole-word competitor. These results suggest that lexical access to the auditory word is incremental and morphological processing (i.e., semantic access to the first constituent) that occurs at an early processing stage before access to the representation of the whole word in Chinese.

  6. Polish Semantic Parser

    Directory of Open Access Journals (Sweden)

    Agnieszka Grudzinska

    2000-01-01

    Full Text Available The amount of information transferred by computers grows very rapidly, outgrowing the average person's capacity to receive it. This implies a growing demand for computer programs that would be able to perform an introductory classification, or even a selection, of information directed to a particular receiver. Due to the complexity of the problem, we restricted it to understanding short newspaper notes. Among the many conceptions formulated so far, the conceptual dependency theory worked out by Roger Schank has been chosen. It is a formal language for describing the semantics of an utterance, integrated with a text understanding algorithm. A substantial part of each text transformation system is a semantic parser of the Polish language. It is the module which, as the first and the only one, has access to the text in Polish. It plays the role of an element that finds relations between words of the Polish language and the formal representation. It translates sentences written in the language used by people into the language of the theory. The presented structure of knowledge units and the shape of the understanding-process algorithms are universal by virtue of the theory. On the other hand, the defined knowledge units and the rules used in the algorithms are only examples, because they are constructed in order to understand short newspaper notes.

  7. Are Some Semantic Changes Predictable?

    DEFF Research Database (Denmark)

    Schousboe, Steen

    2010-01-01

    Historical linguistics is traditionally concerned with phonology and syntax. With the exception of grammaticalization - the development of auxiliary verbs, the syntactic rather than localistic use of prepositions, etc. - semantic change has usually not been described as a result of regular developments, but only as specific meaning changes in individual words. This paper will suggest some regularities in semantic change, regularities which, like sound laws, have predictive power and can be tested against recorded languages.

  8. The contribution of phonological knowledge, memory, and language background to reading comprehension in deaf populations

    Science.gov (United States)

    Hirshorn, Elizabeth A.; Dye, Matthew W. G.; Hauser, Peter; Supalla, Ted R.; Bavelier, Daphne

    2015-01-01

    While reading is challenging for many deaf individuals, some become proficient readers. Little is known about the component processes that support reading comprehension in these individuals. Speech-based phonological knowledge is one of the strongest predictors of reading comprehension in hearing individuals, yet its role in deaf readers is controversial. This could reflect the highly varied language backgrounds among deaf readers as well as the difficulty of disentangling the relative contribution of phonological versus orthographic knowledge of spoken language, in our case ‘English,’ in this population. Here we assessed the impact of language experience on reading comprehension in deaf readers by recruiting oral deaf individuals, who use spoken English as their primary mode of communication, and deaf native signers of American Sign Language. First, to address the contribution of spoken English phonological knowledge in deaf readers, we present novel tasks that evaluate phonological versus orthographic knowledge. Second, the impact of this knowledge, as well as memory measures that rely differentially on phonological (serial recall) and semantic (free recall) processing, on reading comprehension was evaluated. The best predictor of reading comprehension differed as a function of language experience, with free recall being a better predictor in deaf native signers than in oral deaf. In contrast, the measures of English phonological knowledge, independent of orthographic knowledge, best predicted reading comprehension in oral deaf individuals. These results suggest successful reading strategies differ across deaf readers as a function of their language experience, and highlight a possible alternative route to literacy in deaf native signers. Highlights: 1. Deaf individuals vary in their orthographic and phonological knowledge of English as a function of their language experience. 2. Reading comprehension was best predicted by different factors in oral deaf and

  9. Exploring the learnability and usability of a near field communication-based application for semantic enrichment in children with language disorders.

    Science.gov (United States)

    Lorusso, Maria Luisa; Biffi, Emilia; Molteni, Massimo; Reni, Gianluigi

    2018-01-01

    Recently, a few software applications (apps) have been developed to enhance vocabulary and conceptual networks to address the needs of children with language impairments (LI), but there is no evidence about their impact and their usability in therapy contexts. Here, we try to fill this gap presenting a system aimed at improving the semantic competence and the structural knowledge of children with LI. The goal of the study is to evaluate learnability, usability, user satisfaction and quality of the interaction between the system and the children. The system consists of a tablet, hosting an app with educational and training purposes, equipped with a Near Field Communication (NFC) reader, used to interact with the user by means of objects. Fourteen preschool children with LI played with the device during one 45-minute speech therapy session. Reactions and feedbacks were recorded and rated. The system proved to be easy to understand and learn, as well as engaging and rewarding. The success of the device probably rests on the integration of smart technology and real, tangible objects. The device can be seen as a valuable aid to support and enhance communication abilities in children with LI as well as typically developing individuals.

  10. Biomechanically Preferred Consonant-Vowel Combinations Fail to Appear in Adult Spoken Corpora

    Science.gov (United States)

    Whalen, D. H.; Giulivi, Sara; Nam, Hosung; Levitt, Andrea G.; Hallé, Pierre; Goldstein, Louis M.

    2012-01-01

    Certain consonant/vowel (CV) combinations are more frequent than would be expected from the individual C and V frequencies alone, both in babbling and, to a lesser extent, in adult language, based on dictionary counts: Labial consonants co-occur with central vowels more often than chance would dictate; coronals co-occur with front vowels, and velars with back vowels (Davis & MacNeilage, 1994). Plausible biomechanical explanations have been proposed, but it is also possible that infants are mirroring the frequency of the CVs that they hear. As noted, previous assessments of adult language were based on dictionaries; these “type” counts are incommensurate with the babbling measures, which are necessarily “token” counts. We analyzed the tokens in two spoken corpora for English, two for French and one for Mandarin. We found that the adult spoken CV preferences correlated with the type counts for Mandarin and French, not for English. Correlations between the adult spoken corpora and the babbling results had all three possible outcomes: significantly positive (French), uncorrelated (Mandarin), and significantly negative (English). There were no correlations of the dictionary data with the babbling results when we consider all nine combinations of consonants and vowels. The results indicate that spoken frequencies of CV combinations can differ from dictionary (type) counts and that the CV preferences apparent in babbling are biomechanically driven and can ignore the frequencies of CVs in the ambient spoken language. PMID:23420980
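
    The token-based comparison rests on an observed-versus-expected count: a CV pair is "preferred" when it occurs more often than the product of its marginal C and V frequencies predicts. A minimal sketch of that O/E computation over two-character CV tokens follows; the flat string encoding is a simplification of the actual phonetic coding, and the function name is ours.

```python
from collections import Counter

def cv_ratios(syllables):
    """Observed/expected (O/E) ratio for each CV pair in a token list.
    O/E > 1 means the pair co-occurs more often than its marginal
    consonant and vowel frequencies alone would predict."""
    pair = Counter(syllables)
    c = Counter(s[0] for s in syllables)   # consonant marginals
    v = Counter(s[1] for s in syllables)   # vowel marginals
    n = len(syllables)
    # expected count of pair (C, V) under independence: c[C] * v[V] / n
    return {cv: count / (c[cv[0]] * v[cv[1]] / n)
            for cv, count in pair.items()}
```

    Running this over token counts from a spoken corpus, rather than over a dictionary's type counts, is precisely the methodological shift the study argues for.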

  11. Engineering Object-Oriented Semantics Using Graph Transformations

    NARCIS (Netherlands)

    Kastenberg, H.; Kleppe, A.G.; Rensink, Arend

    In this paper we describe the application of the theory of graph transformations to the practise of language design. We have defined the semantics of a small but realistic object-oriented language (called TAAL) by mapping the language constructs to graphs and their operational semantics to graph

  12. Verbal and non-verbal semantic impairment: From fluent primary progressive aphasia to semantic dementia

    Directory of Open Access Journals (Sweden)

    Mirna Lie Hosogi Senaha

    Full Text Available Selective disturbances of semantic memory have attracted the interest of many investigators and the question of the existence of single or multiple semantic systems remains a very controversial theme in the literature. Objectives: To discuss the question of multiple semantic systems based on a longitudinal study of a patient who presented semantic dementia from fluent primary progressive aphasia. Methods: A 66 year-old woman with selective impairment of semantic memory was examined on two occasions, undergoing neuropsychological and language evaluations, the results of which were compared to those of three paired control individuals. Results: In the first evaluation, physical examination was normal and the score on the Mini-Mental State Examination was 26. Language evaluation revealed fluent speech, anomia, disturbance in word comprehension, preservation of the syntactic and phonological aspects of the language, besides surface dyslexia and dysgraphia. Autobiographical and episodic memories were relatively preserved. In semantic memory tests, the following dissociation was found: disturbance of verbal semantic memory with preservation of non-verbal semantic memory. Magnetic resonance of the brain revealed marked atrophy of the left anterior temporal lobe. After 14 months, the difficulties in verbal semantic memory had become more severe and the semantic disturbance, limited initially to the linguistic sphere, had worsened to involve non-verbal domains. Conclusions: Given the dissociation found in the first examination, we believe there is sufficient clinical evidence to refute the existence of a unitary semantic system.

  13. Development and Relationships Between Phonological Awareness, Morphological Awareness and Word Reading in Spoken and Standard Arabic

    Directory of Open Access Journals (Sweden)

    Rachel Schiff

    2018-04-01

    Full Text Available This study addressed the development of and the relationship between foundational metalinguistic skills and word reading skills in Arabic. It compared Arabic-speaking children’s phonological awareness (PA, morphological awareness, and voweled and unvoweled word reading skills in spoken and standard language varieties separately in children across five grade levels from childhood to adolescence. Second, it investigated whether skills developed in the spoken variety of Arabic predict reading in the standard variety. Results indicate that although individual differences between students in PA are eliminated toward the end of elementary school in both spoken and standard language varieties, gaps in morphological awareness and in reading skills persisted through junior and high school years. The results also show that the gap in reading accuracy and fluency between Spoken Arabic (SpA and Standard Arabic (StA was evident in both voweled and unvoweled words. Finally, regression analyses showed that morphological awareness in SpA contributed to reading fluency in StA, i.e., children’s early morphological awareness in SpA explained variance in children’s gains in reading fluency in StA. These findings have important theoretical and practical contributions for Arabic reading theory in general and they extend the previous work regarding the cross-linguistic relevance of foundational metalinguistic skills in the first acquired language to reading in a second language, as in societal bilingualism contexts, or a second language variety, as in diglossic contexts.

  14. Development and Relationships Between Phonological Awareness, Morphological Awareness and Word Reading in Spoken and Standard Arabic

    Science.gov (United States)

    Schiff, Rachel; Saiegh-Haddad, Elinor

    2018-01-01

    This study addressed the development of and the relationship between foundational metalinguistic skills and word reading skills in Arabic. It compared Arabic-speaking children’s phonological awareness (PA), morphological awareness, and voweled and unvoweled word reading skills in spoken and standard language varieties separately in children across five grade levels from childhood to adolescence. Second, it investigated whether skills developed in the spoken variety of Arabic predict reading in the standard variety. Results indicate that although individual differences between students in PA are eliminated toward the end of elementary school in both spoken and standard language varieties, gaps in morphological awareness and in reading skills persisted through junior and high school years. The results also show that the gap in reading accuracy and fluency between Spoken Arabic (SpA) and Standard Arabic (StA) was evident in both voweled and unvoweled words. Finally, regression analyses showed that morphological awareness in SpA contributed to reading fluency in StA, i.e., children’s early morphological awareness in SpA explained variance in children’s gains in reading fluency in StA. These findings have important theoretical and practical contributions for Arabic reading theory in general and they extend the previous work regarding the cross-linguistic relevance of foundational metalinguistic skills in the first acquired language to reading in a second language, as in societal bilingualism contexts, or a second language variety, as in diglossic contexts. PMID:29686633

  15. The Role of Simple Semantics in the Process of Artificial Grammar Learning

    Science.gov (United States)

    Öttl, Birgit; Jäger, Gerhard; Kaup, Barbara

    2017-01-01

    This study investigated the effect of semantic information on artificial grammar learning (AGL). Recursive grammars of different complexity levels (regular language, mirror language, copy language) were investigated in a series of AGL experiments. In the with-semantics condition, participants acquired semantic information prior to the AGL…

  16. Spatial Language Learning

    Science.gov (United States)

    Fu, Zhengling

    2016-01-01

    Spatial language constitutes part of the basic fabric of language. Although languages may have the same number of terms to cover a set of spatial relations, they do not always do so in the same way. Spatial languages differ across languages quite radically, thus providing a real semantic challenge for second language learners. The essay first…

  17. Digital Language Death

    Science.gov (United States)

    Kornai, András

    2013-01-01

    Of the approximately 7,000 languages spoken today, some 2,500 are generally considered endangered. Here we argue that this consensus figure vastly underestimates the danger of digital language death, in that less than 5% of all languages can still ascend to the digital realm. We present evidence of a massive die-off caused by the digital divide. PMID:24167559

  18. Digital language death.

    Directory of Open Access Journals (Sweden)

    András Kornai

    Full Text Available Of the approximately 7,000 languages spoken today, some 2,500 are generally considered endangered. Here we argue that this consensus figure vastly underestimates the danger of digital language death, in that less than 5% of all languages can still ascend to the digital realm. We present evidence of a massive die-off caused by the digital divide.

  19. A Denotational Semantics for Communicating Unstructured Code

    Directory of Open Access Journals (Sweden)

    Nils Jähnig

    2015-03-01

    Full Text Available An important property of programming language semantics is that they should be compositional. However, unstructured low-level code contains goto-like commands making it hard to define a semantics that is compositional. In this paper, we follow the ideas of Saabas and Uustalu to structure low-level code. This gives us the possibility to define a compositional denotational semantics based on least fixed points to allow for the use of inductive verification methods. We capture the semantics of communication using finite traces similar to the denotations of CSP. In addition, we examine properties of this semantics and give an example that demonstrates reasoning about communication and jumps. With this semantics, we lay the foundations for a proof calculus that captures both, the semantics of unstructured low-level code and communication.
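
    The least-fixed-point construction the abstract relies on can be illustrated at toy scale. The sketch below shows Kleene iteration in general, not the paper's calculus for unstructured code: the denotation of `while cond do body` over a finite state space is computed by starting from the everywhere-undefined map (bottom) and applying the loop functional until it stabilises. The names `cond`, `body`, and `states` are our own.

```python
def lfp_while(cond, body, states):
    """Denotation of `while cond do body` over a finite state space as a
    least fixed point, computed by Kleene iteration.  States on which
    the loop diverges simply remain undefined (partial-function
    semantics)."""
    states = list(states)
    sem = {}  # partial map: entry state -> exit state; absent = undefined
    while True:
        new = {}
        for s in states:
            if not cond(s):
                new[s] = s        # guard false: loop exits, state unchanged
            else:
                t = body(s)
                if t in sem:      # continue with the current approximation
                    new[s] = sem[t]
        if new == sem:            # fixed point reached
            return sem
        sem = new
```

    For a terminating countdown loop every state maps to 0, while a state that loops forever never enters the map, which is exactly the behaviour a compositional semantics needs in order to support inductive verification.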

  20. An Algebraic Specification of the Semantic Web

    OpenAIRE

    Ksystra, Katerina; Triantafyllou, Nikolaos; Stefaneas, Petros; Frangos, Panayiotis

    2011-01-01

    We present a formal specification of the Semantic Web, as an extension of the World Wide Web using the well known algebraic specification language CafeOBJ. Our approach allows the description of the key elements of the Semantic Web technologies, in order to give a better understanding of the system, without getting involved with their implementation details that might not yet be standardized. This specification is part of our work in progress concerning the modeling the Social Semantic Web.

  1. Semantic interpretation of search engine resultant

    Science.gov (United States)

    Nasution, M. K. M.

    2018-01-01

    In semantics, a logical language can be interpreted in various forms, but the certainty of meaning is embedded in uncertainty, which always directly influences the role of technology. One result of this uncertainty applies to search engines as user interfaces to information spaces such as the Web. The behaviour of search engine results should therefore be interpreted with certainty through semantic formulation as interpretation. The behaviour formulation shows that there are various interpretations that can be carried out semantically, whether temporary, inclusion, or repetition.

  2. Semantic Desktop

    Science.gov (United States)

    Sauermann, Leo; Kiesel, Malte; Schumacher, Kinga; Bernardi, Ansgar

    This contribution shows what the workplace of the future could look like and where the Semantic Web opens up new possibilities. To this end, approaches from the areas of the Semantic Web, knowledge representation, desktop applications, and visualization are presented that allow us to reinterpret and reuse a user's existing data. The combination of the Semantic Web and desktop computers brings particular advantages - a paradigm known under the title Semantic Desktop. The described possibilities for application integration are, however, not limited to the desktop, but can equally be used in web applications.

  3. The role of grammatical category information in spoken word retrieval.

    Science.gov (United States)

    Duràn, Carolina Palma; Pillon, Agnesa

    2011-01-01

    We investigated the role of lexical syntactic information such as grammatical gender and category in spoken word retrieval processes by using a blocking paradigm in picture and written word naming experiments. In Experiments 1, 3, and 4, we found that the naming of target words (nouns) from pictures or written words was faster when these target words were named within a list where only words from the same grammatical category had to be produced (homogeneous category list: all nouns) than when they had to be produced within a list comprising also words from another grammatical category (heterogeneous category list: nouns and verbs). On the other hand, we detected no significant facilitation effect when the target words had to be named within a homogeneous gender list (all masculine nouns) compared to a heterogeneous gender list (both masculine and feminine nouns). In Experiment 2, using the same blocking paradigm by manipulating the semantic category of the items, we found that naming latencies were significantly slower in the semantic category homogeneous in comparison with the semantic category heterogeneous condition. Thus semantic category homogeneity caused an interference, not a facilitation effect like grammatical category homogeneity. Finally, in Experiment 5, nouns in the heterogeneous category condition had to be named just after a verb (category-switching position) or a noun (same-category position). We found a facilitation effect of category homogeneity but no significant effect of position, which showed that the effect of category homogeneity found in Experiments 1, 3, and 4 was not due to a cost of switching between grammatical categories in the heterogeneous grammatical category list. These findings supported the hypothesis that grammatical category information impacts word retrieval processes in speech production, even when words are to be produced in isolation. They are discussed within the context of extant theories of lexical production.

  4. Language Development in Children with Language Disorders: An Introduction to Skinner's Verbal Behavior and the Techniques for Initial Language Acquisition

    Science.gov (United States)

    Casey, Laura Baylot; Bicard, David F.

    2009-01-01

    Language development in typically developing children has a very predictable pattern beginning with crying, cooing, babbling, and gestures along with the recognition of spoken words, comprehension of spoken words, and then one word utterances. This predictable pattern breaks down for children with language disorders. This article will discuss…

  5. Multiclausal Utterances Aren't Just for Big Kids: A Framework for Analysis of Complex Syntax Production in Spoken Language of Preschool- and Early School-Age Children

    Science.gov (United States)

    Arndt, Karen Barako; Schuele, C. Melanie

    2013-01-01

    Complex syntax production emerges shortly after the emergence of two-word combinations in oral language and continues to develop through the school-age years. This article defines a framework for the analysis of complex syntax in the spontaneous language of preschool- and early school-age children. The purpose of this article is to provide…

  6. Atypical right hemisphere specialization for object representations in an adolescent with specific language impairment

    Directory of Open Access Journals (Sweden)

    Timothy T. Brown

    2014-02-01

    Full Text Available Individuals with a diagnosis of specific language impairment (SLI show abnormal spoken language occurring alongside normal nonverbal abilities. Behaviorally, people with SLI exhibit diverse profiles of impairment involving phonological, grammatical, syntactic, and semantic aspects of language. In this study, we used a multimodal neuroimaging technique called anatomically constrained magnetoencephalography (aMEG to measure the dynamic functional brain organization of an adolescent with SLI. Using single-subject statistical maps of cortical activity, we compared this patient to a sibling and to a cohort of typically developing subjects during the performance of tasks designed to evoke semantic representations of concrete objects. Localized, real-time patterns of brain activity within the language impaired patient showed marked differences from the typical functional organization, with significant engagement of right hemisphere heteromodal cortical regions generally homotopic to the left hemisphere areas that usually show the greatest activity for such tasks. Functional neuroanatomical differences were evident at early sensoriperceptual processing stages and continued through later cognitive stages, observed specifically at latencies typically associated with semantic encoding operations. Our findings show with real-time temporal specificity evidence for an atypical right hemisphere specialization for the representation of concrete entities, independent of verbal motor demands. More broadly, our results demonstrate the feasibility and potential utility of using aMEG to characterize individual patient differences in the dynamic functional organization of the brain.

  7. Designing equivalent semantic models for process creation

    NARCIS (Netherlands)

    P.H.M. America (Pierre); J.W. de Bakker (Jaco)

    1986-01-01

    Operational and denotational semantic models are designed for languages with process creation, and the relationships between the two semantics are investigated. The presentation is organized in four sections dealing with a uniform and static, a uniform and dynamic, a nonuniform and

  8. Semantic Convergence in the Bilingual Lexicon

    Science.gov (United States)

    Ameel, Eef; Malt, Barbara C.; Storms, Gert; Van Assche, Fons

    2009-01-01

    Bilinguals' lexical mappings for their two languages have been found to converge toward a common naming pattern. The present paper investigates in more detail how semantic convergence is manifested in bilingual lexical knowledge. We examined how semantic convergence affects the centers and boundaries of lexical categories for common household…

  9. UML Semantics FAQ: Dynamic Behaviour and Concurrency

    NARCIS (Netherlands)

    Wieringa, Roelf J.; Demeyer, Serge; Astesiano, Egidio; Reggio, Gianna; Le Guennec, Alain; Hussman, Heinrich; van den Berg, Klaas; van den Broek, P.M.

    This paper reports the results of a workshop held at ECOOP'99. The workshop was set up to find answers to questions fundamental to the definition of a semantics for the Unified Modelling Language. Questions examined the meaning of the term semantics in the context of UML; approaches to defining the…

  10. Ontological semantics in modified categorial grammar

    DEFF Research Database (Denmark)

    Szymczak, Bartlomiej Antoni

    2009-01-01

    Categorial Grammar is a well established tool for describing natural language semantics. In the current paper we discuss some of its drawbacks and how it could be extended to overcome them. We use the extended version for deriving ontological semantics from text. A proof-of-concept implementation...

  11. Program verification using symbolic game semantics

    DEFF Research Database (Denmark)

    Dimovski, Aleksandar

    2014-01-01

    …especially on its second-order recursion-free fragment with infinite data types. We revisit the regular-language representation of game semantics of this language fragment. By using symbolic values instead of concrete ones, we generalize the standard notions of regular-language and automata representations…

  12. Behavioral and fMRI Evidence that Cognitive Ability Modulates the Effect of Semantic Context on Speech Intelligibility

    Science.gov (United States)

    Zekveld, Adriana A.; Rudner, Mary; Johnsrude, Ingrid S.; Heslenfeld, Dirk J.; Ronnberg, Jerker

    2012-01-01

    Text cues facilitate the perception of spoken sentences to which they are semantically related (Zekveld, Rudner, et al., 2011). In this study, semantically related and unrelated cues preceding sentences evoked more activation in middle temporal gyrus (MTG) and inferior frontal gyrus (IFG) than nonword cues, regardless of acoustic quality (speech…

  13. Word level language identification in online multilingual communication

    NARCIS (Netherlands)

    Nguyen, Dong-Phuong; Dogruoz, A. Seza

    2013-01-01

    Multilingual speakers switch between languages in online and spoken communication. Analyses of large scale multilingual data require automatic language identification at the word level. For our experiments with multilingual online discussions, we first tag the language of individual words using
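    The word-level tagging step described above can be sketched with a simple character-bigram model per language. The tiny word lists, the smoothing constant, and the language pair below are illustrative assumptions, not the authors' actual method or data:

```python
import math
from collections import Counter

def char_ngrams(word, n=2):
    """Character n-grams of a word, padded with boundary markers."""
    padded = "#" + word.lower() + "#"
    return [padded[i:i + n] for i in range(len(padded) - n + 1)]

def train(words):
    """Count character bigrams over a (toy) monolingual word list."""
    counts = Counter()
    for w in words:
        counts.update(char_ngrams(w))
    return counts, sum(counts.values())

def score(word, model, alpha=1.0):
    """Additively smoothed log-probability of a word under a bigram model."""
    counts, total = model
    vocab = len(counts) + 1
    return sum(math.log((counts[g] + alpha) / (total + alpha * vocab))
               for g in char_ngrams(word))

def tag(tokens, models):
    """Label each token with the language whose model scores it highest."""
    return [(t, max(models, key=lambda lang: score(t, models[lang])))
            for t in tokens]

# Illustrative training vocabularies; a real system trains on large corpora
# and typically adds context from neighbouring words.
models = {
    "en": train(["the", "and", "language", "speaker", "with", "online"]),
    "nl": train(["het", "en", "taal", "spreker", "met", "wanneer", "schrijven"]),
}
print(tag(["the", "taal", "speaker"], models))
```

    Per-word decisions like this are usually refined with sequence-level context, since code-switched words borrow spelling patterns from both languages.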

  14. Semantics and pragmatics.

    Science.gov (United States)

    McNally, Louise

    2013-05-01

    The fields of semantics and pragmatics are devoted to the study of conventionalized and context- or use-dependent aspects of natural language meaning, respectively. The complexity of human language as a semiotic system has led to considerable debate about how the semantics/pragmatics distinction should be drawn, if at all. This debate largely reflects contrasting views of meaning as a property of linguistic expressions versus something that speakers do. The fact that both views of meaning are essential to a complete understanding of language has led to a variety of efforts over the last 40 years to develop better integrated and more comprehensive theories of language use and interpretation. The most important advances have included the adaptation of propositional analyses of declarative sentences to interrogative, imperative and exclamative forms; the emergence of dynamic, game theoretic, and multi-dimensional theories of meaning; and the development of various techniques for incorporating context-dependent aspects of content into representations of context-invariant content with the goal of handling phenomena such as vagueness resolution, metaphor, and metonymy. WIREs Cogn Sci 2013, 4:285-297. doi: 10.1002/wcs.1227 For further resources related to this article, please visit the WIREs website. The authors declare no conflict of interest. Copyright © 2013 John Wiley & Sons, Ltd.

  15. Monitoring the Performance of Human and Automated Scores for Spoken Responses

    Science.gov (United States)

    Wang, Zhen; Zechner, Klaus; Sun, Yu

    2018-01-01

    As automated scoring systems for spoken responses are increasingly used in language assessments, testing organizations need to analyze their performance, as compared to human raters, across several dimensions, for example, on individual items or based on subgroups of test takers. In addition, there is a need in testing organizations to establish…

  16. Webster's word power better English grammar improve your written and spoken English

    CERN Document Server

    Kirkpatrick, Betty

    2014-01-01

    With question-and-answer sections throughout, this book helps you to improve your written and spoken English through understanding the structure of the English language. This is a thorough and useful book, with all parts of speech and grammar explained. Used by ELT self-study students.

  17. Between Syntax and Pragmatics: The Causal Conjunction Protože in Spoken and Written Czech

    Czech Academy of Sciences Publication Activity Database

    Čermáková, Anna; Komrsková, Zuzana; Kopřivová, Marie; Poukarová, Petra

    -, 25.04.2017 (2017), s. 393-414 ISSN 2509-9507 R&D Projects: GA ČR GA15-01116S Institutional support: RVO:68378092 Keywords : Causality * Discourse marker * Spoken language * Czech Subject RIV: AI - Linguistics OBOR OECD: Linguistics https://link.springer.com/content/pdf/10.1007%2Fs41701-017-0014-y.pdf

  18. School-aged children can benefit from audiovisual semantic congruency during memory encoding.

    Science.gov (United States)

    Heikkilä, Jenni; Tiippana, Kaisa

    2016-05-01

    Although we live in a multisensory world, children's memory has usually been studied by concentrating on only one sensory modality at a time. In this study, we investigated how audiovisual encoding affects recognition memory. Children (n = 114) from three age groups (8, 10 and 12 years) memorized auditory or visual stimuli presented with a semantically congruent, incongruent or non-semantic stimulus in the other modality during encoding. Subsequent recognition memory performance was better for auditory or visual stimuli initially presented together with a semantically congruent stimulus in the other modality than for stimuli accompanied by a non-semantic stimulus in the other modality. This congruency effect was observed for pictures presented with sounds, for sounds presented with pictures, for spoken words presented with pictures and for written words presented with spoken words. The present results show that semantically congruent multisensory experiences during encoding can improve memory performance in school-aged children.

  19. Affective Congruence between Sound and Meaning of Words Facilitates Semantic Decision.

    Science.gov (United States)

    Aryani, Arash; Jacobs, Arthur M

    2018-05-31

    A similarity between the form and meaning of a word (i.e., iconicity) may help language users to more readily access its meaning through direct form-meaning mapping. Previous work has supported this view by providing empirical evidence for this facilitatory effect in sign language, as well as for onomatopoetic words (e.g., cuckoo) and ideophones (e.g., zigzag). However, it remains largely unknown whether the beneficial role of iconicity in making semantic decisions can be considered a general feature of spoken language, applying also to "ordinary" words in the lexicon. By capitalizing on the affective domain, and in particular arousal, we organized words into two distinct groups, iconic vs. non-iconic, based on the congruence vs. incongruence of their lexical (meaning) and sublexical (sound) arousal. In a two-alternative forced choice task, we asked participants to evaluate the arousal of printed words that were lexically either high or low arousing. In line with our hypothesis, iconic words were evaluated more quickly and more accurately than their non-iconic counterparts. These results indicate a processing advantage for iconic words, suggesting that language users are sensitive to sound-meaning mappings even when words are presented visually and read silently.
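    The grouping criterion described above — congruence of lexical and sublexical arousal — can be illustrated with a toy classifier. The ratings and the midpoint split below are invented for illustration and are not the study's materials:

```python
# Hypothetical arousal ratings on a 1-5 scale: (lexical/meaning, sublexical/sound).
RATINGS = {
    "scream":  (4.6, 4.1),
    "whisper": (2.1, 1.8),
    "thunder": (4.2, 2.0),
    "pillow":  (1.7, 3.9),
}

def iconicity_group(lexical, sublexical, midpoint=3.0):
    """'iconic' when meaning and sound agree in arousal (both high or both
    low relative to the scale midpoint), 'non-iconic' otherwise."""
    return "iconic" if (lexical > midpoint) == (sublexical > midpoint) else "non-iconic"

for word, (lex, sub) in RATINGS.items():
    print(word, iconicity_group(lex, sub))
```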

  20. Visual Sonority Modulates Infants' Attraction to Sign Language

    Science.gov (United States)

    Stone, Adam; Petitto, Laura-Ann; Bosworth, Rain

    2018-01-01

    The infant brain may be predisposed to identify perceptually salient cues that are common to both signed and spoken languages. Recent theory based on spoken languages has advanced sonority as one of these potential language acquisition cues. Using a preferential looking paradigm with an infrared eye tracker, we explored visual attention of hearing…

  1. Bimodal Bilingual Language Development of Hearing Children of Deaf Parents

    Science.gov (United States)

    Hofmann, Kristin; Chilla, Solveig

    2015-01-01

    Adopting a bimodal bilingual language acquisition model, this qualitative case study is the first in Germany to investigate the spoken and sign language development of hearing children of deaf adults (codas). The spoken language competence of six codas within the age range of 3;10 to 6;4 is assessed by a series of standardised tests (SETK 3-5,…

  2. The language of human law in the thought of Francisco Suárez

    Directory of Open Access Journals (Sweden)

    Fernando Centenera Sánchez-Seco

    2018-05-01

    Full Text Available The subject of this article is the language of human law in the thought of Francisco Suárez. Its chief focus is on the Treatise on Laws and on God the Lawgiver and its views on the prescriptive nature of legislative language, written and spoken language, the lexical-semantic level, and linguistic clarity from the viewpoints of convenience, the essence of the law and justice. The issues Suárez deals with in relation to these points have continued to attract attention up to the present day, and a reading of the Treatise confirms the impression that some of them are still valid. Accordingly, as well as setting out, describing and offering a guide to understanding Suárez's ideas, the article offers a comparative and contemplative analysis of them, without forgetting that their author belonged to the early modern period.

  3. 125 The Fading Phase of Igbo Language and Culture: Path to its ...

    African Journals Online (AJOL)

    Tracie1

    favour of foreign language (and culture). They also … native language, and children are unable to learn a language not spoken … shielding them off their mother tongue … the effect endangered language has on the existence of the owners.

  4. Stability in Chinese and Malay heritage languages as a source of divergence

    NARCIS (Netherlands)

    Aalberse, S.; Moro, F.; Braunmüller, K.; Höder, S.; Kühl, K.

    2014-01-01

    This article discusses Malay and Chinese heritage languages as spoken in the Netherlands. Heritage speakers are dominant in another language and use their heritage language less. Moreover, they have qualitatively and quantitatively different input from monolinguals. Heritage languages are often

  5. Stability in Chinese and Malay heritage languages as a source of divergence

    NARCIS (Netherlands)

    Aalberse, S.; Moro, F.R.; Braunmüller, K.; Höder, S.; Kühl, K.

    2015-01-01

    This article discusses Malay and Chinese heritage languages as spoken in the Netherlands. Heritage speakers are dominant in another language and use their heritage language less. Moreover, they have qualitatively and quantitatively different input from monolinguals. Heritage languages are often

  6. Towards a Reactive Semantic Execution Environment

    Science.gov (United States)

    Komazec, Srdjan; Facca, Federico Michele

    Managing complex and distributed software systems built on top of the service-oriented paradigm has never been more challenging. While Semantic Web Service technologies offer a promising set of languages and tools as a foundation to resolve the heterogeneity and scalability issues, they are still failing to provide an autonomic execution environment. In this paper we present an approach based on Semantic Web Services to enable the monitoring and self-management of a Semantic Execution Environment (SEE), a brokerage system for Semantic Web Services. Our approach is founded on the event-triggered reactivity paradigm in order to facilitate environment control, thus contributing to its autonomicity, robustness and flexibility.

  7. Interference of spoken word recognition through phonological priming from visual objects and printed words.

    Science.gov (United States)

    McQueen, James M; Huettig, Falk

    2014-01-01

    Three cross-modal priming experiments examined the influence of preexposure to pictures and printed words on the speed of spoken word recognition. Targets for auditory lexical decision were spoken Dutch words and nonwords, presented in isolation (Experiments 1 and 2) or after a short phrase (Experiment 3). Auditory stimuli were preceded by primes, which were pictures (Experiments 1 and 3) or those pictures' printed names (Experiment 2). Prime-target pairs were phonologically onset related (e.g., pijl-pijn, arrow-pain), were from the same semantic category (e.g., pijl-zwaard, arrow-sword), or were unrelated on both dimensions. Phonological interference and semantic facilitation were observed in all experiments. Priming magnitude was similar for pictures and printed words and did not vary with picture viewing time or number of pictures in the display (either one or four). These effects arose even though participants were not explicitly instructed to name the pictures and even though strategic naming would interfere with lexical decision making. This suggests that, by default, processing of related pictures and printed words influences how quickly we recognize spoken words.

  8. Spoken Narrative Assessment: A Supplementary Measure of Children's Creativity

    Science.gov (United States)

    Wong, Miranda Kit-Yi; So, Wing Chee

    2016-01-01

    This study developed a spoken narrative (i.e., storytelling) assessment as a supplementary measure of children's creativity. Both spoken and gestural contents of children's spoken narratives were coded to assess their verbal and nonverbal creativity. The psychometric properties of the coding system for the spoken narrative assessment were…

  9. Language

    DEFF Research Database (Denmark)

    Sanden, Guro Refsum

    2016-01-01

    Purpose: – The purpose of this paper is to analyse the consequences of globalisation in the area of corporate communication, and investigate how language may be managed as a strategic resource. Design/methodology/approach: – A review of previous studies on the effects of globalisation on corporate… communication and the implications of language management initiatives in international business. Findings: – Efficient language management can turn language into a strategic resource. Language needs analyses, i.e. linguistic auditing/language check-ups, can be used to determine the language situation… of a company. Language policies and/or strategies can be used to regulate a company’s internal modes of communication. Language management tools can be deployed to address existing and expected language needs. Continuous feedback from the front line ensures strategic learning and reduces the risk of suboptimal…

  10. Basic speech recognition for spoken dialogues

    CSIR Research Space (South Africa)

    Van Heerden, C

    2009-09-01

    Full Text Available Spoken dialogue systems (SDSs) have great potential for information access in the developing world. However, the realisation of that potential requires the solution of several challenging problems, including the development of sufficiently accurate...

  11. The gender congruency effect during bilingual spoken-word recognition

    Science.gov (United States)

    Morales, Luis; Paolieri, Daniela; Dussias, Paola E.; Valdés Kroff, Jorge R.; Gerfen, Chip; Bajo, María Teresa

    2016-01-01

    We investigate the ‘gender-congruency’ effect during a spoken-word recognition task using the visual world paradigm. Eye movements of Italian–Spanish bilinguals and Spanish monolinguals were monitored while they viewed a pair of objects on a computer screen. Participants listened to instructions in Spanish (encuentra la bufanda / ‘find the scarf’) and clicked on the object named in the instruction. Grammatical gender of the objects’ name was manipulated so that pairs of objects had the same (congruent) or different (incongruent) gender in Italian, but gender in Spanish was always congruent. Results showed that bilinguals, but not monolinguals, looked at target objects less when they were incongruent in gender, suggesting a between-language gender competition effect. In addition, bilinguals looked at target objects more when the definite article in the spoken instructions provided a valid cue to anticipate its selection (different-gender condition). The temporal dynamics of gender processing and cross-language activation in bilinguals are discussed. PMID:28018132

  12. The Role of Simple Semantics in the Process of Artificial Grammar Learning.

    Science.gov (United States)

    Öttl, Birgit; Jäger, Gerhard; Kaup, Barbara

    2017-10-01

    This study investigated the effect of semantic information on artificial grammar learning (AGL). Recursive grammars of different complexity levels (regular language, mirror language, copy language) were investigated in a series of AGL experiments. In the with-semantics condition, participants acquired semantic information prior to the AGL experiment; in the without-semantics control condition, participants did not receive semantic information. It was hypothesized that semantics would generally facilitate grammar acquisition and that the learning benefit in the with-semantics conditions would increase with increasing grammar complexity. Experiment 1 showed learning effects for all grammars but no performance difference between conditions. Experiment 2 replicated the absence of a semantic benefit for all grammars even though semantic information was more prominent during grammar acquisition as compared to Experiment 1. Thus, we did not find evidence for the idea that semantics facilitates grammar acquisition, which seems to support the view of an independent syntactic processing component.
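    The three grammar classes named above differ formally in the memory a recognizer needs. A minimal sketch, using an invented two-letter alphabet, of membership tests for a toy regular language, a mirror language (a string followed by its reverse), and a copy language (a string followed by itself):

```python
def in_regular(s):
    """Toy regular language: (ab)^n, n >= 1 — recognizable by a finite automaton."""
    return len(s) > 0 and len(s) % 2 == 0 and s == "ab" * (len(s) // 2)

def in_mirror(s):
    """Mirror language {w + reverse(w)} — context-free, needs stack-like memory."""
    half = len(s) // 2
    return len(s) > 0 and len(s) % 2 == 0 and s[half:] == s[:half][::-1]

def in_copy(s):
    """Copy language {w + w} — not context-free; crossed dependencies."""
    half = len(s) // 2
    return len(s) > 0 and len(s) % 2 == 0 and s[half:] == s[:half]

print(in_regular("abab"), in_mirror("abba"), in_copy("abab"))
```

    Note that a single string can belong to more than one of these languages ("abab" is both regular here and a copy string), which is why AGL stimuli are constructed to keep the classes apart.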

  13. Assessment of lexical semantic judgment abilities in alcohol ...

    Indian Academy of Sciences (India)

    2013-11-06

    Nov 6, 2013 … Keywords: alcoholism; brain; fMRI; language processing; lexical; semantic judgment … (English for all subjects) and hours spent reading one/both languages … and alcoholism on verbal and visuospatial learning. J. Nerv.

  14. Semantic and syntactic reading comprehension strategies used by deaf children with early and late cochlear implantation.

    Science.gov (United States)

    Gallego, Carlos; Martín-Aragoneses, M Teresa; López-Higes, Ramón; Pisón, Guzmán

    2016-01-01

    Deaf students have traditionally exhibited reading comprehension difficulties. In recent years, these comprehension problems have been partially offset through cochlear implantation (CI), and the subsequent improvement in spoken language skills. However, the use of cochlear implants has not managed to fully bridge the gap in language and reading between normally hearing (NH) and deaf children, as its efficacy depends on variables such as the age at implant. This study compared the reading comprehension of sentences in 19 children who received a cochlear implant before 24 months of age (early-CI) and 19 who received it after 24 months (late-CI) with a control group of 19 NH children. The task involved completing sentences in which the last word had been omitted. To complete each sentence children had to choose a word from among several alternatives that included one syntactic and two semantic foils in addition to the target word. The results showed that deaf children with late-CI performed this task significantly worse than NH children, while those with early-CI exhibited no significant differences with NH children, except under more demanding processing conditions (long sentences with infrequent target words). Further, the error analysis revealed a preference of deaf students with early-CI for selecting the syntactic foil over a semantic one, which suggests that they draw upon syntactic cues during sentence processing in the same way as NH children do. In contrast, deaf children with late-CI do not appear to use a syntactic strategy, nor a semantic strategy based on the use of key words, as the literature suggests. Rather, the numerous errors of both kinds that the late-CI group made seem to indicate an inconsistent and erratic response when faced with a lack of comprehension. These findings are discussed in relation to differences in receptive vocabulary and short-term memory and their implications for sentence reading comprehension. Copyright © 2015

  15. Towards the multilingual semantic web principles, methods and applications

    CERN Document Server

    Buitelaar, Paul

    2014-01-01

    To date, the relation between multilingualism and the Semantic Web has not yet received enough attention in the research community. One major challenge for the Semantic Web community is to develop architectures, frameworks and systems that can help in overcoming national and language barriers, facilitating equal access to information produced in different cultures and languages. As such, this volume aims at documenting the state of the art with regard to the vision of a Multilingual Semantic Web, in which semantic information will be accessible in and across multiple languages. The Multiling…

  16. Generative Semantics

    Science.gov (United States)

    Bagha, Karim Nazari

    2011-01-01

    Generative semantics is (or perhaps was) a research program within linguistics, initiated by the work of George Lakoff, John R. Ross, Paul Postal and later McCawley. The approach developed out of transformational generative grammar in the mid 1960s, but stood largely in opposition to work by Noam Chomsky and his students. The nature and genesis of…

  17. Inferentializing Semantics

    Czech Academy of Sciences Publication Activity Database

    Peregrin, Jaroslav

    2010-01-01

    Roč. 39, č. 3 (2010), s. 255-274 ISSN 0022-3611 R&D Projects: GA ČR(CZ) GA401/07/0904 Institutional research plan: CEZ:AV0Z90090514 Keywords : inference * proof theory * model theory * inferentialism * semantics Subject RIV: AA - Philosophy ; Religion

  18. Guest Comment: Universal Language Requirement.

    Science.gov (United States)

    Sherwood, Bruce Arne

    1979-01-01

    Explains that reading English among scientists is almost universal; however, there are enormous problems with spoken English. Advocates the use of Esperanto as a viable alternative, and as a language requirement for graduate work. (GA)

  19. Semantics, contrastive linguistics and parallel corpora

    Directory of Open Access Journals (Sweden)

    Violetta Koseska

    2014-09-01

    Full Text Available In view of the ambiguity of the term “semantics”, the author shows the differences between traditional lexical semantics and contemporary semantics in the light of various semantic schools. She examines semantics differently in connection with contrastive studies, where the description must necessarily go from the meaning towards the linguistic form, whereas in traditional contrastive studies the description proceeded from the form towards the meaning. This requirement regarding theoretical contrastive studies necessitates the construction of a semantic interlanguage, rather than merely singling out universal semantic categories expressed with various language means. Such studies can be strongly supported by parallel corpora. However, in order to make them useful for linguists in manual and computer translations, as well as in the development of dictionaries, including online ones, we need not only formal, often automatic, annotation of texts, but also semantic annotation, which is unfortunately manual. In the article we focus on semantic annotation concerning time, aspect and quantification of names and predicates in the whole semantic structure of the sentence, using the example of the “Polish-Bulgarian-Russian parallel corpus”.
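    A manually produced semantic annotation of the kind described — covering time, aspect, and quantification for an aligned sentence pair — might be represented as follows. The field names and the example sentences are illustrative, not the corpus's actual annotation schema:

```python
import json

# One aligned Polish-Bulgarian sentence pair with a shared semantic layer
# (hypothetical schema; the real corpus format may differ).
record = {
    "id": "pl-bg-0001",
    "pl": "Jan przeczytał książkę.",
    "bg": "Ян прочете книгата.",
    "semantics": {
        "predicate": "read",
        "time": "past",
        "aspect": "perfective",  # completed event, same value in both languages
        "quantification": {"agent": "unique", "object": "unique"},
    },
}
print(json.dumps(record, ensure_ascii=False, indent=2))
```

    Keeping the semantic layer language-neutral is what makes such a record usable as an interlanguage: the same time/aspect/quantification values describe both sentences.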

  20. Modality-specific processing precedes amodal linguistic processing during L2 sign language acquisition: A longitudinal study.

    Science.gov (United States)

    Williams, Joshua T; Darcy, Isabelle; Newman, Sharlene D

    2016-02-01

    The present study tracked activation pattern differences in response to sign language processing by late hearing second language learners of American Sign Language. Learners were scanned before the start of their language courses. They were scanned again after their first semester of instruction and their second, for a total of 10 months of instruction. The study aimed to characterize modality-specific to modality-general processing throughout the acquisition of sign language. Results indicated that before the acquisition of sign language, neural substrates related to modality-specific processing were present. After approximately 45 h of instruction, the learners transitioned into processing signs on a phonological basis (e.g., supramarginal gyrus, putamen). After one more semester of input, learners transitioned once more to a lexico-semantic processing stage (e.g., left inferior frontal gyrus) at which language control mechanisms (e.g., left caudate, cingulate gyrus) were activated. During these transitional steps right hemispheric recruitment was observed, with increasing left-lateralization, which is similar to other native signers and L2 learners of spoken language; however, specialization for sign language processing with activation in the inferior parietal lobule (i.e., angular gyrus), even for late learners, was observed. As such, the present study is the first to track L2 acquisition of sign language learners in order to characterize modality-independent and modality-specific mechanisms for bilingual language processing. Copyright © 2015 Elsevier Ltd. All rights reserved.

  1. A Framework for Automatic Web Service Discovery Based on Semantics and NLP Techniques

    Directory of Open Access Journals (Sweden)

    Asma Adala

    2011-01-01

    Full Text Available As a greater number of Web Services are made available today, automatic discovery is recognized as an important task. To promote the automation of service discovery, different semantic languages have been created that allow describing the functionality of services in a machine interpretable form using Semantic Web technologies. The problem is that users do not have intimate knowledge about semantic Web service languages and related toolkits. In this paper, we propose a discovery framework that enables semantic Web service discovery based on keywords written in natural language. We describe a novel approach for automatic discovery of semantic Web services which employs Natural Language Processing techniques to match a user request, expressed in natural language, with a semantic Web service description. Additionally, we present an efficient semantic matching technique to compute the semantic distance between ontological concepts.
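    One common way to "compute the semantic distance between ontological concepts", as mentioned above, is path-based similarity over the is-a hierarchy. The tiny service taxonomy below is invented, and Wu-Palmer similarity stands in for whatever measure the framework actually uses:

```python
# Hypothetical is-a taxonomy: child concept -> parent concept.
PARENT = {
    "car_rental": "rental_service",
    "bike_rental": "rental_service",
    "rental_service": "service",
    "hotel_booking": "booking_service",
    "booking_service": "service",
}

def path_to_root(concept):
    """Chain of concepts from `concept` up to the taxonomy root."""
    path = [concept]
    while path[-1] in PARENT:
        path.append(PARENT[path[-1]])
    return path

def wu_palmer(a, b):
    """Wu-Palmer similarity: 2*depth(lcs) / (depth(a) + depth(b)), where
    depth counts nodes from the root and lcs is the least common subsumer."""
    ancestors_a = set(path_to_root(a))
    lcs = next((c for c in path_to_root(b) if c in ancestors_a), None)
    if lcs is None:
        return 0.0
    depth = lambda c: len(path_to_root(c))
    return 2 * depth(lcs) / (depth(a) + depth(b))

print(wu_palmer("car_rental", "bike_rental"))   # siblings: relatively similar
print(wu_palmer("car_rental", "hotel_booking")) # related only via the root
```

    A discovery framework would rank candidate services by such scores between the concepts extracted from the user's natural-language request and those in each service description.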

  2. Effects of Auditory and Visual Priming on the Identification of Spoken Words.

    Science.gov (United States)

    Shigeno, Sumi

    2017-04-01

    This study examined the effects of preceding contextual stimuli, either auditory or visual, on the identification of spoken target words. Fifty-one participants (29% males, 71% females; mean age = 24.5 years, SD = 8.5) were divided into three groups: no context, auditory context, and visual context. All target stimuli were spoken words masked with white noise. The relationships between the context and target stimuli were as follows: identical word, similar word, and unrelated word. Participants presented with context experienced a sequence of six context stimuli in the form of either spoken words or photographs. Auditory and visual context conditions produced similar results, but the auditory context aided word identification more than the visual context in the similar word relationship. We discuss these results in the light of top-down processing, motor theory, and the phonological system of language.

  3. Explaining Semantic Short-Term Memory Deficits: Evidence for the Critical Role of Semantic Control

    Science.gov (United States)

    Hoffman, Paul; Jefferies, Elizabeth; Lambon Ralph, Matthew A.

    2011-01-01

    Patients with apparently selective short-term memory (STM) deficits for semantic information have played an important role in developing multi-store theories of STM and challenge the idea that verbal STM is supported by maintaining activation in the language system. We propose that semantic STM deficits are not as selective as previously thought…

  4. Semantic Blogging : Spreading the Semantic Web Meme

    OpenAIRE

    Cayzer, Steve

    2004-01-01

    This paper is about semantic blogging, an application of the semantic web to blogging. The semantic web promises to make the web more useful by endowing metadata with machine processable semantics. Blogging is a lightweight web publishing paradigm which provides a very low barrier to entry, useful syndication and aggregation behaviour, a simple to understand structure and decentralized construction of a rich information network. Semantic blogging builds upon the success and clear network valu...

  5. The Fluctuating Development of Cross-Linguistic Semantic Awareness: A Longitudinal Multiple-Case Study

    Science.gov (United States)

    Zheng, Yongyan

    2014-01-01

    Second language (L2) learners' awareness of first language-second language (L1-L2) semantic differences plays a critical role in L2 vocabulary learning. This study investigates the long-term development of eight university-level Chinese English as a foreign language learners' cross-linguistic semantic awareness over the course of 10 months. A…

  6. Use of spoken and written Japanese did not protect Japanese-American men from cognitive decline in late life.

    Science.gov (United States)

    Crane, Paul K; Gruhl, Jonathan C; Erosheva, Elena A; Gibbons, Laura E; McCurry, Susan M; Rhoads, Kristoffer; Nguyen, Viet; Arani, Keerthi; Masaki, Kamal; White, Lon

    2010-11-01

    Spoken bilingualism may be associated with cognitive reserve. Mastering a complicated written language may be associated with additional reserve. We sought to determine if midlife use of spoken and written Japanese was associated with lower rates of late life cognitive decline. Participants were second-generation Japanese-American men from the Hawaiian island of Oahu, born 1900-1919, free of dementia in 1991, and categorized based on midlife self-reported use of spoken and written Japanese (total n included in primary analysis = 2,520). Cognitive functioning was measured with the Cognitive Abilities Screening Instrument scored using item response theory. We used mixed effects models, controlling for age, income, education, smoking status, apolipoprotein E e4 alleles, and number of study visits. Rates of cognitive decline were not related to use of spoken or written Japanese. This finding was consistent across numerous sensitivity analyses. We did not find evidence to support the hypothesis that multilingualism is associated with cognitive reserve.

  7. Interference of spoken language in children's writing: cancellation processes of the dental occlusive /d/ and the final vibrant /r/

    Directory of Open Access Journals (Sweden)

    Socorro Cláudia Tavares de Sousa

    2009-01-01

    Full Text Available The present study investigates the influence of spoken language on children's writing in relation to the phenomena of cancellation of the dental /d/ and the final vibrant /r/. We designed and administered a research instrument to primary-school children in Fortaleza and analysed the resulting data with the SPSS software. The results showed that male sex and polysyllabic words are factors that partially influence the realization of the dependent variable /no/, and that verbs and level of schooling are conditioning factors for the cancellation of the final vibrant /r/.

  8. From Etymology to Pragmatics: Metaphorical and Cultural Aspects of Semantic Structure by Eve Sweetser.

    Directory of Open Access Journals (Sweden)

    Robson de Souza Bittencourt

    2008-04-01

    Full Text Available One might ask whether there is any other way to define cognitive semantics than by its opposition to truth-conditional semantics, or any variant of it for that matter. Indeed, objectivistic philosophy has provided the background to cognitive semantics, which in turn has raised serious questions about its reliability to deal with natural language.

  9. Listening to accented speech in a second language: First language and age of acquisition effects.

    Science.gov (United States)

    Larraza, Saioa; Samuel, Arthur G; Oñederra, Miren Lourdes

    2016-11-01

    Bilingual speakers must acquire the phonemic inventory of two languages and need to recognize spoken words cross-linguistically, a demanding job made potentially even more difficult by dialectal variation, an intrinsic property of speech. The present work examines how bilinguals perceive second language (L2) accented speech and where accommodation to dialectal variation takes place. Dialectal effects were analyzed at different levels: an AXB discrimination task tapped phonetic-phonological representations, an auditory lexical-decision task tested for effects in accessing the lexicon, and an auditory priming task looked for semantic processing effects. Within that central focus, the goal was to see whether perceptual adjustment at a given level is affected by two main linguistic factors: bilinguals' first language and age of acquisition of the L2. Taking advantage of the cross-linguistic situation of the Basque language, bilinguals with different first languages (Spanish or French) and ages of acquisition of Basque (simultaneous, early, or late) were tested. Our use of multiple tasks with multiple types of bilinguals demonstrates that, in spite of very similar discrimination capacity, the performance of French-Basque versus Spanish-Basque simultaneous bilinguals on lexical access differed significantly. Similarly, results of the early and late groups show that the mapping of phonetic-phonological information onto lexical representations is a more demanding process that accentuates non-native processing difficulties. L1 and AoA effects were more readily overcome in semantic processing; accented variants regularly created priming effects in the different groups of bilinguals. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  10. Implications of Hegel's Theories of Language on Second Language Teaching

    Science.gov (United States)

    Wu, Manfred

    2016-01-01

    This article explores the implications of Hegel's theories of language on second language (L2) teaching. Three among the various concepts in Hegel's theories of language are selected. They are the crucial role of intersubjectivity; the primacy of the spoken over the written form; and the importance of the training of form or grammar. Applying…

  11. Inuit Sign Language: a contribution to sign language typology

    NARCIS (Netherlands)

    Schuit, J.; Baker, A.; Pfau, R.

    2011-01-01

    Sign language typology is a fairly new research field and typological classifications have yet to be established. For spoken languages, these classifications are generally based on typological parameters; it would thus be desirable to establish these for sign languages. In this paper, different

  12. Semantic Involvement of Initial and Final Lexical Embeddings during Sense-Making: The Advantage of Starting Late.

    Science.gov (United States)

    van Alphen, Petra M; van Berkum, Jos J A

    2012-01-01

    During spoken language interpretation, listeners rapidly relate the meaning of each individual word to what has been said before. However, spoken words often contain spurious other words, like day in daisy, or dean in sardine. Do listeners also relate the meaning of such unintended, spurious words to the prior context? We used ERPs to look for transient meaning-based N400 effects in sentences that were completely plausible at the level of words intended by the speaker, but contained an embedded word whose meaning clashed with the context. Although carrier words with an initial embedding (day in daisy) did not elicit an embedding-related N400 effect relative to matched control words without embedding, carrier words with a final embedding (dean in sardine) did elicit such an effect. Together with prior work from our lab and the results of a Shortlist B simulation, our findings suggest that listeners do semantically interpret embedded words, albeit not under all conditions. We explain the latter by assuming that the sense-making system adjusts its hypothesis for how to interpret the external input at every new syllable, in line with recent ideas of active sampling in perception.
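The initial/final embedding manipulation can be made concrete with a small sketch. The rough phonemic transcriptions below are assumptions for illustration, not the study's stimulus set:

```python
# Detect "spurious" words embedded at the start or end of a carrier word,
# comparing phoneme sequences rather than spelling ("dean" is in "sardine"
# phonemically, not orthographically). Transcriptions are rough
# ARPAbet-style guesses assumed for this sketch.
PHON = {
    "day":     ("D", "EY"),
    "daisy":   ("D", "EY", "Z", "IY"),
    "dean":    ("D", "IY", "N"),
    "sardine": ("S", "AA", "R", "D", "IY", "N"),
}

def embedded(short, carrier):
    """Return 'initial', 'final', or None for the embedding position."""
    s, c = PHON[short], PHON[carrier]
    if c[:len(s)] == s:
        return "initial"
    if c[-len(s):] == s:
        return "final"
    return None

print(embedded("day", "daisy"))     # prints 'initial'
print(embedded("dean", "sardine"))  # prints 'final'
```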

  13. Semantic involvement of initial and final lexical embeddings during sense-making: The advantage of starting late

    Directory of Open Access Journals (Sweden)

    Petra M. Van Alphen

    2012-06-01

    Full Text Available During spoken language interpretation, listeners rapidly relate the meaning of each individual word to what has been said before. However, spoken words often contain spurious other words, like day in daisy, or dean in sardine. Do listeners also relate the meaning of such unintended, spurious words to the prior context? We used ERPs to look for transient meaning-based N400 effects in sentences that were completely plausible at the level of words intended by the speaker, but contained an embedded word whose meaning clashed with the context. Although carrier words with an initial embedding (day in daisy) did not elicit an embedding-related N400 effect relative to matched control words without embedding, carrier words with a final embedding (dean in sardine) did elicit such an effect. Together with prior work from our lab and the results of a Shortlist B simulation, our findings suggest that listeners do semantically interpret embedded words, albeit not under all conditions. We explain the latter by assuming that the sense-making system adjusts its hypothesis for how to interpret the external input at every new syllable, in line with recent ideas of active sampling in perception.

  14. Approaches for Language Identification in Mismatched Environments

    Science.gov (United States)

    2016-09-08

    Keywords: domain adaptation, unsupervised learning, deep neural networks, bottleneck features. Spoken language identification (LID) is...the process of identifying the language in a spoken speech utterance. In recent years, great improvements in LID system performance have been seen...be the case in practice. Lastly, we conduct an out-of-set experiment where VoA data from 9 other languages (Amharic, Creole, Croatian, English

  15. Semantic knowledge representation for information retrieval

    CERN Document Server

    Gödert, Winfried; Nagelschmidt, Matthias

    2014-01-01

    This book covers the basics of semantic web technologies and indexing languages, and describes their contribution to improve languages as a tool for subject queries and knowledge exploration. The book is relevant to information scientists, knowledge workers and indexers. It provides a suitable combination of theoretical foundations and practical applications.

  16. COMPARATIVE ANALYSIS OF EXISTING INTENSIVE METHODS OF TEACHING FOREIGN LANGUAGES

    Directory of Open Access Journals (Sweden)

    Maria Mytnyk

    2016-12-01

    Full Text Available The article presents a comparative analysis of existing intensive methods of teaching foreign languages, carried out to identify their positive and negative aspects. The author traces the idea of rational organization and intensification of foreign language teaching from its inception to its consolidation into an integrated system. The advantages and disadvantages of the most popular intensive methods of different historical periods are analyzed, namely the suggestopedic method of G. Lozanov, the method of activating the reserve capacities of learners of G. Kitaygorodskaya, the emotional-semantic method of I. Schechter, the intensive foreign language course of L. Gegechkori, the suggesto-cybernetic integral method of accelerated foreign language learning of V. Petrusinsky, and the crash course in spoken language through immersion of A. Plesnevich. The principles of learning and the role of each method in the development of intensive foreign language teaching are also analyzed. The author identifies a number of advantages of intensive methods: (1) assimilation of a large number of linguistic, lexical and grammatical units; (2) active use of the acquired knowledge, skills and abilities in oral communication in the foreign language; (3) the ability to use the acquired language material not only in one's own speech but also in understanding an interlocutor; (4) overcoming psychological barriers, including the fear of making a mistake; (5) high efficiency and fast learning. Among the disadvantages are: (6) too much new language material presented at once; (7) training of oral forms of communication only; (8) neglect of grammatical units and models.

  17. Audiovisual semantic congruency during encoding enhances memory performance.

    Science.gov (United States)

    Heikkilä, Jenni; Alho, Kimmo; Hyvönen, Heidi; Tiippana, Kaisa

    2015-01-01

    Studies of memory and learning have usually focused on a single sensory modality, although human perception is multisensory in nature. In the present study, we investigated the effects of audiovisual encoding on later unisensory recognition memory performance. The participants were to memorize auditory or visual stimuli (sounds, pictures, spoken words, or written words), each of which co-occurred with either a semantically congruent stimulus, incongruent stimulus, or a neutral (non-semantic noise) stimulus in the other modality during encoding. Subsequent memory performance was overall better when the stimulus to be memorized was initially accompanied by a semantically congruent stimulus in the other modality than when it was accompanied by a neutral stimulus. These results suggest that semantically congruent multisensory experiences enhance encoding of both nonverbal and verbal materials, resulting in an improvement in their later recognition memory.

  18. Spanish as a Second Language when L1 Is Quechua: Endangered Languages and the SLA Researcher

    Science.gov (United States)

    Kalt, Susan E.

    2012-01-01

    Spanish is one of the most widely spoken languages in the world. Quechua is the largest indigenous language family to constitute the first language (L1) of second language (L2) Spanish speakers. Despite sheer number of speakers and typologically interesting contrasts, Quechua-Spanish second language acquisition is a nearly untapped research area,…

  19. Syllable Frequency and Spoken Word Recognition: An Inhibitory Effect.

    Science.gov (United States)

    González-Alvarez, Julio; Palomar-García, María-Angeles

    2016-08-01

    Research has shown that syllables play a relevant role in lexical access in Spanish, a shallow language with a transparent syllabic structure. Syllable frequency has been shown to have an inhibitory effect on visual word recognition in Spanish. However, no study has examined the syllable frequency effect on spoken word recognition. The present study tested the effect of the frequency of the first syllable on recognition of spoken Spanish words. A sample of 45 young adults (33 women, 12 men; M = 20.4, SD = 2.8; college students) performed an auditory lexical decision on 128 Spanish disyllabic words and 128 disyllabic nonwords. Words were selected so that lexical and first syllable frequency were manipulated in a within-subject 2 × 2 design, and six additional independent variables were controlled: token positional frequency of the second syllable, number of phonemes, position of lexical stress, number of phonological neighbors, number of phonological neighbors that have higher frequencies than the word, and acoustical durations measured in milliseconds. Decision latencies and error rates were submitted to linear mixed models analysis. Results showed a typical facilitatory effect of the lexical frequency and, importantly, an inhibitory effect of the first syllable frequency on reaction times and error rates. © The Author(s) 2016.

  20. Using music to study the evolution of cognitive mechanisms relevant to language.

    Science.gov (United States)

    Patel, Aniruddh D

    2017-02-01

    This article argues that music can be used in cross-species research to study the evolution of cognitive mechanisms relevant to spoken language. This is because music and language share certain cognitive processing mechanisms and because music offers specific advantages for cross-species research. Music has relatively simple building blocks (tones without semantic properties), yet these building blocks are combined into rich hierarchical structures that engage complex cognitive processing. I illustrate this point with regard to the processing of musical harmonic structure. Because the processing of musical harmonic structure has been shown to interact with linguistic syntactic processing in humans, it is of interest to know if other species can acquire implicit knowledge of harmonic structure through extended exposure to music during development (vs. through explicit training). I suggest that domestic dogs would be a good species to study in addressing this question.

  1. Compiling Dictionaries Using Semantic Domains*

    Directory of Open Access Journals (Sweden)

    Ronald Moe

    2011-10-01

    Full Text Available

    Abstract: The task of providing dictionaries for all the world's languages is prodigious, requiring efficient techniques. The text corpus method cannot be used for minority languages lacking texts. To meet the need, the author has constructed a list of 1 600 semantic domains, which he has successfully used to collect words. In a workshop setting, a group of speakers can collect as many as 17 000 words in ten days. This method results in a classified word list that can be efficiently expanded into a full dictionary. The method works because the mental lexicon is a giant web organized around key concepts. A semantic domain can be defined as an important concept together with the words directly related to it by lexical relations. A person can utilize the mental web to quickly jump from word to word within a domain. The author is developing a template for each domain to aid in collecting words and in describing their semantics. Investigating semantics within the context of a domain yields many insights. The method permits the production of both alphabetically and semantically organized dictionaries. The list of domains is intended to be universal in scope and applicability. Perhaps due to universals of human experience and universals of linguistic competence, there are striking similarities in various lists of semantic domains developed for languages around the world. Using a standardized list of domains to classify multiple dictionaries opens up possibilities for cross-linguistic research into semantic and lexical universals.
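A classified word list of the kind Moe describes is easy to sketch as a data structure; the domain names and entries below are invented examples, not items from his 1 600-domain list:

```python
# A classified word list: words collected under semantic domains, emitted
# either semantically (domain by domain) or alphabetically (word -> domain).
# Domain labels and entries are invented for this sketch.
domains = {
    "2.1 Body": ["arm", "head", "knee"],
    "6.2 Agriculture": ["hoe", "plant", "harvest"],
    "7.2 Move": ["walk", "run", "crawl"],
}

# Semantically organized view: iterate domain by domain.
semantic_view = [(d, sorted(words)) for d, words in sorted(domains.items())]

# Alphabetically organized view: invert the mapping to word -> domain.
alphabetic_view = sorted(
    (word, d) for d, words in domains.items() for word in words
)

print(alphabetic_view[0])  # prints ('arm', '2.1 Body')
```

The same underlying data thus yields both of the dictionary organizations the abstract mentions.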

    Keywords: SEMANTIC DOMAINS, SEMANTIC FIELDS, SEMANTIC CATEGORIES, LEXICAL RELATIONS, SEMANTIC PRIMITIVES, DOMAIN TEMPLATES, MENTAL LEXICON, SEMANTIC UNIVERSALS, MINORITY LANGUAGES, LEXICOGRAPHY

    Abstract: Compiling dictionaries using semantic domains. The task of providing dictionaries for all the languages of the world is enormous and requires efficient techniques. The

  2. Ontology Matching with Semantic Verification.

    Science.gov (United States)

    Jean-Mary, Yves R; Shironoshita, E Patrick; Kabuka, Mansur R

    2009-09-01

    ASMOV (Automated Semantic Matching of Ontologies with Verification) is a novel algorithm that uses lexical and structural characteristics of two ontologies to iteratively calculate a similarity measure between them, derives an alignment, and then verifies it to ensure that it does not contain semantic inconsistencies. In this paper, we describe the ASMOV algorithm, and then present experimental results that measure its accuracy using the OAEI 2008 tests, and that evaluate its use with two different thesauri: WordNet, and the Unified Medical Language System (UMLS). These results show the increased accuracy obtained by combining lexical, structural and extensional matchers with semantic verification, and demonstrate the advantage of using a domain-specific thesaurus for the alignment of specialized ontologies.
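The iterative flavor of ASMOV can be suggested with a toy sketch (not the published algorithm): start from lexical similarity between concept labels and repeatedly blend in structural similarity from each concept's parent. The tiny ontologies and the 0.5 mixing weight are invented for illustration:

```python
# Toy iterative ontology matching: similarity = lexical score blended with
# the similarity of the concepts' parents, refined over a fixed number of
# iterations, then used to derive an alignment. Not the actual ASMOV
# algorithm; ontologies and weights are invented.
from difflib import SequenceMatcher

# Each ontology: concept -> parent (None for the root).
ont1 = {"thing": None, "vehicle": "thing", "automobile": "vehicle"}
ont2 = {"entity": None, "conveyance": "entity", "car": "conveyance"}

def lexical(a, b):
    """String similarity of two labels, in [0, 1]."""
    return SequenceMatcher(None, a, b).ratio()

# Initialize with lexical similarity alone.
sim = {(a, b): lexical(a, b) for a in ont1 for b in ont2}

for _ in range(10):  # fixed iteration cap instead of a convergence test
    new = {}
    for a in ont1:
        for b in ont2:
            pa, pb = ont1[a], ont2[b]
            structural = sim[(pa, pb)] if pa and pb else 0.0
            new[(a, b)] = 0.5 * lexical(a, b) + 0.5 * structural
    sim = new

# Derive an alignment: best match in ont2 for each concept in ont1.
alignment = {a: max(ont2, key=lambda b: sim[(a, b)]) for a in ont1}
print(alignment)
```

A real matcher would also verify the alignment for semantic inconsistencies, which this sketch omits.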

  3. Relationship Structures and Semantic Type Assignments of the UMLS Enriched Semantic Network

    Science.gov (United States)

    Zhang, Li; Halper, Michael; Perl, Yehoshua; Geller, James; Cimino, James J.

    2005-01-01

    Objective: The Enriched Semantic Network (ESN) was introduced as an extension of the Unified Medical Language System (UMLS) Semantic Network (SN). Its multiple subsumption configuration and concomitant multiple inheritance make the ESN's relationship structures and semantic type assignments different from those of the SN. A technique for deriving the relationship structures of the ESN's semantic types and an automated technique for deriving the ESN's semantic type assignments from those of the SN are presented. Design: The technique to derive the ESN's relationship structures finds all newly inherited relationships in the ESN. All such relationships are audited for semantic validity, and the blocking mechanism is used to block invalid relationships. The mapping technique to derive the ESN's semantic type assignments uses current SN semantic type assignments and preserves nonredundant categorizations, while preventing new redundant categorizations. Results: Among the 426 newly inherited relationships, 326 are deemed valid. Seven blockings are applied to avoid inheritance of the 100 invalid relationships. Sixteen semantic types have different relationship structures in the ESN as compared to those in the SN. The mapping of semantic type assignments from the SN to the ESN avoids the generation of 26,950 redundant categorizations. The resulting ESN contains 138 semantic types, 149 IS-A links, 7,303 relationships, and 1,013,876 semantic type assignments. Conclusion: The ESN's multiple inheritance provides more complete relationship structures than in the SN. The ESN's semantic type assignments avoid the existing redundant categorizations appearing in the SN and prevent new ones that might arise due to multiple parents. Compared to the SN, the ESN provides a more accurate unifying semantic abstraction of the UMLS Metathesaurus. PMID:16049233
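The inheritance-with-blocking mechanism can be sketched with invented semantic types: under multiple parents, a type inherits every parental relationship unless an audited blocking entry marks it invalid:

```python
# Relationship inheritance in a semantic network with multiple parents:
# a type inherits all relationships of all its parents, except those listed
# in an audited blocking set. Types, relations, and blocks are invented
# examples, not the actual ESN content.
parents = {
    "Event": [],
    "Entity": [],
    "Activity": ["Event"],
    "Machine Activity": ["Activity", "Entity"],  # multiple inheritance
}
# Locally declared relationships: type -> {(relation, target), ...}
declared = {
    "Event": {("follows", "Event")},
    "Entity": {("location_of", "Event")},
}
# Inherited relationships audited as semantically invalid and blocked.
blocked = {("Machine Activity", "location_of", "Event")}

def relationships(t):
    """Declared plus (unblocked) inherited relationships of type t."""
    rels = set(declared.get(t, set()))
    for p in parents[t]:
        for rel, target in relationships(p):
            if (t, rel, target) not in blocked:
                rels.add((rel, target))
    return rels

print(sorted(relationships("Machine Activity")))
```

Here "Machine Activity" inherits "follows" through "Activity", while the block suppresses the invalid "location_of" it would otherwise inherit through "Entity".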

  4. Word-embeddings Italian semantic spaces: A semantic model for psycholinguistic research

    Directory of Open Access Journals (Sweden)

    Marelli Marco

    2017-01-01

    Full Text Available Distributional semantics has been for long a source of successful models in psycholinguistics, permitting to obtain semantic estimates for a large number of words in an automatic and fast way. However, resources in this respect remain scarce or limitedly accessible for languages different from English. The present paper describes WEISS (Word-Embeddings Italian Semantic Space, a distributional semantic model based on Italian. WEISS includes models of semantic representations that are trained adopting state-of-the-art word-embeddings methods, applying neural networks to induce distributed representations for lexical meanings. The resource is evaluated against two test sets, demonstrating that WEISS obtains a better performance with respect to a baseline encoding word associations. Moreover, an extensive qualitative analysis of the WEISS output provides examples of the model potentialities in capturing several semantic phenomena. Two variants of WEISS are released and made easily accessible via web through the SNAUT graphic interface.
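The kind of semantic estimate a space like WEISS provides can be illustrated with toy vectors: relatedness is the cosine between word vectors. The 4-dimensional Italian examples below are invented; real models use hundreds of corpus-trained dimensions:

```python
# Cosine similarity over a toy distributional space: related words (here
# "cane"/"gatto", dog/cat) should score higher than unrelated ones
# ("cane"/"pane", dog/bread). Vectors are hand-made for illustration only.
import math

vectors = {
    "cane":  [0.9, 0.1, 0.3, 0.0],   # "dog"
    "gatto": [0.8, 0.2, 0.4, 0.1],   # "cat"
    "pane":  [0.1, 0.9, 0.0, 0.7],   # "bread"
}

def cosine(u, v):
    """Cosine of the angle between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

sim_animal = cosine(vectors["cane"], vectors["gatto"])
sim_cross = cosine(vectors["cane"], vectors["pane"])
print(round(sim_animal, 3), round(sim_cross, 3))
```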

  5. Structural borrowing: The case of Kenyan Sign Language (KSL) and ...

    African Journals Online (AJOL)

    Kenyan Sign Language (KSL) is a visual gestural language used by members of the deaf community in Kenya. Kiswahili on the other hand is a Bantu language that is used as the national language of Kenya. The two are world's apart, one being a spoken language and the other a signed language and thus their “… basic ...

  6. Different Loci of Semantic Interference in Picture Naming vs. Word-Picture Matching Tasks

    OpenAIRE

    Harvey, Denise Y.; Schnur, Tatiana T.

    2016-01-01

    Naming pictures and matching words to pictures belonging to the same semantic category impairs performance relative to when stimuli come from different semantic categories (i.e., semantic interference). Despite similar semantic interference phenomena in both picture naming and word-picture matching tasks, the locus of interference has been attributed to different levels of the language system – lexical in naming and semantic in word-picture matching. Although both tasks involve access to shar...

  7. SEMSIN SEMANTIC AND SYNTACTIC PARSER

    Directory of Open Access Journals (Sweden)

    K. K. Boyarsky

    2015-09-01

    Full Text Available The paper describes the principle of operation of the SemSin semantic-syntactic parser, which builds a dependency tree for Russian sentences. The parser consists of four blocks: a dictionary, a morphological analyzer, production rules, and a lexical analyzer. An important logical part of the parser is the pre-syntactic module, which harmonizes and complements the results of morphological analysis, splits the text into paragraphs and individual sentences, and carries out preliminary disambiguation. A characteristic feature of the parser is its open control: the analysis is driven by a set of production rules. A varied set of commands supports both morphological and semantic-syntactic analysis of a sentence. The paper presents the sequence in which the rules are applied and examples of their operation. A specific feature of the rules is that decisions on establishing syntactic links are made while simultaneously removing morphological and semantic ambiguity. The lexical analyzer executes the commands and rules and manages the parser in manual or automatic text-analysis modes. In the first case, the analysis is performed interactively, with step-by-step execution of the rules and inspection of the resulting parse tree. In the second case, analysis results are written to an XML file. Active use of syntactic and semantic dictionary information makes it possible to significantly reduce parsing ambiguity. In addition to marking up the text, the parser can also be used as a tool for information extraction from natural-language texts.

  8. Mobile Information Access with Spoken Query Answering

    DEFF Research Database (Denmark)

    Brøndsted, Tom; Larsen, Henrik Legind; Larsen, Lars Bo

    2006-01-01

    window focused over the part which most likely contains an answer to the query. The two systems are integrated into a full spoken query answering system. The prototype can answer queries and questions within the chosen football (soccer) test domain, but the system has the flexibility for being ported...

  9. SPOKEN AYACUCHO QUECHUA, UNITS 11-20.

    Science.gov (United States)

    PARKER, GARY J.; SOLA, DONALD F.

    THE ESSENTIALS OF AYACUCHO GRAMMAR WERE PRESENTED IN THE FIRST VOLUME OF THIS SERIES, SPOKEN AYACUCHO QUECHUA, UNITS 1-10. THE 10 UNITS IN THIS VOLUME (11-20) ARE INTENDED FOR USE IN AN INTERMEDIATE OR ADVANCED COURSE, AND PRESENT THE STUDENT WITH LENGTHIER AND MORE COMPLEX DIALOGS, CONVERSATIONS, "LISTENING-INS," AND DICTATIONS AS WELL…

  10. SPOKEN CUZCO QUECHUA, UNITS 7-12.

    Science.gov (United States)

    SOLA, DONALD F.; AND OTHERS

    THIS SECOND VOLUME OF AN INTRODUCTORY COURSE IN SPOKEN CUZCO QUECHUA ALSO COMPRISES ENOUGH MATERIAL FOR ONE INTENSIVE SUMMER SESSION COURSE OR ONE SEMESTER OF SEMI-INTENSIVE INSTRUCTION (120 CLASS HOURS). THE METHOD OF PRESENTATION IS ESSENTIALLY THE SAME AS IN THE FIRST VOLUME WITH FURTHER CONTRASTIVE, LINGUISTIC ANALYSIS OF ENGLISH-QUECHUA…

  11. SPOKEN COCHABAMBA QUECHUA, UNITS 13-24.

    Science.gov (United States)

    LASTRA, YOLANDA; SOLA, DONALD F.

    UNITS 13-24 OF THE SPOKEN COCHABAMBA QUECHUA COURSE FOLLOW THE GENERAL FORMAT OF THE FIRST VOLUME (UNITS 1-12). THIS SECOND VOLUME IS INTENDED FOR USE IN AN INTERMEDIATE OR ADVANCED COURSE AND INCLUDES MORE COMPLEX DIALOGS, CONVERSATIONS, "LISTENING-INS," AND DICTATIONS, AS WELL AS GRAMMAR AND EXERCISE SECTIONS COVERING ADDITIONAL…

  12. SPOKEN AYACUCHO QUECHUA. UNITS 1-10.

    Science.gov (United States)

    PARKER, GARY J.; SOLA, DONALD F.

    THIS BEGINNING COURSE IN AYACUCHO QUECHUA, SPOKEN BY ABOUT A MILLION PEOPLE IN SOUTH-CENTRAL PERU, WAS PREPARED TO INTRODUCE THE PHONOLOGY AND GRAMMAR OF THIS DIALECT TO SPEAKERS OF ENGLISH. THE FIRST OF TWO VOLUMES, IT SERVES AS A TEXT FOR A 6-WEEK INTENSIVE COURSE OF 20 CLASS HOURS A WEEK. THE AUTHORS COMPARE AND CONTRAST SIGNIFICANT FEATURES OF…

  13. A Grammar of Spoken Brazilian Portuguese.

    Science.gov (United States)

    Thomas, Earl W.

    This is a first-year text of Portuguese grammar based on the Portuguese of moderately educated Brazilians from the area around Rio de Janeiro. Spoken idiomatic usage is emphasized. An important innovation is found in the presentation of verb tenses; they are presented in the order in which the native speaker learns them. The text is intended to…

  14. Towards Affordable Disclosure of Spoken Word Archives

    NARCIS (Netherlands)

    Ordelman, Roeland J.F.; Heeren, W.F.L.; Huijbregts, M.A.H.; Hiemstra, Djoerd; de Jong, Franciska M.G.; Larson, M; Fernie, K; Oomen, J; Cigarran, J.

    2008-01-01

    This paper presents and discusses ongoing work aiming at affordable disclosure of real-world spoken word archives in general, and in particular of a collection of recorded interviews with Dutch survivors of World War II concentration camp Buchenwald. Given such collections, the least we want to be

  15. Towards Affordable Disclosure of Spoken Heritage Archives

    NARCIS (Netherlands)

    Larson, M; Ordelman, Roeland J.F.; Heeren, W.F.L.; Fernie, K; de Jong, Franciska M.G.; Huijbregts, M.A.H.; Oomen, J; Hiemstra, Djoerd

    2009-01-01

    This paper presents and discusses ongoing work aiming at affordable disclosure of real-world spoken heritage archives in general, and in particular of a collection of recorded interviews with Dutch survivors of World War II concentration camp Buchenwald. Given such collections, we at least want to

  16. Mapping Students' Spoken Conceptions of Equality

    Science.gov (United States)

    Anakin, Megan

    2013-01-01

    This study expands contemporary theorising about students' conceptions of equality. A nationally representative sample of New Zealand students were asked to provide a spoken numerical response and an explanation as they solved an arithmetic additive missing number problem. Students' responses were conceptualised as acts of communication and…

  17. What factors underlie children's susceptibility to semantic and phonological false memories? Investigating the roles of language skills and auditory short-term memory.

    Science.gov (United States)

    McGeown, Sarah P; Gray, Eleanor A; Robinson, Jamey L; Dewhurst, Stephen A

    2014-06-01

    Two experiments investigated the cognitive skills that underlie children's susceptibility to semantic and phonological false memories in the Deese/Roediger-McDermott procedure (Deese, 1959; Roediger & McDermott, 1995). In Experiment 1, performance on the Verbal Similarities subtest of the British Ability Scales (BAS) II (Elliott, Smith, & McCulloch, 1997) predicted correct and false recall of semantic lures. In Experiment 2, performance on the Yopp-Singer Test of Phonemic Segmentation (Yopp, 1988) did not predict correct recall, but inversely predicted the false recall of phonological lures. Auditory short-term memory was a negative predictor of false recall in Experiment 1, but not in Experiment 2. The findings are discussed in terms of the formation of gist and verbatim traces as proposed by fuzzy trace theory (Reyna & Brainerd, 1998) and the increasing automaticity of associations as proposed by associative activation theory (Howe, Wimmer, Gagnon, & Plumpton, 2009). Copyright © 2014 Elsevier B.V. All rights reserved.

  18. Parental mode of communication is essential for speech and language outcomes in cochlear implanted children

    DEFF Research Database (Denmark)

    Percy-Smith, Lone; Cayé-Thomasen, Per; Breinegaard, Nina

    2010-01-01

    The present study demonstrates a very strong effect of the parental communication mode on the auditory capabilities and speech/language outcome for cochlear implanted children. The children exposed to spoken language had higher odds of scoring high in all tests applied, and the findings suggest a very clear benefit of spoken language communication with a cochlear implanted child.

  19. Business Spoken English Learning Strategies for Chinese Enterprise Staff

    Institute of Scientific and Technical Information of China (English)

    Han Li

    2013-01-01

    This study addresses the issue of promoting effective business spoken English among enterprise staff in China. It aims to assess spoken English learning methods and to identify the difficulties staff face in English oral expression in business settings. It also provides strategies for enhancing enterprise staff's level of business spoken English.

  20. Language and Literacy: The Case of India.

    Science.gov (United States)

    Sridhar, Kamal K.

    Language and literacy issues in India are reviewed in terms of background, steps taken to combat illiteracy, and some problems associated with literacy. The following facts are noted: India has 106 languages spoken by more than 685 million people, there are several minor script systems, a major language has different dialects, a language may use…