WorldWideScience

Sample records for spoken language casl

  1. CASL, the Common Algebraic Specification Language

    DEFF Research Database (Denmark)

    Mossakowski, Till; Haxthausen, Anne Elisabeth; Sannella, Donald

    2008-01-01

    CASL is an expressive specification language that has been designed to supersede many existing algebraic specification languages and provide a standard. CASL consists of several layers, including basic (unstructured) specifications, structured specifications and architectural specifications...

  2. CASL - The Common Algebraic Specification Language - Semantics

    DEFF Research Database (Denmark)

    Haxthausen, Anne

    1998-01-01

    This is version 1.0 of the CASL Language Summary, annotated by the CoFI Semantics Task Group with the semantics of constructs. This is the first complete but possibly imperfect version of the semantics. It was compiled prior to the CoFI workshop at Cachan in November 1998.

  3. CASL - The Common Algebraic Specification Language - Summary

    DEFF Research Database (Denmark)

    Haxthausen, Anne

    1997-01-01

    This Summary is the basis for the Design Proposal [LD97b] for CASL, the Common Algebraic Specification Language, prepared by the Language Design Task Group of CoFI, the Common Framework Initiative. It gives the abstract syntax, and informally describes its intended semantics. It is accompanied ... of this Summary shows just which bits of CASL are currently subject to reconsideration or revision, in view of the referees' comments and the recommendations made by the CoFI Semantics Task Group [Sem97c]. Changes that were made since the previous version are highlighted in the same way as this sentence. Points...

  4. Development of parsing tools for Casl using generic language technology

    NARCIS (Netherlands)

    M.G.J. van den Brand (Mark); J. Scheerder

    2000-01-01

    An environment for the Common Algebraic Specification Language CASL consists of independent tools. A number of CASL tools have been built using the algebraic formalism ASF+SDF and the ASF+SDF Meta-Environment. CASL supports user-defined syntax, which is non-trivial to parse: ASF+SDF offers a powerful...

  5. CASL - The CoFI Algebraic Specification Language - Semantics

    DEFF Research Database (Denmark)

    Haxthausen, Anne

    1999-01-01

    This is version 1.0 of the CASL Language Summary, annotated by the CoFI Semantics Task Group with the semantics of constructs. This is the second complete but possibly imperfect version of the semantics. It was compiled prior to the CoFI workshop in Amsterdam in March 1999.

  6. Teaching the Spoken Language.

    Science.gov (United States)

    Brown, Gillian

    1981-01-01

    Issues involved in teaching and assessing communicative competence are identified and applied to adolescent native English speakers with low levels of academic achievement. A distinction is drawn between transactional versus interactional speech, short versus long speaking turns, and spoken language influenced or not influenced by written…

  7. Spoken Language Understanding Software for Language Learning

    Directory of Open Access Journals (Sweden)

    Hassan Alam

    2008-04-01

    In this paper we describe a preliminary, work-in-progress Spoken Language Understanding Software (SLUS) with tailored feedback options, which uses an interactive spoken language interface to teach Iraqi Arabic and culture to second language learners. The SLUS analyzes input speech by the second language learner and grades it for correct pronunciation in terms of supra-segmental and rudimentary segmental errors such as missing consonants. We evaluated this software on training data with the help of two native speakers, and found that the software achieved an accuracy of around 70% in the law and order domain. For future work, we plan to develop similar systems for multiple languages.

  8. Native language, spoken language, translation and trade

    OpenAIRE

    Jacques Melitz; Farid Toubal

    2012-01-01

    We construct new series for common native language and common spoken language for 195 countries, which we use together with series for common official language and linguistic proximity in order to draw inferences about (1) the aggregate impact of all linguistic factors on bilateral trade, (2) whether the linguistic influences come from ethnicity and trust or ease of communication, and (3) insofar as they come from ease of communication, to what extent translation and interpreters play a role...

  9. Sentence Recognition in Quiet and Noise by Pediatric Cochlear Implant Users: Relationships to Spoken Language.

    Science.gov (United States)

    Eisenberg, Laurie S; Fisher, Laurel M; Johnson, Karen C; Ganguly, Dianne Hammes; Grace, Thelma; Niparko, John K

    2016-02-01

    We investigated associations between sentence recognition and spoken language for children with cochlear implants (CI) enrolled in the Childhood Development after Cochlear Implantation (CDaCI) study. In a prospective longitudinal study, sentence recognition percent-correct scores and language standard scores were correlated at 48, 60, and 72 months post-CI activation. Six tertiary CI centers in the United States. Children with CIs participating in the CDaCI study. Cochlear implantation. Sentence recognition was assessed using the Hearing In Noise Test for Children (HINT-C) in quiet and at +10, +5, and 0 dB signal-to-noise ratio (S/N). Spoken language was assessed using the Clinical Assessment of Spoken Language (CASL) core composite and the antonyms, paragraph comprehension (syntax comprehension), syntax construction (expression), and pragmatic judgment tests. Positive linear relationships were found between CASL scores and HINT-C sentence scores when the sentences were delivered in quiet and at +10 and +5 dB S/N, but not at 0 dB S/N. At 48 months post-CI, sentence scores at +10 and +5 dB S/N were most strongly associated with CASL antonyms. At 60 and 72 months, sentence recognition in noise was most strongly associated with paragraph comprehension and syntax construction. Children with CIs learn spoken language in a variety of acoustic environments. Despite the observed inconsistent performance in different listening situations and noise-challenged environments, many children with CIs are able to build lexicons and learn the rules of grammar that enable recognition of sentences.

  10. Spoken language corpora for the nine official African languages of ...

    African Journals Online (AJOL)

    Spoken language corpora for the nine official African languages of South Africa. Jens Allwood, AP Hendrikse. Abstract. In this paper we give an outline of a corpus planning project which aims to develop linguistic resources for the nine official African languages of South Africa in the form of corpora, more specifically spoken ...

  11. Professionals' Guidance about Spoken Language Multilingualism and Spoken Language Choice for Children with Hearing Loss

    Science.gov (United States)

    Crowe, Kathryn; McLeod, Sharynne

    2016-01-01

    The purpose of this research was to investigate factors that influence professionals' guidance of parents of children with hearing loss regarding spoken language multilingualism and spoken language choice. Sixteen professionals who provide services to children and young people with hearing loss completed an online survey, rating the importance of…

  12. Direction Asymmetries in Spoken and Signed Language Interpreting

    Science.gov (United States)

    Nicodemus, Brenda; Emmorey, Karen

    2013-01-01

    Spoken language (unimodal) interpreters often prefer to interpret from their non-dominant language (L2) into their native language (L1). Anecdotally, signed language (bimodal) interpreters express the opposite bias, preferring to interpret from L1 (spoken language) into L2 (signed language). We conducted a large survey study ("N" =…

  13. Spoken Grammar and Its Role in the English Language Classroom

    Science.gov (United States)

    Hilliard, Amanda

    2014-01-01

    This article addresses key issues and considerations for teachers wanting to incorporate spoken grammar activities into their own teaching and also focuses on six common features of spoken grammar, with practical activities and suggestions for teaching them in the language classroom. The hope is that this discussion of spoken grammar and its place…

  14. Deep bottleneck features for spoken language identification.

    Directory of Open Access Journals (Sweden)

    Bing Jiang

    A key problem in spoken language identification (LID) is to design effective representations which are specific to language information. For example, in recent years, representations based on both phonotactic and acoustic features have proven their effectiveness for LID. Although advances in machine learning have led to significant improvements, LID performance is still lacking, especially for short duration speech utterances. With the hypothesis that language information is weak and represented only latently in speech, and is largely dependent on the statistical properties of the speech content, existing representations may be insufficient. Furthermore, they may be susceptible to the variations caused by different speakers, specific content of the speech segments, and background noise. To address this, we propose using Deep Bottleneck Features (DBF) for spoken LID, motivated by the success of Deep Neural Networks (DNN) in speech recognition. We show that DBFs can form a low-dimensional compact representation of the original inputs with a powerful descriptive and discriminative capability. To evaluate the effectiveness of this, we design two acoustic models, termed DBF-TV and parallel DBF-TV (PDBF-TV), using a DBF based i-vector representation for each speech utterance. Results on NIST language recognition evaluation 2009 (LRE09) show significant improvements over state-of-the-art systems. By fusing the output of phonotactic and acoustic approaches, we achieve an EER of 1.08%, 1.89% and 7.01% for 30 s, 10 s and 3 s test utterances respectively. Furthermore, various DBF configurations have been extensively evaluated, and an optimal system proposed.
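
    A note on the technique for readers unfamiliar with it: a bottleneck DNN is trained on a supervised frame-level task, and the activations of its narrow middle layer are then used as compact features for a separate back-end. The sketch below illustrates only this extraction step; the layer sizes, training targets, frame splicing, and i-vector back-end here are illustrative assumptions, not the paper's actual configuration.

      # Minimal sketch of a deep bottleneck feature (DBF) extractor in PyTorch.
      # Dimensions and targets are illustrative assumptions, not the paper's setup.
      import torch
      import torch.nn as nn

      class BottleneckDNN(nn.Module):
          def __init__(self, n_in=39 * 11, n_hidden=1024, n_bottleneck=40,
                       n_targets=3000):
              super().__init__()
              self.front = nn.Sequential(            # wide layers before the bottleneck
                  nn.Linear(n_in, n_hidden), nn.Sigmoid(),
                  nn.Linear(n_hidden, n_hidden), nn.Sigmoid(),
              )
              self.bottleneck = nn.Linear(n_hidden, n_bottleneck)  # narrow layer
              self.back = nn.Sequential(             # discarded after training
                  nn.Sigmoid(),
                  nn.Linear(n_bottleneck, n_hidden), nn.Sigmoid(),
                  nn.Linear(n_hidden, n_targets),    # frame-level training targets
              )

          def forward(self, x):                      # used during supervised training
              return self.back(self.bottleneck(self.front(x)))

          def extract_dbf(self, x):                  # used at LID time
              with torch.no_grad():
                  return self.bottleneck(self.front(x))

      frames = torch.randn(200, 39 * 11)             # 200 spliced feature frames (dummy)
      dbf = BottleneckDNN().extract_dbf(frames)      # -> tensor of shape (200, 40)

    In the pipeline the abstract describes, such per-frame DBFs would then be summarized into an utterance-level i-vector before language classification.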

  15. Assessing spoken-language educational interpreting: Measuring up ...

    African Journals Online (AJOL)

    Assessing spoken-language educational interpreting: Measuring up and measuring right. Lenelle Foster, Adriaan Cupido. Abstract. This article, primarily, presents a critical evaluation of the development and refinement of the assessment instrument used to assess formally the spoken-language educational interpreters at ...

  16. Spoken Indian language identification: a review of features and ...

    Indian Academy of Sciences (India)

    BAKSHI AARTI

    2018-04-12

    Apr 12, 2018 ... sound of that language. These language-specific properties can be exploited to identify a spoken language reliably. Automatic language identification has emerged as a prominent research area in Indian languages processing. People from different regions of India speak around 800 different languages.

  17. Automatic disambiguation of morphosyntax in spoken language corpora

    OpenAIRE

    Parisse , Christophe; Le Normand , Marie-Thérèse

    2000-01-01

    The use of computer tools has led to major advances in the study of spoken language corpora. One area that has shown particular progress is the study of child language development. Although it is now easy to lexically tag every word in a spoken language corpus, one still has to choose between numerous ambiguous forms, especially with languages such as French or English, where more than 70% of words are ambiguous. Computational linguistics can now provide a fully automatic...

  18. Using Spoken Language to Facilitate Military Transportation Planning

    National Research Council Canada - National Science Library

    Bates, Madeleine; Ellard, Dan; Peterson, Pat; Shaked, Varda

    1991-01-01

    ... In an effort to demonstrate the relevance of SIS technology to real-world military applications, BBN has undertaken the task of providing a spoken language interface to DART, a system for military...

  19. ELSIE: The Quick Reaction Spoken Language Translation (QRSLT)

    National Research Council Canada - National Science Library

    Montgomery, Christine

    2000-01-01

    The objective of this effort was to develop a prototype, hand-held or body-mounted spoken language translator to assist military and law enforcement personnel in interacting with non-English-speaking people...

  20. "Visual" Cortex Responds to Spoken Language in Blind Children.

    Science.gov (United States)

    Bedny, Marina; Richardson, Hilary; Saxe, Rebecca

    2015-08-19

    Plasticity in the visual cortex of blind individuals provides a rare window into the mechanisms of cortical specialization. In the absence of visual input, occipital ("visual") brain regions respond to sound and spoken language. Here, we examined the time course and developmental mechanism of this plasticity in blind children. Nineteen blind and 40 sighted children and adolescents (4-17 years old) listened to stories and two auditory control conditions (unfamiliar foreign speech, and music). We find that "visual" cortices of young blind (but not sighted) children respond to sound. Responses to nonlanguage sounds increased between the ages of 4 and 17. By contrast, occipital responses to spoken language were maximal by age 4 and were not related to Braille learning. These findings suggest that occipital plasticity for spoken language is independent of plasticity for Braille and for sound. We conclude that in the absence of visual input, spoken language colonizes the visual system during brain development. Our findings suggest that early in life, human cortex has a remarkably broad computational capacity. The same cortical tissue can take on visual perception and language functions. Studies of plasticity provide key insights into how experience shapes the human brain. The "visual" cortex of adults who are blind from birth responds to touch, sound, and spoken language. To date, all existing studies have been conducted with adults, so little is known about the developmental trajectory of plasticity. We used fMRI to study the emergence of "visual" cortex responses to sound and spoken language in blind children and adolescents. We find that "visual" cortex responses to sound increase between 4 and 17 years of age. By contrast, responses to spoken language are present by 4 years of age and are not related to Braille-learning. These findings suggest that, early in development, human cortex can take on a strikingly wide range of functions. Copyright © 2015 the authors.

  1. Development of a spoken language identification system for South African languages

    CSIR Research Space (South Africa)

    Peché, M

    2009-12-01

    This article introduces the first Spoken Language Identification system developed to distinguish among all eleven of South Africa’s official languages. The PPR-LM (Parallel Phoneme Recognition followed by Language Modeling) architecture...
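
    For readers unfamiliar with the named architecture: in PPR-LM, several phone recognizers decode each utterance in parallel, and per-language phone n-gram models score the resulting token streams; the language with the best combined score wins. A minimal sketch of that scoring step follows, with toy recognizers and bigram tables standing in for the models a real system would train on transcribed speech.

      # Schematic PPR-LM scoring. The "recognizers" and bigram tables below are
      # toy stand-ins; a real system trains phone recognizers and per-language
      # phone n-gram models on transcribed speech.
      from collections import defaultdict

      def bigram_logprob(phones, bigram_lp, unk=-10.0):
          # Score a phone sequence under one language's bigram log-prob table.
          return sum(bigram_lp.get((a, b), unk) for a, b in zip(phones, phones[1:]))

      def ppr_lm_identify(utterance, recognizers, lm_bank):
          # recognizers: name -> callable(utterance) -> list of phone tokens
          # lm_bank: (recognizer name, language) -> bigram log-prob table
          totals = defaultdict(float)
          for name, recognize in recognizers.items():
              phones = recognize(utterance)          # parallel phone decoding
              for (rec, lang), lm in lm_bank.items():
                  if rec == name:
                      totals[lang] += bigram_logprob(phones, lm)
          return max(totals, key=totals.get)         # best-scoring language

      toy_rec = {"rec1": lambda utt: utt.split()}    # stand-in "phone recognizer"
      toy_lms = {("rec1", "English"): {("h", "i"): -0.5},
                 ("rec1", "French"): {("h", "i"): -3.0}}
      print(ppr_lm_identify("h i", toy_rec, toy_lms))  # -> English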

  2. Assessing spoken-language educational interpreting: Measuring up ...

    African Journals Online (AJOL)

    Kate H

    ... assessment instrument used to assess formally the spoken-language educational interpreters at Stellenbosch University (SU). Research ... Is the interpreter suited to the module? Is the interpreter easier to follow? Technical: microphone technique, lag, completeness, language use, vocabulary, role, personal objectives ...

  3. IMPACT ON THE INDIGENOUS LANGUAGES SPOKEN IN NIGERIA ...

    African Journals Online (AJOL)

    This article examines the impact of the hegemony of English, as a common lingua franca, referred to as a global language, on the indigenous languages spoken in Nigeria. Since English, through the British political imperialism and because of the economic supremacy of English dominated countries, has assumed the ...

  4. CASL Verification and Validation Plan

    Energy Technology Data Exchange (ETDEWEB)

    Mousseau, Vincent Andrew [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Dinh, Nam [North Carolina State Univ., Raleigh, NC (United States)

    2016-06-30

    This report documents the Consortium for Advanced Simulation of LWRs (CASL) verification and validation plan. The document builds upon input from CASL subject matter experts, most notably the CASL Challenge Problem Product Integrators, CASL Focus Area leaders, and CASL code development and assessment teams. It is a living document that will track CASL's progress on verification and validation for both the CASL codes (including MPACT, CTF, BISON, MAMBA) and the CASL challenge problems (CIPS, PCI, DNB). The CASL codes and the CASL challenge problems are at differing levels of maturity with respect to validation and verification. The gap analysis will summarize additional work that needs to be done. Additional VVUQ work will be done as resources permit. This report is prepared for the Department of Energy’s (DOE’s) CASL program in support of milestone CASL.P13.02.

  5. Does textual feedback hinder spoken interaction in natural language?

    Science.gov (United States)

    Le Bigot, Ludovic; Terrier, Patrice; Jamet, Eric; Botherel, Valerie; Rouet, Jean-Francois

    2010-01-01

    The aim of the study was to determine the influence of textual feedback on the content and outcome of spoken interaction with a natural language dialogue system. More specifically, the assumption that textual feedback could disrupt spoken interaction was tested in a human-computer dialogue situation. In total, 48 adult participants, familiar with the system, had to find restaurants based on simple or difficult scenarios using a real natural language service system in a speech-only (phone), speech plus textual dialogue history (multimodal) or text-only (web) modality. The linguistic contents of the dialogues differed as a function of modality, but were similar whether the textual feedback was included in the spoken condition or not. These results add to burgeoning research efforts on multimodal feedback, in suggesting that textual feedback may have little or no detrimental effect on information searching with a real system. STATEMENT OF RELEVANCE: The results suggest that adding textual feedback to interfaces for human-computer dialogue could enhance spoken interaction rather than create interference. The literature currently suggests that adding textual feedback to tasks that depend on the visual sense benefits human-computer interaction. The addition of textual output when the spoken modality is heavily taxed by the task was investigated.

  6. Porting a spoken language identification system to a new environment.

    CSIR Research Space (South Africa)

    Peche, M

    2008-11-01

    ... the carefully selected training data used to construct the system initially. The authors investigated the process of porting a Spoken Language Identification (S-LID) system to a new environment and describe methods to prepare it for more effective use...

  7. Automatic disambiguation of morphosyntax in spoken language corpora.

    Science.gov (United States)

    Parisse, C; Le Normand, M T

    2000-08-01

    The use of computer tools has led to major advances in the study of spoken language corpora. One area that has shown particular progress is the study of child language development. Although it is now easy to lexically tag every word in a spoken language corpus, one still has to choose between numerous ambiguous forms, especially with languages such as French or English, where more than 70% of words are ambiguous. Computational linguistics can now provide a fully automatic disambiguation of lexical tags. The tool presented here (POST) can tag and disambiguate a large text in a few seconds. This tool complements systems dealing with language transcription and suggests further theoretical developments in the assessment of the status of morphosyntax in spoken language corpora. The program currently works for French and English, but it can be easily adapted for use with other languages. The analysis and computation of a corpus produced by normal French children 2-4 years of age, as well as of a sample corpus produced by French SLI children, are given as examples.
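
    POST's internal models are not described in this abstract, so the following is only a generic illustration of the kind of statistical disambiguation such a tool performs: a tiny Viterbi search that picks, for each ambiguous word, the tag sequence with the best lexical and tag-transition log-probabilities. The French example and all probability tables are invented for the sketch.

      # Toy Viterbi disambiguation over ambiguous tag sets. All tables are
      # invented; they are not POST's actual lexicon or statistics.
      def viterbi_tags(words, lexicon, trans, start="<s>"):
          # lexicon: word -> {tag: log P(word | tag)}
          # trans: (previous tag, tag) -> log P(tag | previous tag)
          paths = {start: (0.0, [])}                 # tag -> (log-prob, tag sequence)
          for w in words:
              new_paths = {}
              for tag, emit_lp in lexicon[w].items():    # only tags the word allows
                  prev = max(paths, key=lambda p: paths[p][0] + trans.get((p, tag), -10.0))
                  lp = paths[prev][0] + trans.get((prev, tag), -10.0) + emit_lp
                  new_paths[tag] = (lp, paths[prev][1] + [tag])
              paths = new_paths
          return max(paths.values())[1]              # best-scoring tag sequence

      # French "la porte" is ambiguous: determiner+noun or pronoun+verb.
      lexicon = {"la": {"DET": -0.1, "PRO": -2.0},
                 "porte": {"N": -0.7, "V": -0.9}}
      trans = {("<s>", "DET"): -0.2, ("<s>", "PRO"): -2.0,
               ("DET", "N"): -0.3, ("DET", "V"): -4.0,
               ("PRO", "V"): -0.5, ("PRO", "N"): -4.0}
      print(viterbi_tags(["la", "porte"], lexicon, trans))   # -> ['DET', 'N']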

  8. The Child's Path to Spoken Language.

    Science.gov (United States)

    Locke, John L.

    A major synthesis of the latest research on early language acquisition, this book explores what gives infants the remarkable capacity to progress from babbling to meaningful sentences, and what inclines a child to speak. The book examines the neurological, perceptual, social, and linguistic aspects of language acquisition in young children, from…

  9. Prosodic Parallelism – comparing spoken and written language

    Directory of Open Access Journals (Sweden)

    Richard Wiese

    2016-10-01

    The Prosodic Parallelism hypothesis claims that adjacent prosodic categories prefer identical branching of internal adjacent constituents. According to Wiese and Speyer (2015), this preference implies that feet contained in the same phonological phrase display either binary or unary branching, but not different types of branching. The seemingly free schwa-zero alternations at the end of some words in German make it possible to test this hypothesis. The hypothesis was successfully tested by conducting a corpus study which used large-scale bodies of written German. As some open questions remain, and as it is unclear whether Prosodic Parallelism is valid for the spoken modality as well, the present study extends this inquiry to spoken German. As in the previous study, the results of a corpus analysis recruiting a variety of linguistic constructions are presented. The Prosodic Parallelism hypothesis can be demonstrated to be valid for spoken German as well as for written German. The paper thus contributes to the question of whether prosodic preferences are similar between the spoken and written modes of a language. Some consequences of the results for the production of language are discussed.

  10. Phonotactic spoken language identification with limited training data

    CSIR Research Space (South Africa)

    Peche, M

    2007-08-01

    ... rates when no Japanese acoustic models are constructed. An increasing amount of Japanese training data is used to train the language classifier of an English-only (E), an English-French (EF), and an English-French-Portuguese PPR system. ... Because of their role as world languages that are widely spoken in Africa, our initial LID system was designed to distinguish between English, French and Portuguese. We therefore trained phone recognizers and language...

  11. Using Language Sample Analysis to Assess Spoken Language Production in Adolescents

    Science.gov (United States)

    Miller, Jon F.; Andriacchi, Karen; Nockerts, Ann

    2016-01-01

    Purpose: This tutorial discusses the importance of language sample analysis and how Systematic Analysis of Language Transcripts (SALT) software can be used to simplify the process and effectively assess the spoken language production of adolescents. Method: Over the past 30 years, thousands of language samples have been collected from typical…

  12. Spoken English Language Development Among Native Signing Children With Cochlear Implants

    OpenAIRE

    Davidson, Kathryn; Lillo-Martin, Diane; Chen Pichler, Deborah

    2013-01-01

    Bilingualism is common throughout the world, and bilingual children regularly develop into fluently bilingual adults. In contrast, children with cochlear implants (CIs) are frequently encouraged to focus on a spoken language to the exclusion of sign language. Here, we investigate the spoken English language skills of 5 children with CIs who also have deaf signing parents, and so receive exposure to a full natural sign language (American Sign Language, ASL) from birth, in addition to spoken English...

  13. Spoken Language Understanding Systems for Extracting Semantic Information from Speech

    CERN Document Server

    Tur, Gokhan

    2011-01-01

    Spoken language understanding (SLU) is an emerging field in between speech and language processing, investigating human/machine and human/human communication by leveraging technologies from signal processing, pattern recognition, machine learning and artificial intelligence. SLU systems are designed to extract the meaning from speech utterances and its applications are vast, from voice search in mobile devices to meeting summarization, attracting interest from both commercial and academic sectors. Both human/machine and human/human communications can benefit from the application of SLU, using...

  14. Spoken Spanish Language Development at the High School Level: A Mixed-Methods Study

    Science.gov (United States)

    Moeller, Aleidine J.; Theiler, Janine

    2014-01-01

    Communicative approaches to teaching language have emphasized the centrality of oral proficiency in the language acquisition process, but research investigating oral proficiency has been surprisingly limited, yielding an incomplete understanding of spoken language development. This study investigated the development of spoken language at the high…

  15. Asian/Pacific Islander Languages Spoken by English Learners (ELs). Fast Facts

    Science.gov (United States)

    Office of English Language Acquisition, US Department of Education, 2015

    2015-01-01

    The Office of English Language Acquisition (OELA) has synthesized key data on English learners (ELs) into two-page PDF sheets, by topic, with graphics, plus key contacts. The topics for this report on Asian/Pacific Islander languages spoken by English Learners (ELs) include: (1) Top 10 Most Common Asian/Pacific Islander Languages Spoken Among ELs:…

  16. Spoken Language Production in Young Adults: Examining Syntactic Complexity.

    Science.gov (United States)

    Nippold, Marilyn A; Frantz-Kaspar, Megan W; Vigeland, Laura M

    2017-05-24

    In this study, we examined syntactic complexity in the spoken language samples of young adults. Its purpose was to contribute to the expanding knowledge base in later language development and to begin building a normative database of language samples that potentially could be used to evaluate young adults with known or suspected language impairment. Forty adults (mean age = 22 years, 10 months) with typical language development participated in an interview that consisted of 3 speaking tasks: a general conversation about common, everyday topics; a narrative retelling task that involved fables; and a question-and-answer, critical-thinking task about the fables. Each speaker's interview was audio-recorded, transcribed, broken into communication units, coded for main and subordinate clauses, entered into Systematic Analysis of Language Transcripts (Miller, Iglesias, & Nockerts, 2004), and analyzed for mean length of communication unit and clausal density. Both the narrative and critical-thinking tasks elicited significantly greater syntactic complexity than the conversational task. It was also found that syntactic complexity was significantly greater during the narrative task than the critical-thinking task. Syntactic complexity was best revealed by a narrative task that involved fables. The study offers benchmarks for language development during early adulthood.
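
    The two indices used in this study are simple ratios once a sample has been segmented and coded: mean length of communication unit is total words over units, and clausal density is total clauses over units. A minimal sketch of that arithmetic follows; the coded units are invented for illustration, and SALT's actual transcript format differs.

      # Minimal sketch: syntactic-complexity indices from coded C-units.
      # The coded units are invented; SALT's real transcript format differs.
      units = [
          {"words": 9, "clauses": 2},    # e.g. one main clause + one subordinate
          {"words": 5, "clauses": 1},
          {"words": 12, "clauses": 3},
      ]

      mlcu = sum(u["words"] for u in units) / len(units)        # mean length of C-unit
      density = sum(u["clauses"] for u in units) / len(units)   # clauses per C-unit
      print(f"MLCU = {mlcu:.2f} words; clausal density = {density:.2f}")
      # -> MLCU = 8.67 words; clausal density = 2.00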

  17. Native Language Spoken as a Risk Marker for Tooth Decay.

    Science.gov (United States)

    Carson, J; Walker, L A; Sanders, B J; Jones, J E; Weddell, J A; Tomlin, A M

    2015-01-01

    The purpose of this study was to assess dmft, the number of decayed, missing (due to caries), and/or filled primary teeth, of English-speaking and non-English speaking patients of a hospital-based pediatric dental clinic under the age of 72 months to determine if native language is a risk marker for tooth decay. Records from an outpatient dental clinic which met the inclusion criteria were reviewed. Patient demographics and dmft score were recorded, and the patients were separated into three groups by the native language spoken by their parents: English, Spanish and all other languages. A total of 419 charts were assessed: 253 English-speaking, 126 Spanish-speaking, and 40 other native languages. After accounting for patient characteristics, dmft was significantly higher for the other language group than for the English-speaking group (p < 0.05). Those patients under 72 months of age whose parents' native language is not English or Spanish have the highest risk for increased dmft when compared to English and Spanish speaking patients. Providers should consider taking additional time to educate patients and their parents, in their native language, on the importance of routine dental care and oral hygiene.

  18. Pointing and Reference in Sign Language and Spoken Language: Anchoring vs. Identifying

    Science.gov (United States)

    Barberà, Gemma; Zwets, Martine

    2013-01-01

    In both signed and spoken languages, pointing serves to direct an addressee's attention to a particular entity. This entity may be either present or absent in the physical context of the conversation. In this article we focus on pointing directed to nonspeaker/nonaddressee referents in Sign Language of the Netherlands (Nederlandse Gebarentaal,…

  19. The Listening and Spoken Language Data Repository: Design and Project Overview

    Science.gov (United States)

    Bradham, Tamala S.; Fonnesbeck, Christopher; Toll, Alice; Hecht, Barbara F.

    2018-01-01

    Purpose: The purpose of the Listening and Spoken Language Data Repository (LSL-DR) was to address a critical need for a systemwide outcome data-monitoring program for the development of listening and spoken language skills in highly specialized educational programs for children with hearing loss highlighted in Goal 3b of the 2007 Joint Committee…

  20. Give and take: syntactic priming during spoken language comprehension.

    Science.gov (United States)

    Thothathiri, Malathi; Snedeker, Jesse

    2008-07-01

    Syntactic priming during language production is pervasive and well-studied. Hearing, reading, speaking or writing a sentence with a given structure increases the probability of subsequently producing the same structure, regardless of whether the prime and target share lexical content. In contrast, syntactic priming during comprehension has proven more elusive, fueling claims that comprehension is less dependent on general syntactic representations and more dependent on lexical knowledge. In three experiments we explored syntactic priming during spoken language comprehension. Participants acted out double-object (DO) or prepositional-object (PO) dative sentences while their eye movements were recorded. Prime sentences used different verbs and nouns than the target sentences. In target sentences, the onset of the direct-object noun was consistent with both an animate recipient and an inanimate theme, creating a temporary ambiguity in the argument structure of the verb (DO e.g., Show the horse the book; PO e.g., Show the horn to the dog). We measured the difference in looks to the potential recipient and the potential theme during the ambiguous interval. In all experiments, participants who heard DO primes showed a greater preference for the recipient over the theme than those who heard PO primes, demonstrating across-verb priming during online language comprehension. These results accord with priming found in production studies, indicating a role for abstract structural information during comprehension as well as production.
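
    The dependent measure, the difference in looks to the potential recipient versus the potential theme during the ambiguous interval, reduces to a difference of fixation proportions. A toy illustration with invented fixation samples:

      # Toy preference score: proportion of eye-tracking samples on the potential
      # recipient minus proportion on the potential theme during the ambiguous
      # window. The fixation stream is invented for illustration.
      samples = ["recipient", "theme", "recipient", "other",
                 "recipient", "theme", "recipient", "recipient"]

      p_recipient = samples.count("recipient") / len(samples)
      p_theme = samples.count("theme") / len(samples)
      preference = p_recipient - p_theme     # more positive after DO primes
      print(f"recipient {p_recipient:.2f}, theme {p_theme:.2f}, "
            f"preference {preference:+.2f}")   # -> preference +0.38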

  1. CASL Dakota Capabilities Summary

    Energy Technology Data Exchange (ETDEWEB)

    Adams, Brian M. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Simmons, Chris [Univ. of Texas, Austin, TX (United States); Williams, Brian J. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-10-10

    The Dakota software project serves the mission of Sandia National Laboratories and supports a worldwide user community by delivering state-of-the-art research and robust, usable software for optimization and uncertainty quantification. These capabilities enable advanced exploration and risk-informed prediction with a wide range of computational science and engineering models. Dakota is the verification and validation (V&V) / uncertainty quantification (UQ) software delivery vehicle for CASL, allowing analysts across focus areas to apply these capabilities to myriad nuclear engineering analyses.

  2. Foreign Language Tutoring in Oral Conversations Using Spoken Dialog Systems

    Science.gov (United States)

    Lee, Sungjin; Noh, Hyungjong; Lee, Jonghoon; Lee, Kyusong; Lee, Gary Geunbae

    Although there have been enormous investments into English education all around the world, not many differences have been made to change the English instruction style. Considering the shortcomings for the current teaching-learning methodology, we have been investigating advanced computer-assisted language learning (CALL) systems. This paper aims at summarizing a set of POSTECH approaches including theories, technologies, systems, and field studies and providing relevant pointers. On top of the state-of-the-art technologies of spoken dialog system, a variety of adaptations have been applied to overcome some problems caused by numerous errors and variations naturally produced by non-native speakers. Furthermore, a number of methods have been developed for generating educational feedback that help learners develop to be proficient. Integrating these efforts resulted in intelligent educational robots — Mero and Engkey — and virtual 3D language learning games, Pomy. To verify the effects of our approaches on students' communicative abilities, we have conducted a field study at an elementary school in Korea. The results showed that our CALL approaches can be enjoyable and fruitful activities for students. Although the results of this study bring us a step closer to understanding computer-based education, more studies are needed to consolidate the findings.

  3. Auditory and verbal memory predictors of spoken language skills in children with cochlear implants

    NARCIS (Netherlands)

    Hoog, B.E. de; Langereis, M.C.; Weerdenburg, M. van; Keuning, J.; Knoors, H.; Verhoeven, L.

    2016-01-01

    BACKGROUND: Large variability in individual spoken language outcomes remains a persistent finding in the group of children with cochlear implants (CIs), particularly in their grammatical development. AIMS: In the present study, we examined the extent of delay in lexical and morphosyntactic spoken...

  4. Auditory and verbal memory predictors of spoken language skills in children with cochlear implants

    NARCIS (Netherlands)

    Hoog, B.E. de; Langereis, M.C.; Weerdenburg, M.W.C. van; Keuning, J.; Knoors, H.E.T.; Verhoeven, L.T.W.

    2016-01-01

    Background: Large variability in individual spoken language outcomes remains a persistent finding in the group of children with cochlear implants (CIs), particularly in their grammatical development. Aims: In the present study, we examined the extent of delay in lexical and morphosyntactic spoken...

  5. Sign Language and Spoken Language for Children With Hearing Loss: A Systematic Review.

    Science.gov (United States)

    Fitzpatrick, Elizabeth M; Hamel, Candyce; Stevens, Adrienne; Pratt, Misty; Moher, David; Doucet, Suzanne P; Neuss, Deirdre; Bernstein, Anita; Na, Eunjung

    2016-01-01

    Permanent hearing loss affects 1 to 3 per 1000 children and interferes with typical communication development. Early detection through newborn hearing screening and hearing technology provide most children with the option of spoken language acquisition. However, no consensus exists on optimal interventions for spoken language development. To conduct a systematic review of the effectiveness of early sign and oral language intervention compared with oral language intervention only for children with permanent hearing loss. An a priori protocol was developed. Electronic databases (eg, Medline, Embase, CINAHL) from 1995 to June 2013 and gray literature sources were searched. Studies in English and French were included. Two reviewers screened potentially relevant articles. Outcomes of interest were measures of auditory, vocabulary, language, and speech production skills. All data collection and risk of bias assessments were completed and then verified by a second person. Grades of Recommendation, Assessment, Development, and Evaluation (GRADE) was used to judge the strength of evidence. Eleven cohort studies met inclusion criteria, of which 8 included only children with severe to profound hearing loss with cochlear implants. Language development was the most frequently reported outcome. Other reported outcomes included speech and speech perception. Several measures and metrics were reported across studies, and descriptions of interventions were sometimes unclear. Very limited, and hence insufficient, high-quality evidence exists to determine whether sign language in combination with oral language is more effective than oral language therapy alone. More research is needed to supplement the evidence base. Copyright © 2016 by the American Academy of Pediatrics.

  6. Spoken language interface for a network management system

    Science.gov (United States)

    Remington, Robert J.

    1999-11-01

    Leaders within the Information Technology (IT) industry are expressing a general concern that the products used to deliver and manage today's communications network capabilities require far too much effort to learn and to use, even by highly skilled and increasingly scarce support personnel. The usability of network management systems must be significantly improved if they are to deliver the performance and quality of service needed to meet the ever-increasing demand for new Internet-based information and services. Fortunately, recent advances in spoken language (SL) interface technologies show promise for significantly improving the usability of most interactive IT applications, including network management systems. The emerging SL interfaces will allow users to communicate with IT applications through words and phrases -- our most familiar form of everyday communication. Recent advancements in SL technologies have resulted in new commercial products that are being operationally deployed at an increasing rate. The present paper describes a project aimed at the application of new SL interface technology for improving the usability of an advanced network management system. It describes several SL interface features that are being incorporated within an existing system with a modern graphical user interface (GUI), including 3-D visualization of network topology and network performance data. The rationale for using these SL interface features to augment existing user interfaces is presented, along with selected task scenarios to provide insight into how a SL interface will simplify the operator's task and enhance overall system usability.

  7. Semantic Fluency in Deaf Children Who Use Spoken and Signed Language in Comparison with Hearing Peers

    Science.gov (United States)

    Marshall, C. R.; Jones, A.; Fastelli, A.; Atkinson, J.; Botting, N.; Morgan, G.

    2018-01-01

    Background: Deafness has an adverse impact on children's ability to acquire spoken languages. Signed languages offer a more accessible input for deaf children, but because the vast majority are born to hearing parents who do not sign, their early exposure to sign language is limited. Deaf children as a whole are therefore at high risk of language…

  8. Spoken Word Recognition in Adolescents with Autism Spectrum Disorders and Specific Language Impairment

    Science.gov (United States)

    Loucas, Tom; Riches, Nick; Baird, Gillian; Pickles, Andrew; Simonoff, Emily; Chandler, Susie; Charman, Tony

    2013-01-01

    Spoken word recognition, during gating, appears intact in specific language impairment (SLI). This study used gating to investigate the process in adolescents with autism spectrum disorders plus language impairment (ALI). Adolescents with ALI, SLI, and typical language development (TLD), matched on nonverbal IQ listened to gated words that varied…

  9. Improving Spoken Language Outcomes for Children With Hearing Loss: Data-driven Instruction.

    Science.gov (United States)

    Douglas, Michael

    2016-02-01

    To assess the effects of data-driven instruction (DDI) on spoken language outcomes of children with cochlear implants and hearing aids. Retrospective, matched-pairs comparison of post-treatment speech/language data of children who did and did not receive DDI. Private, spoken-language preschool for children with hearing loss. Eleven matched pairs of children with cochlear implants who attended the same spoken language preschool. Groups were matched for age of hearing device fitting, time in the program, degree of predevice fitting hearing loss, sex, and age at testing. Daily informal language samples were collected and analyzed over a 2-year period, per preschool protocol. Annual informal and formal spoken language assessments in articulation, vocabulary, and omnibus language were administered at the end of three time intervals: baseline, end of year one, and end of year two. The primary outcome measures were total raw score performance of spontaneous utterance sentence types and syntax element use as measured by the Teacher Assessment of Spoken Language (TASL). In addition, standardized assessments (the Clinical Evaluation of Language Fundamentals--Preschool Version 2 (CELF-P2), the Expressive One-Word Picture Vocabulary Test (EOWPVT), the Receptive One-Word Picture Vocabulary Test (ROWPVT), and the Goldman-Fristoe Test of Articulation 2 (GFTA2)) were also administered and compared with the control group. The DDI group demonstrated significantly higher raw scores on the TASL each year of the study. The DDI group also achieved statistically significantly higher scores for total language on the CELF-P2 and expressive vocabulary on the EOWPVT, but not for articulation or receptive vocabulary. Post-hoc assessment revealed that 78% of the students in the DDI group achieved scores in the average range compared with 59% in the control group. The preliminary results of this study support further investigation of whether DDI can consistently...

  10. Neural stages of spoken, written, and signed word processing in beginning second language learners.

    Science.gov (United States)

    Leonard, Matthew K; Ferjan Ramirez, Naja; Torres, Christina; Hatrak, Marla; Mayberry, Rachel I; Halgren, Eric

    2013-01-01

    We combined magnetoencephalography (MEG) and magnetic resonance imaging (MRI) to examine how sensory modality, language type, and language proficiency interact during two fundamental stages of word processing: (1) an early word encoding stage, and (2) a later supramodal lexico-semantic stage. Adult native English speakers who were learning American Sign Language (ASL) performed a semantic task for spoken and written English words, and ASL signs. During the early time window, written words evoked responses in left ventral occipitotemporal cortex, and spoken words in left superior temporal cortex. Signed words evoked activity in right intraparietal sulcus that was marginally greater than for written words. During the later time window, all three types of words showed significant activity in the classical left fronto-temporal language network, the first demonstration of such activity in individuals with so little second language (L2) instruction in sign. In addition, a dissociation between semantic congruity effects and overall MEG response magnitude for ASL responses suggested shallower and more effortful processing, presumably reflecting novice L2 learning. Consistent with previous research on non-dominant language processing in spoken languages, the L2 ASL learners also showed recruitment of right hemisphere and lateral occipital cortex. These results demonstrate that late lexico-semantic processing utilizes a common substrate, independent of modality, and that proficiency effects in sign language are comparable to those in spoken language.

  11. Gesture in Multiparty Interaction: A Study of Embodied Discourse in Spoken English and American Sign Language

    Science.gov (United States)

    Shaw, Emily P.

    2013-01-01

    This dissertation is an examination of gesture in two game nights: one in spoken English between four hearing friends and another in American Sign Language between four Deaf friends. Analyses of gesture have shown there exists a complex integration of manual gestures with speech. Analyses of sign language have implicated the body as a medium…

  12. Auditory and verbal memory predictors of spoken language skills in children with cochlear implants.

    Science.gov (United States)

    de Hoog, Brigitte E; Langereis, Margreet C; van Weerdenburg, Marjolijn; Keuning, Jos; Knoors, Harry; Verhoeven, Ludo

    2016-10-01

    Large variability in individual spoken language outcomes remains a persistent finding in the group of children with cochlear implants (CIs), particularly in their grammatical development. In the present study, we examined the extent of delay in lexical and morphosyntactic spoken language levels of children with CIs as compared to those of a normative sample of age-matched children with normal hearing. Furthermore, the predictive value of auditory and verbal memory factors in the spoken language performance of implanted children was analyzed. Thirty-nine profoundly deaf children with CIs were assessed using a test battery including measures of lexical, grammatical, auditory and verbal memory tests. Furthermore, child-related demographic characteristics were taken into account. The majority of the children with CIs did not reach age-equivalent lexical and morphosyntactic language skills. Multiple linear regression analyses revealed that lexical spoken language performance in children with CIs was best predicted by age at testing, phoneme perception, and auditory word closure. The morphosyntactic language outcomes of the CI group were best predicted by lexicon, auditory word closure, and auditory memory for words. Qualitatively good speech perception skills appear to be crucial for lexical and grammatical development in children with CIs. Furthermore, strongly developed vocabulary skills and verbal memory abilities predict morphosyntactic language skills. Copyright © 2016 Elsevier Ltd. All rights reserved.

  13. Delayed Anticipatory Spoken Language Processing in Adults with Dyslexia—Evidence from Eye-tracking.

    Science.gov (United States)

    Huettig, Falk; Brouwer, Susanne

    2015-05-01

    It is now well established that anticipation of upcoming input is a key characteristic of spoken language comprehension. It has also frequently been observed that literacy influences spoken language processing. Here, we investigated whether anticipatory spoken language processing is related to individuals' word reading abilities. Dutch adults with dyslexia and a control group participated in two eye-tracking experiments. Experiment 1 was conducted to assess whether adults with dyslexia show the typical language-mediated eye gaze patterns. Eye movements of both adults with and without dyslexia closely replicated earlier research: spoken language is used to direct attention to relevant objects in the environment in a closely time-locked manner. In Experiment 2, participants received instructions (e.g., 'Kijk naar de(COM) afgebeelde piano(COM)', look at the displayed piano) while viewing four objects. Articles (Dutch 'het' or 'de') were gender marked such that the article agreed in gender only with the target, and thus, participants could use gender information from the article to predict the target object. The adults with dyslexia anticipated the target objects but much later than the controls. Moreover, participants' word reading scores correlated positively with their anticipatory eye movements. We conclude by discussing the mechanisms by which reading abilities may influence predictive language processing. Copyright © 2015 John Wiley & Sons, Ltd.

  14. Intervention Effects on Spoken-Language Outcomes for Children with Autism: A Systematic Review and Meta-Analysis

    Science.gov (United States)

    Hampton, L. H.; Kaiser, A. P.

    2016-01-01

    Background: Although spoken-language deficits are not core to an autism spectrum disorder (ASD) diagnosis, many children with ASD do present with delays in this area. Previous meta-analyses have assessed the effects of intervention on reducing autism symptomatology, but have not determined if intervention improves spoken language. This analysis…

  15. What Comes First, What Comes Next: Information Packaging in Written and Spoken Language

    Directory of Open Access Journals (Sweden)

    Vladislav Smolka

    2017-07-01

    The paper explores similarities and differences in the strategies of structuring information at sentence level in spoken and written language, respectively. In particular, it is concerned with the position of the rheme in the sentence in the two different modalities of language, and with the application and correlation of the end-focus and the end-weight principles. The assumption is that while there is a general tendency in both written and spoken language to place the focus in or close to the final position, owing to the limitations imposed by short-term memory capacity (and possibly by other factors), for the sake of easy processibility, it may occasionally be more felicitous in spoken language to place the rhematic element in the initial position or at least close to the beginning of the sentence. The paper aims to identify differences in the function of selected grammatical structures in written and spoken language, respectively, and to point out circumstances under which initial focus is a convenient alternative to the usual end-focus principle.

  16. THE IMPLEMENTATION OF COMMUNICATIVE LANGUAGE TEACHING (CLT) TO TEACH SPOKEN RECOUNTS IN SENIOR HIGH SCHOOL

    Directory of Open Access Journals (Sweden)

    Eri Rusnawati

    2016-10-01

    The aim of this study was to describe the implementation of the Communicative Language Teaching (CLT) method for teaching spoken recounts. The study examined qualitative data and describes phenomena that occurred in the classroom. The data were the students' behaviour and responses during spoken recount lessons taught with the CLT method. The subjects were the 34 students of class X of SMA Negeri 1 Kuaro. Observations and interviews were conducted to collect data on teaching spoken recounts through three activities (presentation, role-play, and carrying out procedures). Among the findings was that CLT improved the students' speaking ability in recount lessons. Based on the improvement charts, it is concluded that the students' grammar, vocabulary, pronunciation, fluency, and performance all improved, meaning that their spoken recount performance improved. Had the presentation been placed at the end of the sequence of activities, the students' spoken recount performance would have been even better. The conclusion is that implementing the CLT method and its three practices contributed to improving the students' speaking ability in recount lessons, and even led them to construct meaningful communication with confidence. Keywords: Communicative Language Teaching (CLT), recount, speaking, student responses

  17. Expected Test Scores for Preschoolers with a Cochlear Implant Who Use Spoken Language

    Science.gov (United States)

    Nicholas, Johanna G.; Geers, Ann E.

    2008-01-01

    Purpose: The major purpose of this study was to provide information about expected spoken language skills of preschool-age children who are deaf and who use a cochlear implant. A goal was to provide "benchmarks" against which those skills could be compared, for a given age at implantation. We also examined whether parent-completed…

  18. Research on Spoken Language Processing. Progress Report No. 21 (1996-1997).

    Science.gov (United States)

    Pisoni, David B.

    This 21st annual progress report summarizes research activities on speech perception and spoken language processing carried out in the Speech Research Laboratory, Department of Psychology, Indiana University in Bloomington. As with previous reports, the goal is to summarize accomplishments during 1996 and 1997 and make them readily available. Some…

  19. Effects of Tasks on Spoken Interaction and Motivation in English Language Learners

    Science.gov (United States)

    Carrero Pérez, Nubia Patricia

    2016-01-01

    Task based learning (TBL) or Task based learning and teaching (TBLT) is a communicative approach widely applied in settings where English has been taught as a foreign language (EFL). It has been documented as greatly useful to improve learners' communication skills. This research intended to find the effect of tasks on students' spoken interaction…

  20. A Spoken-Language Intervention for School-Aged Boys with Fragile X Syndrome

    Science.gov (United States)

    McDuffie, Andrea; Machalicek, Wendy; Bullard, Lauren; Nelson, Sarah; Mello, Melissa; Tempero-Feigles, Robyn; Castignetti, Nancy; Abbeduto, Leonard

    2016-01-01

    Using a single case design, a parent-mediated spoken-language intervention was delivered to three mothers and their school-aged sons with fragile X syndrome, the leading inherited cause of intellectual disability. The intervention was embedded in the context of shared storytelling using wordless picture books and targeted three empirically derived…

  1. Developing and Testing EVALOE: A Tool for Assessing Spoken Language Teaching and Learning in the Classroom

    Science.gov (United States)

    Gràcia, Marta; Vega, Fàtima; Galván-Bovaira, Maria José

    2015-01-01

    Broadly speaking, the teaching of spoken language in Spanish schools has not been approached in a systematic way. Changes in school practices are needed in order to allow all children to become competent speakers and to understand and construct oral texts that are appropriate in different contexts and for different audiences both inside and…

  2. ORIGINAL ARTICLES How do doctors learn the spoken language of ...

    African Journals Online (AJOL)

    2009-07-01

    Jul 1, 2009 ... correct language that has been acquired through listening. The Brewsters [17] suggest an 'immersion experience' by living with speakers of the language. Ellis included several of their tools, such as loop tapes, as being useful in a consultation when learning a language [15]. Others disagree with a purely...

  3. Comparing spoken language treatments for minimally verbal preschoolers with autism spectrum disorders.

    Science.gov (United States)

    Paul, Rhea; Campbell, Daniel; Gilbert, Kimberly; Tsiouri, Ioanna

    2013-02-01

    Preschoolers with severe autism and minimal speech were assigned either a discrete trial or a naturalistic language treatment, and parents of all participants also received parent responsiveness training. After 12 weeks, both groups showed comparable improvement in number of spoken words produced, on average. Approximately half the children in each group achieved benchmarks for the first stage of functional spoken language development, as defined by Tager-Flusberg et al. (J Speech Lang Hear Res, 52: 643-652, 2009). Analyses of moderators of treatment suggest that joint attention moderates response to both treatments, and children with better receptive language pre-treatment do better with the naturalistic method, while those with lower receptive language show better response to the discrete trial treatment. The implications of these findings are discussed.

  4. Processing Relationships Between Language-Being-Spoken and Other Speech Dimensions in Monolingual and Bilingual Listeners.

    Science.gov (United States)

    Vaughn, Charlotte R; Bradlow, Ann R

    2017-12-01

    While indexical information is implicated in many levels of language processing, little is known about the internal structure of the system of indexical dimensions, particularly in bilinguals. A series of three experiments using the speeded classification paradigm investigated the relationship between various indexical and non-linguistic dimensions of speech in processing. Namely, we compared the relationship between a lesser-studied indexical dimension relevant to bilinguals, which language is being spoken (in these experiments, either Mandarin Chinese or English), with: talker identity (Experiment 1), talker gender (Experiment 2), and amplitude of speech (Experiment 3). Results demonstrate that language-being-spoken is integrated in processing with each of the other dimensions tested, and that these processing dependencies seem to be independent of listeners' bilingual status or experience with the languages tested. Moreover, the data reveal processing interference asymmetries, suggesting a processing hierarchy for indexical, non-linguistic speech features.

  5. Factors Influencing Verbal Intelligence and Spoken Language in Children with Phenylketonuria.

    Science.gov (United States)

    Soleymani, Zahra; Keramati, Nasrin; Rohani, Farzaneh; Jalaei, Shohre

    2015-05-01

    To determine the verbal intelligence and spoken language of children with phenylketonuria and to study the effect of age at diagnosis and phenylalanine plasma level on these abilities. The design was cross-sectional. Children with phenylketonuria were recruited from pediatric hospitals in 2012; normal control subjects were recruited from kindergartens in Tehran. Participants were 30 phenylketonuria and 42 control subjects aged 4-6.5 years. Skills were compared between three phenylketonuria groups categorized by age at diagnosis/treatment, and between the phenylketonuria and control groups. Outcome measures were scores on the Wechsler Preschool and Primary Scale of Intelligence for verbal and total intelligence, and on the Test of Language Development-Primary, third edition, for spoken language, listening, speaking, semantics, syntax, and organization. The performance of control subjects was significantly better than that of early-treated subjects for all composite quotients from the Test of Language Development and for verbal intelligence…

  6. Word Detection in Sung and Spoken Sentences in Children With Typical Language Development or With Specific Language Impairment.

    Science.gov (United States)

    Planchou, Clément; Clément, Sylvain; Béland, Renée; Cason, Nia; Motte, Jacques; Samson, Séverine

    2015-01-01

    Previous studies have reported that children score better in language tasks using sung rather than spoken stimuli. We examined word detection ease in sung and spoken sentences that were equated for phoneme duration and pitch variations in children aged 7 to 12 years with typical language development (TLD) as well as in children with specific language impairment (SLI), and hypothesized that the facilitation effect would vary with language abilities. In Experiment 1, 69 children with TLD (7-10 years old) detected words in sentences that were spoken, sung on pitches extracted from speech, and sung on original scores. In Experiment 2, we added a natural speech rate condition and tested 68 children with TLD (7-12 years old). In Experiment 3, 16 children with SLI and 16 age-matched children with TLD were tested in all four conditions. In both TLD groups, older children scored better than the younger ones. The matched TLD group scored higher than the SLI group, who scored at the level of the younger children with TLD. None of the experiments showed a facilitation effect of sung over spoken stimuli. Word detection abilities improved with age in both the TLD and SLI groups. Our findings are compatible with the hypothesis of delayed language abilities in children with SLI, and are discussed in light of the role of durational prosodic cues in word detection.

  7. Word Detection in Sung and Spoken Sentences in Children With Typical Language Development or With Specific Language Impairment

    Science.gov (United States)

    Planchou, Clément; Clément, Sylvain; Béland, Renée; Cason, Nia; Motte, Jacques; Samson, Séverine

    2015-01-01

    Background: Previous studies have reported that children score better in language tasks using sung rather than spoken stimuli. We examined word detection ease in sung and spoken sentences that were equated for phoneme duration and pitch variations in children aged 7 to 12 years with typical language development (TLD) as well as in children with specific language impairment (SLI), and hypothesized that the facilitation effect would vary with language abilities. Method: In Experiment 1, 69 children with TLD (7–10 years old) detected words in sentences that were spoken, sung on pitches extracted from speech, and sung on original scores. In Experiment 2, we added a natural speech rate condition and tested 68 children with TLD (7–12 years old). In Experiment 3, 16 children with SLI and 16 age-matched children with TLD were tested in all four conditions. Results: In both TLD groups, older children scored better than the younger ones. The matched TLD group scored higher than the SLI group, who scored at the level of the younger children with TLD. None of the experiments showed a facilitation effect of sung over spoken stimuli. Conclusions: Word detection abilities improved with age in both the TLD and SLI groups. Our findings are compatible with the hypothesis of delayed language abilities in children with SLI, and are discussed in light of the role of durational prosodic cues in word detection. PMID:26767070

  8. The effects of sign language on spoken language acquisition in children with hearing loss: a systematic review protocol.

    Science.gov (United States)

    Fitzpatrick, Elizabeth M; Stevens, Adrienne; Garritty, Chantelle; Moher, David

    2013-12-06

    Permanent childhood hearing loss affects 1 to 3 per 1000 children and frequently disrupts typical spoken language acquisition. Early identification of hearing loss through universal newborn hearing screening and the use of new hearing technologies including cochlear implants make spoken language an option for most children. However, there is no consensus on what constitutes optimal interventions for children when spoken language is the desired outcome. Intervention and educational approaches ranging from oral language only to oral language combined with various forms of sign language have evolved. Parents are therefore faced with important decisions in the first months of their child's life. This article presents the protocol for a systematic review of the effects of using sign language in combination with oral language intervention on spoken language acquisition. Studies addressing early intervention will be selected in which therapy involving oral language intervention and any form of sign language or sign support is used. Comparison groups will include children in early oral language intervention programs without sign support. The primary outcomes of interest to be examined include all measures of auditory, vocabulary, language, speech production, and speech intelligibility skills. We will include randomized controlled trials, controlled clinical trials, and other quasi-experimental designs that include comparator groups as well as prospective and retrospective cohort studies. Case-control, cross-sectional, case series, and case studies will be excluded. Several electronic databases will be searched (for example, MEDLINE, EMBASE, CINAHL, PsycINFO) as well as grey literature and key websites. We anticipate that a narrative synthesis of the evidence will be required. We will carry out meta-analysis for outcomes if clinical similarity, quantity and quality permit quantitative pooling of data. We will conduct subgroup analyses if possible according to severity

  9. Predictors of spoken language development following pediatric cochlear implantation.

    Science.gov (United States)

    Boons, Tinne; Brokx, Jan P L; Dhooge, Ingeborg; Frijns, Johan H M; Peeraer, Louis; Vermeulen, Anneke; Wouters, Jan; van Wieringen, Astrid

    2012-01-01

    Although deaf children with cochlear implants (CIs) are able to develop good language skills, the large variability in outcomes remains a significant concern. The first aim of this study was to evaluate language skills in children with CIs to establish benchmarks. The second aim was to make an estimation of the optimal age at implantation to provide maximal opportunities for the child to achieve good language skills afterward. The third aim was to gain more insight into the causes of variability to set recommendations for optimizing the rehabilitation process of prelingually deaf children with CIs. Receptive and expressive language development of 288 children who received CIs by age five was analyzed in a retrospective multicenter study. Outcome measures were language quotients (LQs) on the Reynell Developmental Language Scales and Schlichting Expressive Language Test at 1, 2, and 3 years after implantation. Independent predictive variables were nine child-related, environmental, and auditory factors. A series of multiple regression analyses determined the amount of variance in expressive and receptive language outcomes attributable to each predictor when controlling for the other variables. Simple linear regressions with age at first fitting and independent samples t tests demonstrated that children implanted before the age of two performed significantly better on all tests than children who were implanted at an older age. The mean LQ was 0.78 with an SD of 0.18. A child with an LQ lower than 0.60 (= 0.78-0.18) within 3 years after implantation was labeled as a weak performer compared with other deaf children implanted before the age of two. Contralateral stimulation with a second CI or a hearing aid and the absence of additional disabilities were related to better language outcomes. The effect of environmental factors, comprising multilingualism, parental involvement, and communication mode increased over time. Three years after implantation, the total multiple

  10. Predictors of Spoken Language Development Following Pediatric Cochlear Implantation

    NARCIS (Netherlands)

    Tinne Boons; Jan Brokx; Ingeborg Dhooge; Johan Frijns; Louis Peeraer; Anneke Vermeulen; Jan Wouters; Astrid van Wieringen

    2012-01-01

    Objectives: Although deaf children with cochlear implants (CIs) are able to develop good language skills, the large variability in outcomes remains a significant concern. The first aim of this study was to evaluate language skills in children with CIs to establish benchmarks. The second aim was to

  11. Thermal hydraulics development for CASL

    Energy Technology Data Exchange (ETDEWEB)

    Lowrie, Robert B [Los Alamos National Laboratory

    2010-12-07

    This talk will describe the technical direction of the Thermal-Hydraulics (T-H) Project within the Consortium for Advanced Simulation of Light Water Reactors (CASL) Department of Energy Innovation Hub. CASL is focused on developing a 'virtual reactor', that will simulate the physical processes that occur within a light-water reactor. These simulations will address several challenge problems, defined by laboratory, university, and industrial partners that make up CASL. CASL's T-H efforts are encompassed in two sub-projects: (1) Computational Fluid Dynamics (CFD), (2) Interface Treatment Methods (ITM). The CFD subproject will develop non-proprietary, scalable, verified and validated macroscale CFD simulation tools. These tools typically require closures for their turbulence and boiling models, which will be provided by the ITM sub-project, via experiments and microscale (such as DNS) simulation results. The near-term milestones and longer term plans of these two sub-projects will be discussed.

  12. Brain basis of phonological awareness for spoken language in children and its disruption in dyslexia.

    Science.gov (United States)

    Kovelman, Ioulia; Norton, Elizabeth S; Christodoulou, Joanna A; Gaab, Nadine; Lieberman, Daniel A; Triantafyllou, Christina; Wolf, Maryanne; Whitfield-Gabrieli, Susan; Gabrieli, John D E

    2012-04-01

    Phonological awareness, knowledge that speech is composed of syllables and phonemes, is critical for learning to read. Phonological awareness precedes and predicts successful transition from language to literacy, and weakness in phonological awareness is a leading cause of dyslexia, but the brain basis of phonological awareness for spoken language in children is unknown. We used functional magnetic resonance imaging to identify the neural correlates of phonological awareness using an auditory word-rhyming task in children who were typical readers or who had dyslexia (ages 7-13) and a younger group of kindergarteners (ages 5-6). Typically developing children, but not children with dyslexia, recruited left dorsolateral prefrontal cortex (DLPFC) when making explicit phonological judgments. Kindergarteners, who were matched to the older children with dyslexia on standardized tests of phonological awareness, also recruited left DLPFC. Left DLPFC may play a critical role in the development of phonological awareness for spoken language critical for reading and in the etiology of dyslexia.

  13. How sensory-motor systems impact the neural organization for language: direct contrasts between spoken and signed language

    Science.gov (United States)

    Emmorey, Karen; McCullough, Stephen; Mehta, Sonya; Grabowski, Thomas J.

    2014-01-01

    To investigate the impact of sensory-motor systems on the neural organization for language, we conducted an H2(15)O PET study of sign and spoken word production (picture-naming) and an fMRI study of sign and audio-visual spoken language comprehension (detection of a semantically anomalous sentence) with hearing bilinguals who are native users of American Sign Language (ASL) and English. Directly contrasting speech and sign production revealed greater activation in bilateral parietal cortex for signing, while speaking resulted in greater activation in bilateral superior temporal cortex (STC) and right frontal cortex, likely reflecting auditory feedback control. Surprisingly, the language production contrast revealed a relative increase in activation in bilateral occipital cortex for speaking. We speculate that greater activation in visual cortex for speaking may actually reflect cortical attenuation when signing, which functions to distinguish self-produced from externally generated visual input. Directly contrasting speech and sign comprehension revealed greater activation in bilateral STC for speech and greater activation in bilateral occipital-temporal cortex for sign. Sign comprehension, like sign production, engaged bilateral parietal cortex to a greater extent than spoken language. We hypothesize that posterior parietal activation in part reflects processing related to spatial classifier constructions in ASL and that anterior parietal activation may reflect covert imitation that functions as a predictive model during sign comprehension. The conjunction analysis for comprehension revealed that both speech and sign bilaterally engaged the inferior frontal gyrus (with more extensive activation on the left) and the superior temporal sulcus, suggesting an invariant bilateral perisylvian language system. We conclude that surface-level differences between sign and spoken languages should not be dismissed and are critical for understanding the neurobiology of language.

  14. CASL Validation Data: An Initial Review

    Energy Technology Data Exchange (ETDEWEB)

    Nam Dinh

    2011-01-01

    The study aims to establish a comprehensive view of the “data” needed to support implementation of the Consortium for Advanced Simulation of Light Water Reactors (CASL). Insights from this review (and its continual refinement), together with other elements developed in CASL, should provide the foundation for the CASL Validation Data Plan (VDP). The VDP is instrumental to the development and assessment of CASL simulation tools as a predictive capability. Most importantly, to be useful for CASL, the VDP must be devised (and agreed upon by all participating stakeholders) with appropriate account taken of the nature of nuclear engineering applications; the availability, types, and quality of CASL-related data; and the novelty of CASL's goals and its approach to the selected challenge problems. The initial review (summarized in the January 2011 version of this report) discusses a broad range of methodological issues in data review and in the Validation Data Plan. Such a top-down emphasis in the data review is both needed to see the big picture on CASL data and appropriate while the actual data are not yet available for detailed scrutiny. As the data become available later in 2011, the review should be revised and regularly updated. It is expected that the basic framework laid out in this report will help streamline the CASL data review in the way most pertinent to the CASL VDP.

  15. Enriching English Language Spoken Outputs of Kindergartners in Thailand

    Science.gov (United States)

    Wilang, Jeffrey Dawala; Sinwongsuwat, Kemtong

    2012-01-01

    This year is designated as Thailand's "English Speaking Year" with the aim of improving the communicative competence of Thais for the upcoming integration of the Association of Southeast Asian Nations (ASEAN) in 2015. The consistent low-level proficiency of the Thais in the English language has led to numerous curriculum revisions and…

  16. Loops of Spoken Language in Danish Broadcasting Corporation News

    DEFF Research Database (Denmark)

    le Fevre Jakobsen, Bjarne

    2012-01-01

    with well-edited material, in 1965, to an anchor who hands over to journalists in live feeds from all over the world via satellite, Skype, or mobile telephone, in 2011. The narrative rhythm is faster and sometimes more spontaneous. In this article we will discuss aspects of the use of language and the tempo...

  17. Subsorted Partial Higher-order Logic as an Extension of CASL

    DEFF Research Database (Denmark)

    Haxthausen, Anne; Mossakowski, Till; Krieg-Bruckner, Bernd

    1999-01-01

    CASL is a specification language combining first-order logic, partiality and subsorting. However, in many applications, first-order logic does not suffice. Consider e.g. the specification of functional programs, which often gain their conciseness through the use of higher-order functions… …the need to get a faithful embedding of first-order CASL into higher-order CASL. Finally, it is discussed how a proof calculus for the proposed logic can be developed.

  18. Subsorted Partial Higher-order Logic as an extension of CASL

    DEFF Research Database (Denmark)

    Mossakowski, T.; Haxthausen, Anne Elisabeth; Krieg-Brückner, B.

    2000-01-01

    CASL is a specification language combining first-order logic, partiality and subsorting. However, in many applications, first-order logic does not suffice. Consider e.g. the specification of functional programs, which often gain their conciseness through the use of higher-order functions… …the need to get a faithful embedding of first-order CASL into higher-order CASL. Finally, it is discussed how a proof calculus for the proposed logic can be developed.

  19. Three-dimensional grammar in the brain: Dissociating the neural correlates of natural sign language and manually coded spoken language.

    Science.gov (United States)

    Jednoróg, Katarzyna; Bola, Łukasz; Mostowski, Piotr; Szwed, Marcin; Boguszewski, Paweł M; Marchewka, Artur; Rutkowski, Paweł

    2015-05-01

    In several countries natural sign languages were considered inadequate for education. Instead, new sign-supported systems were created, based on the belief that spoken/written language is grammatically superior. One such system, called SJM (system językowo-migowy), preserves the grammatical and lexical structure of spoken Polish and since the 1960s has been extensively employed in schools and on TV. Nevertheless, the Deaf community avoids using SJM for everyday communication, its preferred language being PJM (polski język migowy), a natural sign language, structurally and grammatically independent of spoken Polish and featuring classifier constructions (CCs). Here, for the first time, we use fMRI to compare the neural bases of natural vs. devised communication systems. Deaf signers were presented with three types of signed sentences (SJM and PJM with/without CCs). Consistent with previous findings, PJM with CCs, compared to either SJM or PJM without CCs, recruited the parietal lobes. The reverse comparison revealed activation in the anterior temporal lobes, suggesting increased semantic combinatory processes in lexical sign comprehension. Finally, PJM compared with SJM engaged the left posterior superior temporal gyrus and anterior temporal lobe, areas crucial for sentence-level speech comprehension. We suggest that activity in these two areas reflects greater processing efficiency for naturally evolved sign language.

  20. Understanding the Relationship between Latino Students' Preferred Learning Styles and Their Language Spoken at Home

    Science.gov (United States)

    Maldonado Torres, Sonia Enid

    2016-01-01

    The purpose of this study was to explore the relationships between Latino students' learning styles and their language spoken at home. Results of the study indicated that students who spoke Spanish at home had higher means in the Active Experimentation modality of learning (M = 31.38, SD = 5.70) than students who spoke English (M = 28.08,…

  1. Medical practices display power law behaviors similar to spoken languages.

    Science.gov (United States)

    Paladino, Jonathan D; Crooke, Philip S; Brackney, Christopher R; Kaynar, A Murat; Hotchkiss, John R

    2013-09-04

    Medical care commonly involves the apprehension of complex patterns of patient derangements to which the practitioner responds with patterns of interventions, as opposed to single therapeutic maneuvers. This complexity renders the objective assessment of practice patterns using conventional statistical approaches difficult. Combinatorial approaches drawn from symbolic dynamics are used to encode the observed patterns of patient derangement and associated practitioner response patterns as sequences of symbols. Concatenating each patient derangement symbol with the contemporaneous practitioner response symbol creates "words" encoding the simultaneous patient derangement and provider response patterns and yields an observed vocabulary with quantifiable statistical characteristics. A fundamental observation in many natural languages is the existence of a power law relationship between the rank order of word usage and the absolute frequency with which particular words are uttered. We show that population-level patterns of patient-derangement/practitioner-intervention word usage in two entirely unrelated domains of medical care display power law relationships similar to those of natural languages, and that, in one of these domains, power law behavior at the population level reflects power law behavior at the level of individual practitioners. Our results suggest that patterns of medical care can be approached using quantitative linguistic techniques, a finding that has implications for the assessment of expertise, machine learning identification of optimal practices, and construction of bedside decision support tools.
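
    To make the rank-frequency analysis concrete, the following minimal Python sketch carries out the computation the abstract describes: count the frequency of each derangement:response "word", rank the words by frequency, and estimate the power-law exponent as the slope of the rank-frequency curve in log-log space. The derangement and response symbols below are invented placeholders, not the study's actual coding scheme.

        # Rank-frequency ("Zipf") analysis of derangement:response "words".
        from collections import Counter
        import numpy as np

        # Each observation pairs a patient-derangement symbol with the
        # practitioner-response symbol recorded at the same time (toy data).
        observations = [("hypoxia", "raise_FiO2"), ("hypoxia", "raise_PEEP"),
                        ("hypoxia", "raise_FiO2"), ("acidosis", "raise_rate"),
                        ("hypoxia", "raise_FiO2"), ("acidosis", "raise_rate"),
                        ("hypotension", "give_fluids")]

        # Concatenate each pair into a "word" and count usage frequencies.
        words = Counter(f"{state}:{action}" for state, action in observations)
        freqs = np.array(sorted(words.values(), reverse=True), dtype=float)
        ranks = np.arange(1, len(freqs) + 1)

        # A power law f ~ r^(-alpha) is a straight line in log-log space,
        # so the fitted slope estimates -alpha.
        slope, _ = np.polyfit(np.log(ranks), np.log(freqs), 1)
        print(f"estimated power-law exponent: {-slope:.2f}")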

  2. Spoken Language Activation Alters Subsequent Sign Language Activation in L2 Learners of American Sign Language

    Science.gov (United States)

    Williams, Joshua T.; Newman, Sharlene D.

    2017-01-01

    A large body of literature has characterized unimodal monolingual and bilingual lexicons and how neighborhood density affects lexical access; however there have been relatively fewer studies that generalize these findings to bimodal (M2) second language (L2) learners of sign languages. The goal of the current study was to investigate parallel…

  3. The Attitudes and Motivation of Children towards Learning Rarely Spoken Foreign Languages: A Case Study from Saudi Arabia

    Science.gov (United States)

    Al-Nofaie, Haifa

    2018-01-01

    This article discusses the attitudes and motivations of two Saudi children learning Japanese as a foreign language (henceforth JFL), a language which is rarely spoken in the country. Studies of children's motivation for learning, in informal settings, foreign languages that are not widespread in their context are scarce. The aim of the study…

  4. The Beneficial Role of L1 Spoken Language Skills on Initial L2 Sign Language Learning: Cognitive and Linguistic Predictors of M2L2 Acquisition

    Science.gov (United States)

    Williams, Joshua T.; Darcy, Isabelle; Newman, Sharlene D.

    2017-01-01

    Understanding how language modality (i.e., signed vs. spoken) affects second language outcomes in hearing adults is important both theoretically and pedagogically, as it can determine the specificity of second language (L2) theory and inform how best to teach a language that uses a new modality. The present study investigated which…

  5. Machine Translation Projects for Portuguese at INESC ID's Spoken Language Systems Laboratory

    Directory of Open Access Journals (Sweden)

    Anabela Barreiro

    2014-12-01

    Language technologies, in particular machine translation applications, have the potential to help break down linguistic and cultural barriers, presenting an important contribution to the globalization and internationalization of the Portuguese language by allowing content to be shared 'from' and 'to' this language. This article presents the research work developed at the Laboratory of Spoken Language Systems of INESC-ID in the field of machine translation, namely automated speech translation, the translation of microblogs, and the creation of a hybrid machine translation system. We will focus on the creation of the hybrid system, which aims at combining linguistic knowledge, in particular semantico-syntactic knowledge, with statistical knowledge to increase the level of translation quality.

  6. Spoken language development in oral preschool children with permanent childhood deafness.

    Science.gov (United States)

    Sarant, Julia Z; Holt, Colleen M; Dowell, Richard C; Rickards, Field W; Blamey, Peter J

    2009-01-01

    This article documented spoken language outcomes for preschool children with hearing loss and examined the relationships between language abilities and characteristics of children such as degree of hearing loss, cognitive abilities, age at entry to early intervention, and parent involvement in children's intervention programs. Participants were evaluated using a combination of the Child Development Inventory, the Peabody Picture Vocabulary Test, and the Preschool Clinical Evaluation of Language Fundamentals depending on their age at the time of assessment. Maternal education, cognitive ability, and family involvement were also measured. Over half of the children who participated in this study had poor language outcomes overall. No significant differences were found in language outcomes on any of the measures between children who were diagnosed early and those diagnosed later. Multiple regression analyses showed that family participation, degree of hearing loss, and cognitive ability significantly predicted language outcomes and together accounted for almost 60% of the variance in scores. This article highlights the importance of family participation in intervention programs to enable children to achieve optimal language outcomes. Further work may clarify the effects of early diagnosis on language outcomes for preschool children.

  7. Symbolic gestures and spoken language are processed by a common neural system.

    Science.gov (United States)

    Xu, Jiang; Gannon, Patrick J; Emmorey, Karen; Smith, Jason F; Braun, Allen R

    2009-12-08

    Symbolic gestures, such as pantomimes that signify actions (e.g., threading a needle) or emblems that facilitate social transactions (e.g., finger to lips indicating "be quiet"), play an important role in human communication. They are autonomous, can fully take the place of words, and function as complete utterances in their own right. The relationship between these gestures and spoken language remains unclear. We used functional MRI to investigate whether these two forms of communication are processed by the same system in the human brain. Responses to symbolic gestures, to their spoken glosses (expressing the gestures' meaning in English), and to visually and acoustically matched control stimuli were compared in a randomized block design. General Linear Models (GLM) contrasts identified shared and unique activations and functional connectivity analyses delineated regional interactions associated with each condition. Results support a model in which bilateral modality-specific areas in superior and inferior temporal cortices extract salient features from vocal-auditory and gestural-visual stimuli respectively. However, both classes of stimuli activate a common, left-lateralized network of inferior frontal and posterior temporal regions in which symbolic gestures and spoken words may be mapped onto common, corresponding conceptual representations. We suggest that these anterior and posterior perisylvian areas, identified since the mid-19th century as the core of the brain's language system, are not in fact committed to language processing, but may function as a modality-independent semiotic system that plays a broader role in human communication, linking meaning with symbols whether these are words, gestures, images, sounds, or objects.

  8. Brain Basis of Phonological Awareness for Spoken Language in Children and Its Disruption in Dyslexia

    Science.gov (United States)

    Norton, Elizabeth S.; Christodoulou, Joanna A.; Gaab, Nadine; Lieberman, Daniel A.; Triantafyllou, Christina; Wolf, Maryanne; Whitfield-Gabrieli, Susan; Gabrieli, John D. E.

    2012-01-01

    Phonological awareness, knowledge that speech is composed of syllables and phonemes, is critical for learning to read. Phonological awareness precedes and predicts successful transition from language to literacy, and weakness in phonological awareness is a leading cause of dyslexia, but the brain basis of phonological awareness for spoken language in children is unknown. We used functional magnetic resonance imaging to identify the neural correlates of phonological awareness using an auditory word-rhyming task in children who were typical readers or who had dyslexia (ages 7–13) and a younger group of kindergarteners (ages 5–6). Typically developing children, but not children with dyslexia, recruited left dorsolateral prefrontal cortex (DLPFC) when making explicit phonological judgments. Kindergarteners, who were matched to the older children with dyslexia on standardized tests of phonological awareness, also recruited left DLPFC. Left DLPFC may play a critical role in the development of phonological awareness for spoken language critical for reading and in the etiology of dyslexia. PMID:21693783

  9. Positive Emotional Language in the Final Words Spoken Directly Before Execution.

    Science.gov (United States)

    Hirschmüller, Sarah; Egloff, Boris

    2015-01-01

    How do individuals emotionally cope with the imminent real-world salience of mortality? DeWall and Baumeister as well as Kashdan and colleagues previously provided support that an increased use of positive emotion words serves as a way to protect and defend against mortality salience of one's own contemplated death. Although these studies provide important insights into the psychological dynamics of mortality salience, it remains an open question how individuals cope with the immense threat of mortality prior to their imminent actual death. In the present research, we therefore analyzed positivity in the final words spoken immediately before execution by 407 death row inmates in Texas. By using computerized quantitative text analysis as an objective measure of emotional language use, our results showed that the final words contained a significantly higher proportion of positive than negative emotion words. This emotional positivity was significantly higher than (a) positive emotion word usage base rates in spoken and written materials and (b) positive emotional language use with regard to contemplated death and attempted or actual suicide. Additional analyses showed that emotional positivity in final statements was associated with a greater frequency of language use that was indicative of self-references, social orientation, and present-oriented time focus as well as with fewer instances of cognitive-processing, past-oriented, and death-related word use. Taken together, our findings offer new insights into how individuals cope with the imminent real-world salience of mortality.
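
    As an illustration of the kind of computerized, category-based word counting the study relies on, here is a toy Python sketch; the two mini-lexicons are invented stand-ins for the validated emotion dictionaries the authors actually used.

        # Toy LIWC-style counting of positive and negative emotion words.
        import re

        POSITIVE = {"love", "peace", "thank", "hope", "joy"}    # placeholder lexicon
        NEGATIVE = {"hate", "fear", "pain", "sorry", "guilt"}   # placeholder lexicon

        def emotion_proportions(statement):
            """Return the share of positive and negative emotion words."""
            tokens = re.findall(r"[a-z']+", statement.lower())
            pos = sum(t in POSITIVE for t in tokens)
            neg = sum(t in NEGATIVE for t in tokens)
            total = len(tokens) or 1
            return pos / total, neg / total

        pos, neg = emotion_proportions(
            "I love you all. I hope for peace, and I thank my family.")
        print(f"positive: {pos:.1%}, negative: {neg:.1%}")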

  10. Positive Emotional Language in the Final Words Spoken Directly Before Execution

    Directory of Open Access Journals (Sweden)

    Sarah Hirschmüller

    2016-01-01

    How do individuals emotionally cope with the imminent real-world salience of mortality? DeWall and Baumeister as well as Kashdan and colleagues previously provided support that an increased use of positive emotion words serves as a way to protect and defend against mortality salience of one's own contemplated death. Although these studies provide important insights into the psychological dynamics of mortality salience, it remains an open question how individuals cope with the immense threat of mortality prior to their imminent actual death. In the present research, we therefore analyzed positivity in the final words spoken immediately before execution by 407 death row inmates in Texas. By using computerized quantitative text analysis as an objective measure of emotional language use, our results showed that the final words contained a significantly higher proportion of positive than negative emotion words. This emotional positivity was significantly higher than (a) positive emotion word usage base rates in spoken and written materials and (b) positive emotional language use with regard to contemplated death and attempted or actual suicide. Additional analyses showed that emotional positivity in final statements was associated with a greater frequency of language use that was indicative of self-references, social orientation, and present-oriented time focus as well as with fewer instances of cognitive-processing, past-oriented, and death-related word use. Taken together, our findings offer new insights into how individuals cope with the imminent real-world salience of mortality.

  11. Activating gender stereotypes during online spoken language processing: evidence from Visual World Eye Tracking.

    Science.gov (United States)

    Pyykkönen, Pirita; Hyönä, Jukka; van Gompel, Roger P G

    2010-01-01

    This study used the visual world eye-tracking method to investigate activation of general world knowledge related to gender-stereotypical role names in online spoken language comprehension in Finnish. The results showed that listeners activated gender stereotypes elaboratively in story contexts where this information was not needed to build coherence. Furthermore, listeners made additional inferences based on gender stereotypes to revise an already established coherence relation. Both results are consistent with mental models theory (e.g., Garnham, 2001). They are harder to explain by the minimalist account (McKoon & Ratcliff, 1992) which suggests that people limit inferences to those needed to establish coherence in discourse.

  12. Students who are deaf and hard of hearing and use sign language: considerations and strategies for developing spoken language and literacy skills.

    Science.gov (United States)

    Nussbaum, Debra; Waddy-Smith, Bettie; Doyle, Jane

    2012-11-01

    There is a core body of knowledge, experience, and skills integral to facilitating auditory, speech, and spoken language development when working with the general population of students who are deaf and hard of hearing. There are additional issues, strategies, and challenges inherent in speech habilitation/rehabilitation practices essential to the population of deaf and hard of hearing students who also use sign language. This article will highlight philosophical and practical considerations related to practices used to facilitate spoken language development and associated literacy skills for children and adolescents who sign. It will discuss considerations for planning and implementing practices that acknowledge and utilize a student's abilities in sign language, and address how to link these skills to developing and using spoken language. Included will be considerations for children from early childhood through high school with a broad range of auditory access, language, and communication characteristics.

  13. Auditory-verbal therapy for promoting spoken language development in children with permanent hearing impairments.

    Science.gov (United States)

    Brennan-Jones, Christopher G; White, Jo; Rush, Robert W; Law, James

    2014-03-12

    Congenital or early-acquired hearing impairment poses a major barrier to the development of spoken language and communication. Early detection and effective (re)habilitative interventions are essential for parents and families who wish their children to achieve age-appropriate spoken language. Auditory-verbal therapy (AVT) is a (re)habilitative approach aimed at children with hearing impairments. AVT comprises intensive early intervention therapy sessions with a focus on audition, technological management and involvement of the child's caregivers in therapy sessions; it is typically the only therapy approach used to specifically promote avoidance or exclusion of non-auditory facial communication. The primary goal of AVT is to achieve age-appropriate spoken language and for this to be used as the primary or sole method of communication. AVT programmes are expanding throughout the world; however, little evidence can be found on the effectiveness of the intervention. To assess the effectiveness of auditory-verbal therapy (AVT) in developing receptive and expressive spoken language in children who are hearing impaired. CENTRAL, MEDLINE, EMBASE, PsycINFO, CINAHL, speechBITE and eight other databases were searched in March 2013. We also searched two trials registers and three theses repositories, checked reference lists and contacted study authors to identify additional studies. The review considered prospective randomised controlled trials (RCTs) and quasi-randomised studies of children (birth to 18 years) with a significant (≥ 40 dBHL) permanent (congenital or early-acquired) hearing impairment, undergoing a programme of auditory-verbal therapy, administered by a certified auditory-verbal therapist for a period of at least six months. Comparison groups considered for inclusion were waiting list and treatment as usual controls. Two review authors independently assessed titles and abstracts identified from the searches and obtained full-text versions of all potentially

  14. A Spoken Language Intervention for School-Aged Boys with Fragile X Syndrome

    Science.gov (United States)

    McDuffie, Andrea; Machalicek, Wendy; Bullard, Lauren; Nelson, Sarah; Mello, Melissa; Tempero-Feigles, Robyn; Castignetti, Nancy; Abbeduto, Leonard

    2015-01-01

    Using a single case design, a parent-mediated spoken language intervention was delivered to three mothers and their school-aged sons with fragile X syndrome, the leading inherited cause of intellectual disability. The intervention was embedded in the context of shared storytelling using wordless picture books and targeted three empirically derived language support strategies. All sessions were implemented via distance video-teleconferencing. Parent education sessions were followed by 12 weekly clinician coaching and feedback sessions. Data were collected weekly during independent homework and clinician observation sessions. Relative to baseline, mothers increased their use of targeted strategies and dyads increased the frequency and duration of story-related talking. Generalized effects of the intervention on lexical diversity and grammatical complexity were observed. Implications for practice are discussed. PMID:27119214

  15. Resting-state low-frequency fluctuations reflect individual differences in spoken language learning.

    Science.gov (United States)

    Deng, Zhizhou; Chandrasekaran, Bharath; Wang, Suiping; Wong, Patrick C M

    2016-03-01

    A major challenge in language learning studies is to identify objective, pre-training predictors of success. Variation in the low-frequency fluctuations (LFFs) of spontaneous brain activity measured by resting-state functional magnetic resonance imaging (RS-fMRI) has been found to reflect individual differences in cognitive measures. In the present study, we aimed to investigate the extent to which initial spontaneous brain activity is related to individual differences in spoken language learning. We acquired RS-fMRI data and subsequently trained participants on a sound-to-word learning paradigm in which they learned to use foreign pitch patterns (from Mandarin Chinese) to signal word meaning. We performed amplitude of spontaneous low-frequency fluctuation (ALFF) analysis, graph theory-based analysis, and independent component analysis (ICA) to identify functional components of the LFFs in the resting-state. First, we examined the ALFF as a regional measure and showed that regional ALFFs in the left superior temporal gyrus were positively correlated with learning performance, whereas ALFFs in the default mode network (DMN) regions were negatively correlated with learning performance. Furthermore, the graph theory-based analysis indicated that the degree and local efficiency of the left superior temporal gyrus were positively correlated with learning performance. Finally, the default mode network and several task-positive resting-state networks (RSNs) were identified via the ICA. The "competition" (i.e., negative correlation) between the DMN and the dorsal attention network was negatively correlated with learning performance. Our results demonstrate that a) spontaneous brain activity can predict future language learning outcome without prior hypotheses (e.g., selection of regions of interest, ROIs) and b) both regional dynamics and network-level interactions in the resting brain can account for individual differences in future spoken language learning success.
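
    As a rough sketch of the graph-theoretic step, the Python fragment below builds a binary graph from a thresholded ROI-by-ROI correlation matrix and reads off the degree and local efficiency of a single region of interest; the simulated data, the 0.1 threshold, and the ROI index are all made up for illustration and do not reproduce the authors' pipeline.

        # Degree and local efficiency of one ROI in a thresholded network.
        import numpy as np
        import networkx as nx

        rng = np.random.default_rng(0)
        n_rois = 8
        timeseries = rng.standard_normal((n_rois, 200))  # fake RS-fMRI signals
        corr = np.corrcoef(timeseries)
        np.fill_diagonal(corr, 0)

        adjacency = (np.abs(corr) > 0.1).astype(int)     # threshold -> binary graph
        G = nx.from_numpy_array(adjacency)

        roi = 0  # hypothetical index of, e.g., left superior temporal gyrus
        degree = G.degree[roi]
        # Local efficiency of a node is the global efficiency of the subgraph
        # induced by its neighbours.
        local_eff = nx.global_efficiency(G.subgraph(list(G.neighbors(roi))))
        print(f"degree: {degree}, local efficiency: {local_eff:.3f}")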

  16. Resting-state low-frequency fluctuations reflect individual differences in spoken language learning

    Science.gov (United States)

    Deng, Zhizhou; Chandrasekaran, Bharath; Wang, Suiping; Wong, Patrick C.M.

    2016-01-01

    A major challenge in language learning studies is to identify objective, pre-training predictors of success. Variation in the low-frequency fluctuations (LFFs) of spontaneous brain activity measured by resting-state functional magnetic resonance imaging (RS-fMRI) has been found to reflect individual differences in cognitive measures. In the present study, we aimed to investigate the extent to which initial spontaneous brain activity is related to individual differences in spoken language learning. We acquired RS-fMRI data and subsequently trained participants on a sound-to-word learning paradigm in which they learned to use foreign pitch patterns (from Mandarin Chinese) to signal word meaning. We performed amplitude of spontaneous low-frequency fluctuation (ALFF) analysis, graph theory-based analysis, and independent component analysis (ICA) to identify functional components of the LFFs in the resting-state. First, we examined the ALFF as a regional measure and showed that regional ALFFs in the left superior temporal gyrus were positively correlated with learning performance, whereas ALFFs in the default mode network (DMN) regions were negatively correlated with learning performance. Furthermore, the graph theory-based analysis indicated that the degree and local efficiency of the left superior temporal gyrus were positively correlated with learning performance. Finally, the default mode network and several task-positive resting-state networks (RSNs) were identified via the ICA. The “competition” (i.e., negative correlation) between the DMN and the dorsal attention network was negatively correlated with learning performance. Our results demonstrate that a) spontaneous brain activity can predict future language learning outcome without prior hypotheses (e.g., selection of regions of interest – ROIs) and b) both regional dynamics and network-level interactions in the resting brain can account for individual differences in future spoken language learning success

  17. Spoken Lebanese.

    Science.gov (United States)

    Feghali, Maksoud N.

    This book teaches the Arabic Lebanese dialect through topics such as food, clothing, transportation, and leisure activities. It also provides background material on the Arab World in general and the region where Lebanese Arabic is spoken or understood--Lebanon, Syria, Jordan, Palestine--in particular. This language guide is based on the phonetic…

  18. The relation of the number of languages spoken to performance in different cognitive abilities in old age.

    Science.gov (United States)

    Ihle, Andreas; Oris, Michel; Fagot, Delphine; Kliegel, Matthias

    2016-12-01

    Findings on the association of speaking different languages with cognitive functioning in old age are inconsistent and inconclusive so far. Therefore, the present study set out to investigate the relation of the number of languages spoken to cognitive performance and its interplay with several other markers of cognitive reserve in a large sample of older adults. Two thousand eight hundred and twelve older adults served as the sample for the present study. Psychometric tests on verbal abilities, basic processing speed, and cognitive flexibility were administered. In addition, individuals were interviewed about the languages they spoke on a regular basis, their educational attainment, occupation, and engagement in different activities throughout adulthood. A higher number of languages regularly spoken was significantly associated with better performance in verbal abilities and processing speed, but was unrelated to cognitive flexibility. Regression analyses showed that the number of languages spoken predicted cognitive performance over and above leisure activities/physical demand of job/gainful activity as the respective additional predictor, but not over and above educational attainment/cognitive level of job as the respective additional predictor. There was no significant moderation of the association of the number of languages spoken with cognitive performance in any model. The present data suggest that speaking different languages on a regular basis may additionally contribute to the build-up of cognitive reserve in old age. Yet this may not be universal, but rather linked to verbal abilities and basic cognitive processing speed. Moreover, it may depend on the other types of cognitive stimulation that individuals also engaged in during their life course.

  19. Spoken language interaction with model uncertainty: an adaptive human-robot interaction system

    Science.gov (United States)

    Doshi, Finale; Roy, Nicholas

    2008-12-01

    Spoken language is one of the most intuitive forms of interaction between humans and agents. Unfortunately, agents that interact with people using natural language often experience communication errors and do not correctly understand the user's intentions. Recent systems have successfully used probabilistic models of speech, language and user behaviour to generate robust dialogue performance in the presence of noisy speech recognition and ambiguous language choices, but decisions made using these probabilistic models are still prone to errors owing to the complexity of acquiring and maintaining a complete model of human language and behaviour. In this paper, a decision-theoretic model for human-robot interaction using natural language is described. The algorithm is based on the Partially Observable Markov Decision Process (POMDP), which allows agents to choose actions that are robust not only to uncertainty from noisy or ambiguous speech recognition but also unknown user models. Like most dialogue systems, a POMDP is defined by a large number of parameters that may be difficult to specify a priori from domain knowledge, and learning these parameters from the user may require an unacceptably long training period. An extension to the POMDP model is described that allows the agent to acquire a linguistic model of the user online, including new vocabulary and word choice preferences. The approach not only avoids a training period of constant questioning as the agent learns, but also allows the agent actively to query for additional information when its uncertainty suggests a high risk of mistakes. The approach is demonstrated both in simulation and on a natural language interaction system for a robotic wheelchair application.
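
    The heart of such a dialogue manager is the Bayesian belief update over hidden user intents. The toy Python sketch below shows that single step, together with a confidence-gated clarifying question that mirrors the active-querying idea; the two-intent model, its observation probabilities, and the 0.9 threshold are invented for illustration and are not the authors' system.

        # One POMDP belief update for a noisy speech observation.
        import numpy as np

        states = ["wants_coffee", "wants_tea"]
        belief = np.array([0.5, 0.5])        # uniform prior over user intents

        # P(observation | state) for the ASR output "heard 'coffee'";
        # intents are assumed static, so the transition model is the identity.
        obs_likelihood = np.array([0.8, 0.3])

        belief = obs_likelihood * belief     # Bayes rule (unnormalised)
        belief /= belief.sum()               # renormalise
        print(dict(zip(states, belief.round(3))))

        # Act only when confident; otherwise ask a clarifying question.
        action = ("serve_" + states[int(belief.argmax())].split("_")[1]
                  if belief.max() > 0.9 else "ask_clarifying_question")
        print(action)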

  20. Influence of Spoken Language on the Initial Acquisition of Reading/Writing: Critical Analysis of Verbal Deficit Theory

    Science.gov (United States)

    Ramos-Sanchez, Jose Luis; Cuadrado-Gordillo, Isabel

    2004-01-01

    This article presents the results of a quasi-experimental study of whether there exists a causal relationship between spoken language and the initial learning of reading/writing. The subjects were two matched samples each of 24 preschool pupils (boys and girls), controlling for certain relevant external variables. It was found that there was no…

  1. Semantic Richness and Word Learning in Children with Hearing Loss Who Are Developing Spoken Language: A Single Case Design Study

    Science.gov (United States)

    Lund, Emily; Douglas, W. Michael; Schuele, C. Melanie

    2015-01-01

    Children with hearing loss who are developing spoken language tend to lag behind children with normal hearing in vocabulary knowledge. Thus, researchers must validate instructional practices that lead to improved vocabulary outcomes for children with hearing loss. The purpose of this study was to investigate how semantic richness of instruction…

  2. Semantic Relations Cause Interference in Spoken Language Comprehension When Using Repeated Definite References, Not Pronouns.

    Science.gov (United States)

    Peters, Sara A; Boiteau, Timothy W; Almor, Amit

    2016-01-01

    The choice and processing of referential expressions depend on the referents' status within the discourse, such that pronouns are generally preferred over full repetitive references when the referent is salient. Here we report two visual-world experiments showing that: (1) in spoken language comprehension, this preference is reflected in delayed fixations to referents mentioned after repeated definite references compared with after pronouns; (2) repeated references are processed differently than new references; (3) long-term semantic memory representations affect the processing of pronouns and repeated names differently. Overall, these results support the role of semantic discourse representation in referential processing and reveal important details about how pronouns and full repeated references are processed in the context of these representations. The results suggest the need for modifications to current theoretical accounts of reference processing such as Discourse Prominence Theory and the Informational Load Hypothesis.

  3. About Development and Innovation of the Slovak Spoken Language Dialogue System

    Directory of Open Access Journals (Sweden)

    Jozef Juhár

    2009-05-01

    The research and development of the Slovak spoken language dialogue system (SLDS) is described in the paper. The dialogue system is based on the DARPA Communicator architecture and was developed in the period from July 2003 to June 2006. It consists of the Galaxy hub and telephony, automatic speech recognition, text-to-speech, backend, transport and VoiceXML dialogue management and automatic evaluation modules. The dialogue system is demonstrated and tested via two pilot applications, "Weather Forecast" and "Public Transport Timetables". The required information is retrieved from Internet resources in multi-user mode through PSTN, ISDN, GSM and/or VoIP networks. Innovations introduced since 2006 are also described in the paper.

  4. Grammatical awareness in the spoken and written language of language-disabled children.

    Science.gov (United States)

    Rubin, H; Kantor, M; Macnab, J

    1990-12-01

    Experiments examined grammatical judgement and error-identification deficits in relation to expressive language skills and to morphemic errors in writing. Language-disabled subjects did not differ from language-matched controls on judgement, revision, or error identification. Age-matched controls represented more morphemes in elicited writing than either of the other groups, which were equivalent. However, in spontaneous writing, language-disabled subjects made more frequent morphemic errors than age-matched controls, but language-matched subjects did not differ from either group. Proficiency relative to academic experience and oral language status is discussed, along with the remedial implications.

  5. Foreign body aspiration and language spoken at home: 10-year review.

    Science.gov (United States)

    Choroomi, S; Curotta, J

    2011-07-01

    To review foreign body aspiration cases encountered over a 10-year period in a tertiary paediatric hospital, and to assess correlation between foreign body type and language spoken at home. Retrospective chart review of all children undergoing direct laryngobronchoscopy for foreign body aspiration over a 10-year period. Age, sex, foreign body type, complications, hospital stay and home language were analysed. At direct laryngobronchoscopy, 132 children had foreign body aspiration (male:female ratio 1.31:1; mean age 32 months (2.67 years)). Mean hospital stay was 2.0 days. Foreign bodies most commonly comprised food matter (53/132; 40.1 per cent), followed by non-food matter (44/132; 33.33 per cent), a negative endoscopy (11/132; 8.33 per cent) and unknown composition (24/132; 18.2 per cent). Most parents spoke English (92/132, 69.7 per cent; vs non-English-speaking 40/132, 30.3 per cent), but non-English-speaking patients had disproportionately more food foreign bodies, and significantly more nut aspirations (p = 0.0065). Results constitute level 2b evidence. Patients from non-English speaking backgrounds had a significantly higher incidence of food (particularly nut) aspiration. Awareness-raising and public education is needed in relevant communities to prevent certain foods, particularly nuts, being given to children too young to chew and swallow them adequately.

  6. A common neural system is activated in hearing non-signers to process French sign language and spoken French.

    Science.gov (United States)

    Courtin, Cyril; Jobard, Gael; Vigneau, Mathieu; Beaucousin, Virginie; Razafimandimby, Annick; Hervé, Pierre-Yves; Mellet, Emmanuel; Zago, Laure; Petit, Laurent; Mazoyer, Bernard; Tzourio-Mazoyer, Nathalie

    2011-01-15

    We used functional magnetic resonance imaging to investigate the areas activated by signed narratives in non-signing subjects naïve to sign language (SL) and compared the resulting activation to that obtained when the subjects heard speech in their mother tongue. A subset of left hemisphere (LH) language areas activated when participants watched an audio-visual narrative in their mother tongue was activated when they observed a signed narrative. The inferior frontal (IFG) and precentral (Prec) gyri, the posterior parts of the planum temporale (pPT) and of the superior temporal sulcus (pSTS), and the occipito-temporal junction (OTJ) were activated by both languages. The activity of these regions was not related to the presence of communicative intent because no such changes were observed when the non-signers watched a muted video of a spoken narrative. Recruitment was also not triggered by the linguistic structure of SL, because the areas, except pPT, were not activated when subjects listened to an unknown spoken language. The comparison of brain reactivity for spoken and sign languages shows that SL has a special status in the brain compared to speech; in contrast to unknown oral language, the neural correlates of SL overlap LH speech comprehension areas in non-signers. These results support the idea that strong relationships exist between areas involved in human action observation and language, suggesting that the observation of hand gestures has shaped the lexico-semantic language areas, as proposed by the motor theory of speech. As a whole, the present results support the theory of a gestural origin of language.

  7. Verbal short-term memory development and spoken language outcomes in deaf children with cochlear implants.

    Science.gov (United States)

    Harris, Michael S; Kronenberger, William G; Gao, Sujuan; Hoen, Helena M; Miyamoto, Richard T; Pisoni, David B

    2013-01-01

    Cochlear implants (CIs) help many deaf children achieve near-normal speech and language (S/L) milestones. Nevertheless, high levels of unexplained variability in S/L outcomes are limiting factors in improving the effectiveness of CIs in deaf children. The objective of this study was to longitudinally assess the role of verbal short-term memory (STM) and working memory (WM) capacity as a progress-limiting source of variability in S/L outcomes after CI in children. Longitudinal study of 66 children with CIs for prelingual severe-to-profound hearing loss. Outcome measures included performance on digit span forward (DSF), digit span backward (DSB), and four conventional S/L measures that examined spoken-word recognition (Phonetically Balanced Kindergarten word test), receptive vocabulary (Peabody Picture Vocabulary Test), sentence-recognition skills (Hearing in Noise Test), and receptive and expressive language functioning (Clinical Evaluation of Language Fundamentals Fourth Edition Core Language Score; CELF). Growth curves for DSF and DSB in the CI sample over time were comparable in slope, but consistently lagged in magnitude relative to norms for normal-hearing peers of the same age. For DSF and DSB, 50.5% and 44.0%, respectively, of the CI sample scored more than 1 SD below the normative mean for raw scores across all ages. The first (baseline) DSF score significantly predicted all endpoint scores for the four S/L measures, and DSF slope (growth) over time predicted CELF scores. DSF baseline and slope accounted for an additional 13 to 31% of variance in S/L scores after controlling for conventional predictor variables such as chronological age at time of testing, age at time of implantation, communication mode (auditory-oral communication versus total communication), and maternal education. Only DSB baseline scores predicted endpoint language scores on the Peabody Picture Vocabulary Test and CELF. DSB slopes were not significantly related to any endpoint S/L measures.

  8. Spoken language identification based on the enhanced self-adjusting extreme learning machine approach.

    Science.gov (United States)

    Albadr, Musatafa Abbas Abbood; Tiun, Sabrina; Al-Dhief, Fahad Taha; Sammour, Mahmoud A M

    2018-01-01

    Spoken Language Identification (LID) is the process of determining and classifying natural language from given content and a dataset. Typically, data must be processed to extract useful features to perform LID. Feature extraction for LID is, according to the literature, a mature process: standard features have been developed using Mel-Frequency Cepstral Coefficients (MFCC), Shifted Delta Cepstral (SDC) coefficients and the Gaussian Mixture Model (GMM), ending with the i-vector based framework. However, the learning process based on the extracted features remains to be improved (i.e. optimised) to capture all the knowledge embedded in them. The Extreme Learning Machine (ELM) is an effective learning model used to perform classification and regression analysis and is extremely useful for training a single-hidden-layer neural network. Nevertheless, the learning process of this model is not entirely effective (i.e. optimised), owing to the random selection of the input-to-hidden-layer weights. In this study, the ELM is selected as the learning model for LID based on standard feature extraction. One of the optimisation approaches to the ELM, the Self-Adjusting Extreme Learning Machine (SA-ELM), is selected as the benchmark and improved by altering the selection phase of the optimisation process. The selection process is performed by incorporating both the Split-Ratio and K-Tournament methods; the improved SA-ELM is named the Enhanced Self-Adjusting Extreme Learning Machine (ESA-ELM). The results are generated based on LID with datasets created from eight different languages. ESA-ELM LID clearly outperformed SA-ELM LID, achieving an accuracy of 96.25%, compared with 95.00% for SA-ELM LID.
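
    The ELM step this abstract builds on is compact enough to sketch: input weights and biases are drawn at random and never trained, and only the output weights are solved for in closed form via the pseudoinverse of the hidden-layer activations. Below is a minimal single-hidden-layer ELM classifier in Python/NumPy; the feature dimensions, class count and function names are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def train_elm(X, Y, n_hidden=100, rng=None):
    """Fit a basic ELM: random input weights, closed-form output weights.

    X: (n_samples, n_features) feature matrix (e.g. MFCC/SDC statistics).
    Y: (n_samples, n_classes) one-hot language labels.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    W = rng.uniform(-1.0, 1.0, size=(X.shape[1], n_hidden))  # random, never trained
    b = rng.uniform(-1.0, 1.0, size=n_hidden)                # random biases
    H = np.tanh(X @ W + b)                                   # hidden activations
    beta = np.linalg.pinv(H) @ Y                             # least-squares solve
    return W, b, beta

def predict_elm(X, W, b, beta):
    return np.argmax(np.tanh(X @ W + b) @ beta, axis=1)     # predicted language index

# Toy usage: 200 utterances, 39-dim features, 8 candidate languages.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 39))
labels = rng.integers(0, 8, size=200)
W, b, beta = train_elm(X, np.eye(8)[labels], n_hidden=150)
print((predict_elm(X, W, b, beta) == labels).mean())
```

    The SA-ELM and ESA-ELM variants described above wrap an optimisation loop around this random-weight step; per the abstract, the enhancement lies in how candidate weight sets are selected (Split-Ratio and K-Tournament), not in the closed-form solve.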

  9. Emergent Literacy Skills in Preschool Children with Hearing Loss Who Use Spoken Language: Initial Findings from the Early Language and Literacy Acquisition (ELLA) Study

    Science.gov (United States)

    Werfel, Krystal L.

    2017-01-01

    Purpose: The purpose of this study was to compare change in emergent literacy skills of preschool children with and without hearing loss over a 6-month period. Method: Participants included 19 children with hearing loss and 14 children with normal hearing. Children with hearing loss used amplification and spoken language. Participants completed…

  10. Satisfaction with telemedicine for teaching listening and spoken language to children with hearing loss.

    Science.gov (United States)

    Constantinescu, Gabriella

    2012-07-01

    Auditory-Verbal Therapy (AVT) is an effective early intervention for children with hearing loss. The Hear and Say Centre in Brisbane offers AVT sessions to families soon after diagnosis, and about 20% of the families in Queensland participate via PC-based videoconferencing (Skype). Parent and therapist satisfaction with the telemedicine sessions was examined by questionnaire. All families had been enrolled in the telemedicine AVT programme for at least six months. Their average distance from the Hear and Say Centre was 600 km. Questionnaires were completed by 13 of the 17 parents and all five therapists. Parents and therapists generally expressed high satisfaction in the majority of the sections of the questionnaire, e.g. most rated the audio and video quality as good or excellent. All parents felt comfortable or as comfortable as face-to-face when discussing matters with the therapist online, and were satisfied or as satisfied as face-to-face with their level and their child's level of interaction/rapport with the therapist. All therapists were satisfied or very satisfied with the telemedicine AVT programme. The results demonstrate the potential of telemedicine service delivery for teaching listening and spoken language to children with hearing loss in rural and remote areas of Australia.

  11. Identification of four class emotion from Indonesian spoken language using acoustic and lexical features

    Science.gov (United States)

    Kasyidi, Fatan; Puji Lestari, Dessi

    2018-03-01

    One of the important aspects of human-to-human communication is understanding the emotion of each party. Recently, interaction between humans and computers has continued to develop, especially affective interaction, in which emotion recognition is one of the important components. This paper presents our extended work on emotion recognition in spoken Indonesian, identifying four main classes of emotion: Happy, Sad, Angry, and Contentment, using a combination of acoustic/prosodic features and lexical features. We constructed an emotion speech corpus from Indonesian television talk shows, in which the situations are as close as possible to natural ones. After constructing the corpus, the acoustic/prosodic and lexical features were extracted to train the emotion model. We employed several machine learning algorithms, such as the Support Vector Machine (SVM), Naive Bayes, and Random Forest, to obtain the best model. On the test data, the best model achieved an F-measure of 0.447 using only the acoustic/prosodic features and an F-measure of 0.488 using both acoustic/prosodic and lexical features, recognizing the four emotion classes with the SVM RBF kernel.
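
    The pipeline described, concatenating acoustic/prosodic with lexical features and training an RBF-kernel SVM scored by F-measure, can be sketched with scikit-learn as follows. The feature arrays and corpus size are placeholders, not the paper's data; only the model family and scoring mirror the abstract.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 400                               # utterances (placeholder corpus size)
acoustic = rng.normal(size=(n, 24))   # e.g. pitch/energy/duration statistics
lexical = rng.normal(size=(n, 50))    # e.g. bag-of-words or lexicon scores
X = np.hstack([acoustic, lexical])    # combined feature vector per utterance
y = rng.integers(0, 4, size=n)        # 0=Happy, 1=Sad, 2=Angry, 3=Contentment

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
# The paper reports F-measure for the four classes; mirror that in the scoring.
scores = cross_val_score(clf, X, y, cv=5, scoring="f1_macro")
print(scores.mean())
```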

  12. Impact of cognitive function and dysarthria on spoken language and perceived speech severity in multiple sclerosis

    Science.gov (United States)

    Feenaughty, Lynda

    Purpose: The current study sought to investigate the separate effects of dysarthria and cognitive status on global speech timing, speech hesitation, and linguistic complexity characteristics, and how these speech behaviors bear on listener impressions, for three connected speech tasks presumed to differ in cognitive-linguistic demand, in four carefully defined speaker groups: 1) MS with cognitive deficits (MSCI), 2) MS with clinically diagnosed dysarthria and intact cognition (MSDYS), 3) MS without dysarthria or cognitive deficits (MS), and 4) healthy talkers (CON). The relationship between neuropsychological test scores and speech-language production and perceptual variables for speakers with cognitive deficits was also explored. Methods: 48 speakers, including 36 individuals reporting a neurological diagnosis of MS and 12 healthy talkers, participated. The three MS groups and the control group each contained 12 speakers (8 women and 4 men). Cognitive function was quantified using standard clinical tests of memory, information processing speed, and executive function. A standard z-score of ≤ -1.50 indicated deficits in a given cognitive domain. Three certified speech-language pathologists determined the clinical diagnosis of dysarthria for speakers with MS. Experimental speech tasks of interest included audio-recordings of an oral reading of the Grandfather passage and two spontaneous speech samples in the form of Familiar and Unfamiliar descriptive discourse. Various measures of spoken language were of interest. Suprasegmental acoustic measures included speech and articulatory rate. Linguistic speech hesitation measures included pause frequency (i.e., silent and filled pauses), mean silent pause duration, grammatical appropriateness of pauses, and interjection frequency. For the two discourse samples, three standard measures of language complexity were obtained: subordination index, inter-sentence cohesion adequacy, and lexical diversity. Ten listeners

  13. Birds, primates, and spoken language origins: behavioral phenotypes and neurobiological substrates.

    Science.gov (United States)

    Petkov, Christopher I; Jarvis, Erich D

    2012-01-01

    Vocal learners such as humans and songbirds can learn to produce elaborate patterns of structurally organized vocalizations, whereas many other vertebrates such as non-human primates and most other bird groups either cannot or do so to a very limited degree. To explain the similarities among humans and vocal-learning birds and the differences with other species, various theories have been proposed. One set of theories is motor theories, which underscore the role of the motor system as an evolutionary substrate for vocal production learning. For instance, the motor theory of speech and song perception proposes enhanced auditory perceptual learning of speech in humans and song in birds, which suggests a considerable level of neurobiological specialization. Another, a motor theory of vocal learning origin, proposes that the brain pathways that control the learning and production of song and speech were derived from adjacent motor brain pathways. Another set of theories is cognitive theories, which address the interface between cognition and the auditory-vocal domains to support language learning in humans. Here we critically review the behavioral and neurobiological evidence for parallels and differences between the so-called vocal learners and vocal non-learners in the context of motor and cognitive theories. In doing so, we note that, behaviorally, vocal-production learning abilities are more distributed than categorical, as are the auditory-learning abilities of animals. We propose testable hypotheses on the extent of the specializations and cross-species correspondences suggested by motor and cognitive theories. We believe that determining how spoken language evolved is likely to become clearer with concerted efforts in testing comparative data from many non-human animal species.

  15. General language performance measures in spoken and written narrative and expository discourse of school-age children with language learning disabilities.

    Science.gov (United States)

    Scott, C M; Windsor, J

    2000-04-01

    Language performance in naturalistic contexts can be characterized by general measures of productivity, fluency, lexical diversity, and grammatical complexity and accuracy. The use of such measures as indices of language impairment in older children is open to questions of method and interpretation. This study evaluated the extent to which 10 general language performance measures (GLPM) differentiated school-age children with language learning disabilities (LLD) from chronological-age (CA) and language-age (LA) peers. Children produced both spoken and written summaries of two educational videotapes that provided models of either narrative or expository (informational) discourse. Productivity measures, including total T-units, total words, and words per minute, were significantly lower for children with LLD than for CA children. Fluency (percent T-units with mazes) and lexical diversity (number of different words) measures were similar for all children. Grammatical complexity as measured by words per T-unit was significantly lower for LLD children. However, there was no difference among groups for clauses per T-unit. The only measure that distinguished children with LLD from both CA and LA peers was the extent of grammatical error. Effects of discourse genre and modality were consistent across groups. Compared to narratives, expository summaries were shorter, less fluent (spoken versions), more complex (words per T-unit), and more error prone. Written summaries were shorter and had more errors than spoken versions. For many LLD and LA children, expository writing was exceedingly difficult. Implications for accounts of language impairment in older children are discussed.
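
    Most of the GLPMs named here are simple ratios over T-units (a main clause plus whatever is subordinated to it). Given counts already extracted from a transcript, the computation reduces to the sketch below; the counts are hypothetical, chosen only to show the arithmetic.

```python
def glpm_ratios(total_words, total_t_units, total_clauses, minutes):
    """Compute three of the general language performance measures:
    productivity (words per minute), and the two grammatical-complexity
    ratios reported in the study (words and clauses per T-unit)."""
    return {
        "words_per_minute": total_words / minutes,
        "words_per_t_unit": total_words / total_t_units,
        "clauses_per_t_unit": total_clauses / total_t_units,
    }

# Hypothetical spoken narrative summary: 240 words across 30 T-units,
# 42 clauses, produced in 2.5 minutes.
print(glpm_ratios(240, 30, 42, 2.5))
# {'words_per_minute': 96.0, 'words_per_t_unit': 8.0, 'clauses_per_t_unit': 1.4}
```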

  16. THE INFLUENCE OF LANGUAGE USE AND LANGUAGE ATTITUDE ON THE MAINTENANCE OF COMMUNITY LANGUAGES SPOKEN BY MIGRANT STUDENTS

    Directory of Open Access Journals (Sweden)

    Leni Amalia Suek

    2014-05-01

    The maintenance of the community languages of migrant students is heavily determined by language use and language attitudes. The superiority of a dominant language over a community language contributes to the attitudes of migrant students toward their native languages. When they perceive their native language as unimportant, they reduce the frequency of its use, even in the home domain. Solutions to the problem of maintaining community languages should therefore address language use and attitudes in the two domains where they chiefly develop: school and family. Hence, the valorization of community languages should be promoted not only in the family but also in the school domain. Programs such as community language schools and community language classes can give migrant students opportunities to practice and use their native languages. Since educational resources such as class sessions, teachers and government support are limited, the family plays a significant role in stimulating positive attitudes toward the community language and in developing the use of native languages.

  17. How vocabulary size in two languages relates to efficiency in spoken word recognition by young Spanish-English bilinguals.

    Science.gov (United States)

    Marchman, Virginia A; Fernald, Anne; Hurtado, Nereyda

    2010-09-01

    Research using online comprehension measures with monolingual children shows that speed and accuracy of spoken word recognition are correlated with lexical development. Here we examined speech processing efficiency in relation to vocabulary development in bilingual children learning both Spanish and English (n = 26; age 2;6). Between-language associations were weak: vocabulary size in Spanish was uncorrelated with vocabulary in English, and children's facility in online comprehension in Spanish was unrelated to their facility in English. Instead, efficiency of online processing in one language was significantly related to vocabulary size in that language, after controlling for processing speed and vocabulary size in the other language. These links between efficiency of lexical access and vocabulary knowledge in bilinguals parallel those previously reported for Spanish and English monolinguals, suggesting that children's ability to abstract information from the input in building a working lexicon relates fundamentally to mechanisms underlying the construction of language.

  18. The role of planum temporale in processing accent variation in spoken language comprehension.

    NARCIS (Netherlands)

    Adank, P.M.; Noordzij, M.L.; Hagoort, P.

    2012-01-01

    A repetition-suppression functional magnetic resonance imaging paradigm was used to explore the neuroanatomical substrates of processing two types of acoustic variation, speaker and accent, during spoken sentence comprehension. Recordings were made for two speakers and two accents: Standard Dutch and a

  19. EEG decoding of spoken words in bilingual listeners: from words to language invariant semantic-conceptual representations

    Directory of Open Access Journals (Sweden)

    João Mendonça Correia

    2015-02-01

    Spoken word recognition and production require fast transformations between acoustic, phonological and conceptual neural representations. Bilinguals perform these transformations in native and non-native languages, deriving unified semantic concepts from equivalent, but acoustically different, words. Here we exploit this capacity of bilinguals to investigate input-invariant semantic representations in the brain. We acquired EEG data while Dutch subjects, highly proficient in English, listened to four monosyllabic and acoustically distinct animal words in both languages (e.g. 'paard'-'horse'). Multivariate pattern analysis (MVPA) was applied to identify EEG response patterns that discriminate between individual words within one language (within-language discrimination) and generalize meaning across two languages (across-language generalization). Furthermore, employing two EEG feature selection approaches, we assessed the contribution of temporal and oscillatory EEG features to our classification results. MVPA revealed that within-language discrimination was possible in a broad time-window (~50-620 ms after word onset), probably reflecting acoustic-phonetic and semantic-conceptual differences between the words. Most interestingly, significant across-language generalization was possible around 550-600 ms, suggesting the activation of common semantic-conceptual representations from the Dutch and English nouns. Both types of classification showed a strong contribution of oscillations below 12 Hz, indicating the importance of low-frequency oscillations in the neural representation of individual words and concepts. This study demonstrates the feasibility of MVPA to decode individual spoken words from EEG responses and to assess the spectro-temporal dynamics of their language-invariant semantic-conceptual representations. We discuss how this method and results could be relevant to track the neural mechanisms underlying conceptual encoding in
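
    The across-language generalization test is the key move here: a classifier is trained to discriminate word identity in one language and tested on the acoustically different equivalents in the other, so only shared meaning can carry the transfer. A schematic version in Python/scikit-learn, assuming epoched EEG arrays of shape (trials, channels, times); the classifier choice, array shapes and sampling are assumptions, not the paper's exact analysis.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

def across_language_score(X_dutch, y_dutch, X_english, y_english, t_slice):
    """Train on Dutch epochs, test on English epochs, and vice versa.

    X_*: (n_trials, n_channels, n_times) epoched EEG; y_*: animal-concept
    labels shared across languages (e.g. 0 = 'paard'/'horse'). t_slice
    picks the analysis window, e.g. samples covering ~550-600 ms post onset.
    """
    def flatten(X):
        return X[:, :, t_slice].reshape(len(X), -1)

    clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    clf.fit(flatten(X_dutch), y_dutch)
    d2e = clf.score(flatten(X_english), y_english)
    clf.fit(flatten(X_english), y_english)
    e2d = clf.score(flatten(X_dutch), y_dutch)
    return (d2e + e2d) / 2  # above chance (0.25 for 4 words) => shared semantics

# Toy usage with simulated epochs: 80 trials, 32 channels, 256 samples.
rng = np.random.default_rng(2)
Xd, Xe = rng.normal(size=(80, 32, 256)), rng.normal(size=(80, 32, 256))
yd, ye = rng.integers(0, 4, 80), rng.integers(0, 4, 80)
print(across_language_score(Xd, yd, Xe, ye, slice(140, 155)))
```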

  20. Reliability and validity of the C-BiLLT: a new instrument to assess comprehension of spoken language in young children with cerebral palsy and complex communication needs.

    Science.gov (United States)

    Geytenbeek, Joke J; Mokkink, Lidwine B; Knol, Dirk L; Vermeulen, R Jeroen; Oostrom, Kim J

    2014-09-01

    In clinical practice, a variety of diagnostic tests are available to assess a child's comprehension of spoken language. However, none of these tests have been designed specifically for use with children who have severe motor impairments and who experience severe difficulty when using speech to communicate. This article describes the process of investigating the reliability and validity of the Computer-Based Instrument for Low Motor Language Testing (C-BiLLT), which was specifically developed to assess spoken Dutch language comprehension in children with cerebral palsy and complex communication needs. The study included 806 children with typical development, and 87 nonspeaking children with cerebral palsy and complex communication needs, and was designed to provide information on the psychometric qualities of the C-BiLLT. The potential utility of the C-BiLLT as a measure of spoken Dutch language comprehension abilities for children with cerebral palsy and complex communication needs is discussed.

  1. Distance delivery of a spoken language intervention for school-aged and adolescent boys with fragile X syndrome.

    Science.gov (United States)

    McDuffie, Andrea; Banasik, Amy; Bullard, Lauren; Nelson, Sarah; Feigles, Robyn Tempero; Hagerman, Randi; Abbeduto, Leonard

    2018-01-01

    A small randomized group design (N = 20) was used to examine a parent-implemented intervention designed to improve the spoken language skills of school-aged and adolescent boys with FXS, the leading cause of inherited intellectual disability. The intervention was implemented by speech-language pathologists who used distance video-teleconferencing to deliver the intervention. The intervention taught mothers to use a set of language facilitation strategies while interacting with their children in the context of shared story-telling. Treatment group mothers significantly improved their use of the targeted intervention strategies. Children in the treatment group increased the duration of engagement in the shared story-telling activity as well as use of utterances that maintained the topic of the story. Children also showed increases in lexical diversity, but not in grammatical complexity.

  2. How Does the Linguistic Distance between Spoken and Standard Language in Arabic Affect Recall and Recognition Performances during Verbal Memory Examination

    Science.gov (United States)

    Taha, Haitham

    2017-01-01

    The current research examined how Arabic diglossia affects verbal learning memory. Thirty native Arab college students were tested using an auditory verbal memory test, adapted from the Rey Auditory Verbal Learning Test and developed in three versions: a pure spoken language version (SL), a pure standard language version (SA), and…

  3. Kannada: A Cultural Introduction to the Spoken Styles of the Language.

    Science.gov (United States)

    Krishnamurthi, M. G.; McCormack, William

    The twenty graded units in this text constitute an introduction to both informal and formal spoken Kannada. The first two units present the Kannada material in phonetic transcription only, with Kannada script gradually introduced from Unit III on. A typical lesson-unit includes (1) a dialog in phonetic transcription and English translation, (2)…

  4. The role of planum temporale in processing accent variation in spoken language comprehension

    NARCIS (Netherlands)

    Adank, P.M.; Noordzij, M.L.; Hagoort, P.

    2012-01-01

    A repetition–suppression functional magnetic resonance imaging paradigm was used to explore the neuroanatomical substrates of processing two types of acoustic variation—speaker and accent—during spoken sentence comprehension. Recordings were made for two speakers and two accents: Standard Dutch and

  5. Chunk Learning and the Development of Spoken Discourse in a Japanese as a Foreign Language Classroom

    Science.gov (United States)

    Taguchi, Naoko

    2007-01-01

    This study examined the development of spoken discourse among L2 learners of Japanese who received extensive practice on grammatical chunks. Participants in this study were 22 college students enrolled in an elementary Japanese course. They received instruction on a set of grammatical chunks in class through communicative drills and the…

  6. Social inclusion for children with hearing loss in listening and spoken Language early intervention: an exploratory study.

    Science.gov (United States)

    Constantinescu-Sharpe, Gabriella; Phillips, Rebecca L; Davis, Aleisha; Dornan, Dimity; Hogan, Anthony

    2017-03-14

    Social inclusion is a common focus of listening and spoken language (LSL) early intervention for children with hearing loss. This exploratory study compared the social inclusion of young children with hearing loss educated using a listening and spoken language approach with population data. A framework for understanding the scope of social inclusion is presented in the Background. This framework guided the use of a shortened, modified version of the Longitudinal Study of Australian Children (LSAC) to measure two of the five facets of social inclusion ('education' and 'interacting with society and fulfilling social roles'). The survey was completed by parents of children with hearing loss aged 4-5 years who were educated using a LSL approach (n = 78; a 37% response rate). These responses were compared to those obtained for children with typical hearing in the LSAC dataset (n = 3265). Analyses revealed that most children with hearing loss had outcomes comparable to those of children with typical hearing on the 'education' and 'interacting with society and fulfilling social roles' facets of social inclusion. These exploratory findings are positive and warrant further investigation across all five facets of the framework to identify which factors influence social inclusion.

  7. Brain-based translation: fMRI decoding of spoken words in bilinguals reveals language-independent semantic representations in anterior temporal lobe.

    Science.gov (United States)

    Correia, João; Formisano, Elia; Valente, Giancarlo; Hausfeld, Lars; Jansma, Bernadette; Bonte, Milene

    2014-01-01

    Bilinguals derive the same semantic concepts from equivalent, but acoustically different, words in their first and second languages. The neural mechanisms underlying the representation of language-independent concepts in the brain remain unclear. Here, we measured fMRI in human bilingual listeners and reveal that response patterns to individual spoken nouns in one language (e.g., "horse" in English) accurately predict the response patterns to equivalent nouns in the other language (e.g., "paard" in Dutch). Stimuli were four monosyllabic words in both languages, all from the category of "animal" nouns. For each word, pronunciations from three different speakers were included, allowing the investigation of speaker-independent representations of individual words. We used multivariate classifiers and a searchlight method to map the informative fMRI response patterns that enable decoding spoken words within languages (within-language discrimination) and across languages (across-language generalization). Response patterns discriminative of spoken words within language were distributed in multiple cortical regions, reflecting the complexity of the neural networks recruited during speech and language processing. Response patterns discriminative of spoken words across language were limited to localized clusters in the left anterior temporal lobe, the left angular gyrus and the posterior bank of the left postcentral gyrus, the right posterior superior temporal sulcus/superior temporal gyrus, the right medial anterior temporal lobe, the right anterior insula, and bilateral occipital cortex. These results corroborate the existence of "hub" regions organizing semantic-conceptual knowledge in abstract form at the fine-grained level of within semantic category discriminations.
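
    The searchlight mapping used here can be sketched in a few lines: a small sphere is centred on each voxel in turn, and a classifier is cross-validated on just the voxels inside it, yielding a whole-brain accuracy map. The sketch below is a schematic stand-in (a real analysis would use a neuroimaging toolbox, anatomical masking and permutation statistics); the array shapes, radius and data are assumptions.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

def searchlight(data, labels, mask, radius=2):
    """data: (x, y, z, n_trials) response patterns; labels: word per trial.
    Returns an (x, y, z) map of cross-validated decoding accuracy."""
    coords = np.argwhere(mask)
    acc_map = np.zeros(mask.shape)
    for cx, cy, cz in coords:
        # gather the voxels falling within the searchlight sphere
        dist = np.linalg.norm(coords - (cx, cy, cz), axis=1)
        sphere = coords[dist <= radius]
        X = data[sphere[:, 0], sphere[:, 1], sphere[:, 2], :].T  # trials x voxels
        acc_map[cx, cy, cz] = cross_val_score(
            LinearSVC(dual=False), X, labels, cv=3).mean()
    return acc_map

# Toy volume: 10x10x10 voxels, 24 trials (4 words x 6 presentations).
rng = np.random.default_rng(3)
data = rng.normal(size=(10, 10, 10, 24))
labels = np.repeat(np.arange(4), 6)
mask = np.zeros((10, 10, 10), dtype=bool)
mask[3:7, 3:7, 3:7] = True
acc = searchlight(data, labels, mask)
```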

  8. EVALUATIVE LANGUAGE IN SPOKEN AND SIGNED STORIES TOLD BY A DEAF CHILD WITH A COCHLEAR IMPLANT: WORDS, SIGNS OR PARALINGUISTIC EXPRESSIONS?

    Directory of Open Access Journals (Sweden)

    Ritva Takkinen

    2011-01-01

    In this paper the use and quality of the evaluative language produced by a bilingual child in a story-telling situation is analysed. The subject, an 11-year-old Finnish boy, Jimmy, is bilingual in Finnish Sign Language (FinSL) and spoken Finnish. He was born deaf but received a cochlear implant at the age of five. The data consist of a spoken and a signed version of "The Frog Story". The analysis shows that evaluative devices and expressions differ in the spoken and signed stories told by the child. In his Finnish story he mostly uses lexical devices: comments on a character and the character's actions, as well as quoted speech occasionally combined with prosodic features. In his FinSL story he uses both lexical and paralinguistic devices in a balanced way.

  9. Spoken Dialogue Systems

    CERN Document Server

    Jokinen, Kristiina

    2009-01-01

    Considerable progress has been made in recent years in the development of dialogue systems that support robust and efficient human-machine interaction using spoken language. Spoken dialogue technology allows various interactive applications to be built and used for practical purposes, and research focuses on issues that aim to increase the system's communicative competence by including aspects of error correction, cooperation, multimodality, and adaptation in context. This book gives a comprehensive view of state-of-the-art techniques that are used to build spoken dialogue systems. It provides

  10. Comprehension of spoken language in non-speaking children with severe cerebral palsy: an explorative study on associations with motor type and disabilities

    NARCIS (Netherlands)

    Geytenbeek, J.J.M.; Vermeulen, R.J.; Becher, J.G.; Oostrom, K.J.

    2015-01-01

    Aim: To assess spoken language comprehension in non-speaking children with severe cerebral palsy (CP) and to explore possible associations with motor type and disability. Method: Eighty-seven non-speaking children (44 males, 43 females, mean age 6y 8mo, SD 2y 1mo) with spastic (54%) or dyskinetic

  11. A Multilingual Approach to Analysing Standardized Test Results: Immigrant Primary School Children and the Role of Languages Spoken in a Bi-/Multilingual Community

    Science.gov (United States)

    De Angelis, Gessica

    2014-01-01

    The present study adopts a multilingual approach to analysing the standardized test results of primary school immigrant children living in the bi-/multilingual context of South Tyrol, Italy. The standardized test results are from the Invalsi test administered across Italy in 2009/2010. In South Tyrol, several languages are spoken on a daily basis…

  12. How and When Accentuation Influences Temporally Selective Attention and Subsequent Semantic Processing during On-Line Spoken Language Comprehension: An ERP Study

    Science.gov (United States)

    Li, Xiao-qing; Ren, Gui-qin

    2012-01-01

    An event-related brain potentials (ERP) experiment was carried out to investigate how and when accentuation influences temporally selective attention and subsequent semantic processing during on-line spoken language comprehension, and how the effect of accentuation on attention allocation and semantic processing changed with the degree of…

  13. Long-term memory traces for familiar spoken words in tonal languages as revealed by the Mismatch Negativity

    Directory of Open Access Journals (Sweden)

    Naiphinich Kotchabhakdi

    2004-11-01

    Mismatch negativity (MMN), a primary response to an acoustic change and an index of sensory memory, was used to investigate the processing of the discrimination between familiar and unfamiliar consonant-vowel (CV) speech contrasts. The MMN was elicited by rare familiar words presented among repetitive unfamiliar words. Phonetic and phonological contrasts were identical in all conditions. The MMN elicited by the familiar word deviant was larger than that elicited by the unfamiliar word deviant. The presence of syllable contrast did significantly alter the word-elicited MMN in amplitude and scalp voltage field distribution. Thus, our results indicate the existence of word-related MMN enhancement largely independent of the word status of the standard stimulus. This enhancement may reflect the presence of a long-term memory trace for familiar spoken words in tonal languages.
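
    MMN is conventionally quantified as the deviant-minus-standard difference wave, with amplitude taken in a window around its peak. A minimal sketch of that computation, assuming already-epoched single-channel EEG arrays; the sampling rate, window and data are assumptions, not the study's parameters.

```python
import numpy as np

def mmn_amplitude(deviant, standard, sfreq=500, tmin=-0.1, window=(0.1, 0.25)):
    """deviant, standard: (n_trials, n_times) single-channel epochs (e.g. Fz).
    Returns the mean amplitude of the deviant-minus-standard difference wave
    within `window` (seconds after stimulus onset)."""
    diff = deviant.mean(axis=0) - standard.mean(axis=0)  # difference wave
    start = int((window[0] - tmin) * sfreq)
    stop = int((window[1] - tmin) * sfreq)
    return diff[start:stop].mean()  # MMN is negative-going, so expect < 0

# e.g. rare familiar-word deviants among repetitive unfamiliar-word standards:
rng = np.random.default_rng(5)
dev = rng.normal(size=(100, 300))   # 100 deviant epochs, 0.6 s at 500 Hz
std = rng.normal(size=(600, 300))   # 600 standard epochs
print(mmn_amplitude(dev, std))
```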

  14. Let's all speak together! Exploring the masking effects of various languages on spoken word identification in multi-linguistic babble.

    Science.gov (United States)

    Gautreau, Aurore; Hoen, Michel; Meunier, Fanny

    2013-01-01

    This study aimed to characterize the linguistic interference that occurs during speech-in-speech comprehension by combining offline and online measures, which included an intelligibility task (at a -5 dB Signal-to-Noise Ratio) and 2 lexical decision tasks (at a -5 dB and 0 dB SNR) that were performed with French spoken target words. In these 3 experiments we always compared the masking effects of speech backgrounds (i.e., 4-talker babble) that were produced in the same language as the target language (i.e., French) or in unknown foreign languages (i.e., Irish and Italian) to the masking effects of corresponding non-speech backgrounds (i.e., speech-derived fluctuating noise). The fluctuating noise contained similar spectro-temporal information as babble but lacked linguistic information. At -5 dB SNR, both tasks revealed significantly divergent results between the unknown languages (i.e., Irish and Italian) with Italian and French hindering French target word identification to a similar extent, whereas Irish led to significantly better performances on these tasks. By comparing the performances obtained with speech and fluctuating noise backgrounds, we were able to evaluate the effect of each language. The intelligibility task showed a significant difference between babble and fluctuating noise for French, Irish and Italian, suggesting acoustic and linguistic effects for each language. However, the lexical decision task, which reduces the effect of post-lexical interference, appeared to be more accurate, as it only revealed a linguistic effect for French. Thus, although French and Italian had equivalent masking effects on French word identification, the nature of their interference was different. This finding suggests that the differences observed between the masking effects of Italian and Irish can be explained at an acoustic level but not at a linguistic level.
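
    The -5 dB and 0 dB signal-to-noise ratios used in these tasks are set by rescaling the masker relative to the target's RMS energy before mixing. A small sketch of that step, assuming two equal-length mono signals; the sampling rate and signals are placeholders.

```python
import numpy as np

def mix_at_snr(target, masker, snr_db):
    """Scale `masker` so the target-to-masker ratio equals `snr_db`, then mix.
    SNR(dB) = 20 * log10(rms_target / rms_masker)."""
    def rms(x):
        return np.sqrt(np.mean(x ** 2))
    gain = rms(target) / (rms(masker) * 10 ** (snr_db / 20.0))
    return target + gain * masker

# e.g. a French target word in 4-talker babble at -5 dB SNR:
rng = np.random.default_rng(4)
word = rng.normal(size=16000)     # 1 s at 16 kHz (placeholder signal)
babble = rng.normal(size=16000)   # placeholder babble
mix = mix_at_snr(word, babble, snr_db=-5.0)
```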

  15. Using the readiness potential of button-press and verbal response within spoken language processing.

    Science.gov (United States)

    Jansen, Stefanie; Wesselmeier, Hendrik; de Ruiter, Jan P; Mueller, Horst M

    2014-07-30

    Even though research on turn-taking in spoken dialogue is now abundant, a typical EEG signature associated with the anticipation of turn-ends has not yet been identified. The purpose of this study was to examine whether readiness potentials (RP) can be used to study the anticipation of turn-ends, by using them in a motoric finger-movement task and an articulatory-movement task. The goal was to determine the onset of early, preconscious turn-end anticipation processes by the simultaneous registration of EEG measures (RP) and behavioural measures (anticipation timing accuracy, ATA). For the behavioural measures, we used both a button-press and a verbal response ("yes"). In the experiment, 30 subjects were asked to listen to auditorily presented utterances and press a button or utter a brief verbal response when they expected the end of the turn. During the task, a 32-channel EEG signal was recorded. The results showed that the RPs during verbal and button-press responses developed similarly and had an almost identical time course: the RP signals started to develop 1170 vs. 1190 ms before the behavioural responses. Until now, turn-end anticipation has usually been studied using behavioural methods, for instance by measuring anticipation timing accuracy, a measurement that reflects conscious behavioural processes and is insensitive to preconscious anticipation processes. The similar time course of the recorded RP signals for both verbal and button-press responses provides evidence for the validity of using RPs as an online marker for response preparation in turn-taking and spoken dialogue research.
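
    The RP onset figures (about 1.2 s before the response) come from response-locked averaging: epochs are aligned to the button press or verbal onset rather than to the stimulus, so the slow build-up preceding the act becomes visible in the average. A schematic of that step, with hypothetical array shapes and parameters.

```python
import numpy as np

def response_locked_average(eeg, response_samples, sfreq=500, pre=2.0, post=0.5):
    """eeg: (n_channels, n_samples) continuous recording.
    response_samples: sample index of each button press / verbal onset.
    Returns the average epoch aligned to the responses; the RP builds up
    in the pre-response window (here 2.0 s before to 0.5 s after)."""
    pre_s, post_s = int(pre * sfreq), int(post * sfreq)
    epochs = [eeg[:, r - pre_s:r + post_s] for r in response_samples
              if r - pre_s >= 0 and r + post_s <= eeg.shape[1]]
    return np.mean(epochs, axis=0)  # (n_channels, pre_s + post_s)

rng = np.random.default_rng(6)
eeg = rng.normal(size=(32, 60 * 500))          # 32 channels, 60 s at 500 Hz
presses = rng.integers(2000, 28000, size=20)   # hypothetical response samples
rp = response_locked_average(eeg, presses)
```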

  16. Quarterly Data for Spoken Language Preferences of Social Security Retirement and Survivor Claimants (2016-onwards)

    Data.gov (United States)

    Social Security Administration — This data set provides quarterly volumes for language preferences at the national level of individuals filing claims for Retirement and Survivor benefits from fiscal...

  17. Yearly Data for Spoken Language Preferences of Social Security Retirement and Survivor Claimants (2016 Onwards)

    Data.gov (United States)

    Social Security Administration — This data set provides annual volumes for language preferences at the national level of individuals filing claims for Retirement and Survivor benefits from federal...

  18. Shy and Soft-Spoken: Shyness, Pragmatic Language, and Socioemotional Adjustment in Early Childhood

    Science.gov (United States)

    Coplan, Robert J.; Weeks, Murray

    2009-01-01

    The goal of this study was to examine the moderating role of pragmatic language in the relations between shyness and indices of socio-emotional adjustment in an unselected sample of early elementary school children. In particular, we sought to explore whether pragmatic language played a protective role for shy children. Participants were n = 167…

  19. Propositional Density in Spoken and Written Language of Czech-Speaking Patients with Mild Cognitive Impairment

    Science.gov (United States)

    Smolík, Filip; Stepankova, Hana; Vyhnálek, Martin; Nikolai, Tomáš; Horáková, Karolína; Matejka, Štepán

    2016-01-01

    Purpose Propositional density (PD) is a measure of content richness in language production that declines in normal aging and more profoundly in dementia. The present study aimed to develop a PD scoring system for Czech and use it to compare PD in language productions of older people with amnestic mild cognitive impairment (aMCI) and control…

  20. How do doctors learn the spoken language of their patients? | Pfaff ...

    African Journals Online (AJOL)

    Methods. Qualitative individual interviews were conducted with seven doctors who had successfully learned the language of their patients, to determine their experiences and how they had succeeded. Results. All seven doctors used a combination of methods to learn the language. Listening was found to be very important, ...

  1. Talk or Chat? Chatroom and Spoken Interaction in a Language Classroom

    Science.gov (United States)

    Hamano-Bunce, Douglas

    2011-01-01

    This paper describes a study comparing chatroom and face-to-face oral interaction for the purposes of language learning in a tertiary classroom in the United Arab Emirates. It uses transcripts analysed for Language Related Episodes (collaborative dialogues), which are thought to be externally observable examples of noticing in action. The analysis is…

  2. Task-Oriented Spoken Dialog System for Second-Language Learning

    Science.gov (United States)

    Kwon, Oh-Woog; Kim, Young-Kil; Lee, Yunkeun

    2016-01-01

    This paper introduces a Dialog-Based Computer Assisted second-Language Learning (DB-CALL) system using task-oriented dialogue processing technology. The system promotes dialogue with a second-language learner for a specific task, such as purchasing tour tickets, ordering food, passing through immigration, etc. The dialog system plays the role of a…

  3. The Language, Tone and Prosody of Emotions: Neural Substrates and Dynamics of Spoken-Word Emotion Perception.

    Science.gov (United States)

    Liebenthal, Einat; Silbersweig, David A; Stern, Emily

    2016-01-01

    Rapid assessment of emotions is important for detecting and prioritizing salient input. Emotions are conveyed in spoken words via verbal and non-verbal channels that are mutually informative and unfold in parallel over time, but the neural dynamics and interactions of these processes are not well understood. In this paper, we review the literature on emotion perception in faces, written words, and voices, as a basis for understanding the functional organization of emotion perception in spoken words. The characteristics of visual and auditory routes to the amygdala, a subcortical center for emotion perception, are compared across these stimulus classes in terms of neural dynamics, hemispheric lateralization, and functionality. Converging results from neuroimaging, electrophysiological, and lesion studies suggest the existence of an afferent route to the amygdala and primary visual cortex for fast and subliminal processing of coarse emotional face cues. We suggest that a fast route to the amygdala may also function for brief non-verbal vocalizations (e.g., laugh, cry), in which emotional category is conveyed effectively by voice tone and intensity. However, emotional prosody, which evolves on longer time scales and is conveyed by fine-grained spectral cues, appears to be processed via a slower, indirect cortical route. For verbal emotional content, the bulk of current evidence, indicating predominant left lateralization of the amygdala response and timing of emotional effects attributable to speeded lexical access, is more consistent with an indirect cortical route to the amygdala. Top-down linguistic modulation may play an important role for prioritized perception of emotions in words. Understanding the neural dynamics and interactions of emotion and language perception is important for selecting potent stimuli and devising effective training and/or treatment approaches for the alleviation of emotional dysfunction across a range of neuropsychiatric states.

  4. Bilateral Versus Unilateral Cochlear Implants in Children: A Study of Spoken Language Outcomes

    Science.gov (United States)

    Harris, David; Bennet, Lisa; Bant, Sharyn

    2014-01-01

    Objectives: Although it has been established that bilateral cochlear implants (CIs) offer additional speech perception and localization benefits to many children with severe to profound hearing loss, whether these improved perceptual abilities facilitate significantly better language development has not yet been clearly established. The aims of this study were to compare the language abilities of children having unilateral and bilateral CIs, to quantify the rate of any improvement in language attributable to bilateral CIs, and to document other predictors of language development in children with CIs. Design: The receptive vocabulary and language development of 91 children were assessed when they were aged either 5 or 8 years by using the Peabody Picture Vocabulary Test (fourth edition), and either the Preschool Language Scales (fourth edition) or the Clinical Evaluation of Language Fundamentals (fourth edition), respectively. Cognitive ability, parent involvement in children's intervention or education programs, and family reading habits were also evaluated. Language outcomes were examined by using linear regression analyses. The influence of elements of parenting style, child characteristics, and family background as predictors of outcomes were examined. Results: Children using bilateral CIs achieved significantly better vocabulary outcomes and significantly higher scores on the Core and Expressive Language subscales of the Clinical Evaluation of Language Fundamentals (fourth edition) than did comparable children with unilateral CIs. Scores on the Preschool Language Scales (fourth edition) did not differ significantly between children with unilateral and bilateral CIs. Bilateral CI use was found to predict significantly faster rates of vocabulary and language development than unilateral CI use; the magnitude of this effect was moderated by child age at activation of the bilateral CI. In terms of parenting style, high levels of parental involvement, low amounts of

  5. Language Outcomes in Deaf or Hard of Hearing Teenagers Who Are Spoken Language Users: Effects of Universal Newborn Hearing Screening and Early Confirmation.

    Science.gov (United States)

    Pimperton, Hannah; Kreppner, Jana; Mahon, Merle; Stevenson, Jim; Terlektsi, Emmanouela; Worsfold, Sarah; Yuen, Ho Ming; Kennedy, Colin R

    This study aimed to examine whether (a) exposure to universal newborn hearing screening (UNHS) and (b) early confirmation of hearing loss were associated with benefits to expressive and receptive language outcomes in the teenage years for a cohort of spoken language users. It also aimed to determine whether either of these two variables was associated with benefits to relative language gain from middle childhood to adolescence within this cohort. The participants were drawn from a prospective cohort study of a population sample of children with bilateral permanent childhood hearing loss, who varied in their exposure to UNHS and who had previously had their language skills assessed at 6-10 years. Sixty deaf or hard of hearing teenagers who were spoken language users and a comparison group of 38 teenagers with normal hearing completed standardized measures of their receptive and expressive language ability at 13-19 years. Teenagers exposed to UNHS did not show significantly better expressive (adjusted mean difference, 0.40; 95% confidence interval [CI], -0.26 to 1.05; d = 0.32) or receptive (adjusted mean difference, 0.68; 95% CI, -0.56 to 1.93; d = 0.28) language skills than those who were not. Those who had their hearing loss confirmed by 9 months of age did not show significantly better expressive (adjusted mean difference, 0.43; 95% CI, -0.20 to 1.05; d = 0.35) or receptive (adjusted mean difference, 0.95; 95% CI, -0.22 to 2.11; d = 0.42) language skills than those who had it confirmed later. In all cases, effect sizes were of small size and in favor of those exposed to UNHS or confirmed by 9 months. Subgroup analysis indicated larger beneficial effects of early confirmation for those deaf or hard of hearing teenagers without cochlear implants (N = 48; 80% of the sample), and these benefits were significant in the case of receptive language outcomes (adjusted mean difference, 1.55; 95% CI, 0.38 to 2.71; d = 0.78). Exposure to UNHS did not account for significant

  6. Yearly Data for Spoken Language Preferences of Supplemental Security Income (Blind & Disabled) (2011-2015)

    Data.gov (United States)

    Social Security Administration — This data set provides annual volumes for language preferences at the national level of individuals filing claims for SSI (Blind and Disabled) benefits for federal fiscal years...

  7. Quarterly Data for Spoken Language Preferences of Supplemental Security Income Aged Applicants (2014-2015)

    Data.gov (United States)

    Social Security Administration — This data set provides quarterly volumes for language preferences at the national level of individuals filing claims for SSI Aged benefits for fiscal years 2014 -...

  8. Quarterly Data for Spoken Language Preferences of End Stage Renal Disease Medicare Claimants (2014-2015)

    Data.gov (United States)

    Social Security Administration — This data set provides quarterly volumes for language preferences at the national level of individuals filing claims for ESRD Medicare benefits for fiscal years 2014...

  9. Yearly Data for Spoken Language Preferences of End Stage Renal Disease Medicare Claimants (2011-2015)

    Data.gov (United States)

    Social Security Administration — This data set provides annual volumes for language preferences at the national level of individuals filing claims for ESRD Medicare benefits for federal fiscal year...

  10. Yearly Data for Spoken Language Preferences of Social Security Retirement and Survivor Claimants (2011-2015)

    Data.gov (United States)

    Social Security Administration — This data set provides annual volumes for language preferences at the national level of individuals filing claims for Retirement and Survivor benefits from federal fiscal year...

  11. Inter Lingual Influences of Turkish, Serbian and English Dialect in Spoken Gjakovar's Language

    OpenAIRE

    Sindorela Doli Kryeziu; Gentiana Muhaxhiri

    2014-01-01

    In this paper we have tried to clarify the problems faced by speakers of the Gheg ("gege") dialect in Gjakova, who show greater or lesser difficulty in acquiring the standard. The standard language is part of the people's language, but raised to a norm according to scientific criteria. From this observation it becomes clear that the standard variety and the dialectal variant are inseparable and, as such, represent a macro-linguistic unity. As part of this macro-linguistic u...

  12. Project ASPIRE: Spoken Language Intervention Curriculum for Parents of Low-socioeconomic Status and Their Deaf and Hard-of-Hearing Children.

    Science.gov (United States)

    Suskind, Dana L; Graf, Eileen; Leffel, Kristin R; Hernandez, Marc W; Suskind, Elizabeth; Webber, Robert; Tannenbaum, Sally; Nevins, Mary Ellen

    2016-02-01

    To investigate the impact of a spoken language intervention curriculum aiming to improve the language environments that caregivers of low socioeconomic status (SES) provide for their D/HH children with CIs and HAs, in order to support the children's spoken language development. Design: quasi-experimental. Setting: tertiary. Participants: thirty-two caregiver-child dyads of low SES (as defined by caregiver education ≤ MA/MS and the income proxies Medicaid or WIC/LINK) and children aged … . Intervention: a curriculum designed to improve D/HH children's early language environments. Main outcome measures: changes in caregiver knowledge of child language development (questionnaire scores) and language behavior (word types, word tokens, utterances, mean length of utterance [MLU], LENA Adult Word Count [AWC], and Conversational Turn Count [CTC]). Results: significant increases in caregiver questionnaire scores as well as utterances, word types, word tokens, and MLU in the treatment but not the control group; no significant changes in LENA outcomes. The results partially support the notion that caregiver-directed language enrichment interventions can change the home language environments of D/HH children from low-SES backgrounds. Further longitudinal studies are necessary.
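
    Three of the outcome measures here (word types, word tokens, MLU) are plain transcript counts. A minimal sketch of how they can be computed from morpheme-segmented caregiver utterances; the segmentation convention and sample data are assumptions (clinical MLU is conventionally counted in morphemes per utterance).

```python
def transcript_measures(utterances):
    """utterances: list of caregiver utterances, each a list of morphemes,
    e.g. [["the", "dog", "run", "-s"]]. Returns word tokens, word types,
    and mean length of utterance (MLU) in morphemes."""
    tokens = [m for utt in utterances for m in utt]
    word_tokens = len(tokens)
    word_types = len(set(tokens))         # distinct forms
    mlu = word_tokens / len(utterances)   # morphemes per utterance
    return word_tokens, word_types, mlu

sample = [["look", "at", "the", "dog"], ["dog", "run", "-s"], ["big", "dog"]]
print(transcript_measures(sample))  # (9, 7, 3.0)
```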

  13. How Spoken Language Comprehension is Achieved by Older Listeners in Difficult Listening Situations.

    Science.gov (United States)

    Schneider, Bruce A; Avivi-Reich, Meital; Daneman, Meredyth

    2016-01-01

    Comprehending spoken discourse in noisy situations is likely to be more challenging to older adults than to younger adults due to potential declines in the auditory, cognitive, or linguistic processes supporting speech comprehension. These challenges might force older listeners to reorganize the ways in which they perceive and process speech, thereby altering the balance between the contributions of bottom-up versus top-down processes to speech comprehension. The authors review studies that investigated the effect of age on listeners' ability to follow and comprehend lectures (monologues), and two-talker conversations (dialogues), and the extent to which individual differences in lexical knowledge and reading comprehension skill relate to individual differences in speech comprehension. Comprehension was evaluated after each lecture or conversation by asking listeners to answer multiple-choice questions regarding its content. Once individual differences in speech recognition for words presented in babble were compensated for, age differences in speech comprehension were minimized if not eliminated. However, younger listeners benefited more from spatial separation than did older listeners. Vocabulary knowledge predicted the comprehension scores of both younger and older listeners when listening was difficult, but not when it was easy. However, the contribution of reading comprehension to listening comprehension appeared to be independent of listening difficulty in younger adults but not in older adults. The evidence suggests (1) that most of the difficulties experienced by older adults are due to age-related auditory declines, and (2) that these declines, along with listening difficulty, modulate the degree to which selective linguistic and cognitive abilities are engaged to support listening comprehension in difficult listening situations. When older listeners experience speech recognition difficulties, their attentional resources are more likely to be deployed to

  14. SPOKEN CORPORA: RATIONALE AND APPLICATION

    Directory of Open Access Journals (Sweden)

    John Newman

    2008-12-01

    Despite the abundance of electronic corpora now available to researchers, corpora of natural speech are still relatively rare and relatively costly. This paper suggests reasons why spoken corpora are needed, despite the formidable problems of construction. The multiple purposes of such corpora and the involvement of very different kinds of language communities in such projects mean that there is no one single blueprint for the design, markup, and distribution of spoken corpora. A number of different spoken corpora are reviewed to illustrate a range of possibilities for the construction of spoken corpora.

  15. Cross-Sensory Correspondences and Symbolism in Spoken and Written Language

    Science.gov (United States)

    Walker, Peter

    2016-01-01

    Lexical sound symbolism in language appears to exploit the feature associations embedded in cross-sensory correspondences. For example, words incorporating relatively high acoustic frequencies (i.e., front/close rather than back/open vowels) are deemed more appropriate as names for concepts associated with brightness, lightness in weight,…

  16. Changes to English as an Additional Language Writers' Research Articles: From Spoken to Written Register

    Science.gov (United States)

    Koyalan, Aylin; Mumford, Simon

    2011-01-01

    The process of writing journal articles is increasingly being seen as a collaborative process, especially where the authors are English as an Additional Language (EAL) academics. This study examines the changes made in terms of register to EAL writers' journal articles by a native-speaker writing centre advisor at a private university in Turkey…

  17. Parallel language activation and cognitive control during spoken word recognition in bilinguals

    Science.gov (United States)

    Blumenfeld, Henrike K.; Marian, Viorica

    2013-01-01

    Accounts of bilingual cognitive advantages suggest an associative link between cross-linguistic competition and inhibitory control. We investigate this link by examining English-Spanish bilinguals’ parallel language activation during auditory word recognition and nonlinguistic Stroop performance. Thirty-one English-Spanish bilinguals and 30 English monolinguals participated in an eye-tracking study. Participants heard words in English (e.g., comb) and identified corresponding pictures from a display that included pictures of a Spanish competitor (e.g., conejo, English rabbit). Bilinguals with higher Spanish proficiency showed more parallel language activation and smaller Stroop effects than bilinguals with lower Spanish proficiency. Across all bilinguals, stronger parallel language activation between 300–500ms after word onset was associated with smaller Stroop effects; between 633–767ms, reduced parallel language activation was associated with smaller Stroop effects. Results suggest that bilinguals who perform well on the Stroop task show increased cross-linguistic competitor activation during early stages of word recognition and decreased competitor activation during later stages of word recognition. Findings support the hypothesis that cross-linguistic competition impacts domain-general inhibition. PMID:24244842
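
    The two quantities being related in this study reduce to a fixation proportion within a time window and an RT difference. A schematic of both, with hypothetical arrays standing in for the eye-tracking samples and Stroop reaction times; the sampling rate and window follow the 300-500 ms analysis mentioned above, everything else is a placeholder.

```python
import numpy as np

def competitor_fixation(fix, sfreq=250, window=(0.300, 0.500)):
    """fix: (n_trials, n_samples) booleans, True when gaze is on the
    cross-linguistic competitor picture (e.g. 'conejo' while hearing
    'comb'), time-locked to word onset. Returns the mean fixation
    proportion within `window` (seconds)."""
    start, stop = (int(t * sfreq) for t in window)
    return fix[:, start:stop].mean()

def stroop_effect(rt_incongruent, rt_congruent):
    """Classic Stroop effect: incongruent minus congruent mean RT (ms)."""
    return np.mean(rt_incongruent) - np.mean(rt_congruent)

rng = np.random.default_rng(7)
fix = rng.random(size=(60, 500)) < 0.2   # hypothetical gaze samples, 2 s trials
print(competitor_fixation(fix))
print(stroop_effect(rng.normal(720, 40, 50), rng.normal(680, 40, 50)))
```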

  18. Assessing Spoken Language Competence in Children with Selective Mutism: Using Parents as Test Presenters

    Science.gov (United States)

    Klein, Evelyn R.; Armstrong, Sharon Lee; Shipon-Blum, Elisa

    2013-01-01

    Children with selective mutism (SM) display a failure to speak in select situations despite speaking when comfortable. The purpose of this study was to obtain valid assessments of receptive and expressive language in 33 children (ages 5 to 12) with SM. Because some children with SM will speak to parents but not a professional, another purpose was…

  19. Primary Spoken Language and Neuraxial Labor Analgesia Use Among Hispanic Medicaid Recipients.

    Science.gov (United States)

    Toledo, Paloma; Eosakul, Stanley T; Grobman, William A; Feinglass, Joe; Hasnain-Wynia, Romana

    2016-01-01

    Hispanic women are less likely than non-Hispanic Caucasian women to use neuraxial labor analgesia. It is unknown whether there is a disparity in anticipated or actual use of neuraxial labor analgesia among Hispanic women based on primary language (English versus Spanish). In this 3-year retrospective, single-institution, cross-sectional study, we extracted electronic medical record data on Hispanic nulliparous women with vaginal deliveries who were insured by Medicaid. On admission, patients self-identified their primary language and anticipated analgesic use for labor. Extracted data included age, marital status, labor type, delivery provider (obstetrician or midwife), and anticipated and actual analgesic use. Household income was estimated from census data geocoded by zip code. Multivariable logistic regression models were estimated for anticipated and actual neuraxial analgesia use. Among 932 Hispanic women, 182 self-identified as primary Spanish speakers. Spanish-speaking Hispanic women were less likely to anticipate and use neuraxial anesthesia than English-speaking women. After controlling for confounders, there was an association between primary language and anticipated neuraxial analgesia use (adjusted relative risk: Spanish- versus English-speaking women, 0.70; 97.5% confidence interval, 0.53-0.92). Similarly, there was an association between language and neuraxial analgesia use (adjusted relative risk: Spanish- versus English-speaking women, 0.88; 97.5% confidence interval, 0.78-0.99). The use of a midwife compared with an obstetrician also decreased the likelihood of both anticipating and using neuraxial analgesia. A language-based disparity was found in neuraxial labor analgesia use. It is possible that there are communication barriers in knowledge or understanding of analgesic options. Further research is necessary to determine the cause of this association.
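
    For readers unfamiliar with adjusted relative risks, estimates like those above are often obtained with a modified Poisson regression (log link, robust standard errors), a standard approach for binary outcomes. The Python/statsmodels sketch below uses invented data and column names, not the study's data; the 97.5% interval matches the interval width reported in the record.

        import numpy as np
        import pandas as pd
        import statsmodels.api as sm
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(1)
        n = 932
        df = pd.DataFrame({
            "used_neuraxial": rng.integers(0, 2, n),   # binary outcome
            "spanish_primary": rng.integers(0, 2, n),  # primary-language indicator
            "midwife": rng.integers(0, 2, n),          # delivery provider
            "age": rng.normal(25, 5, n),
        })

        # Modified Poisson model: exponentiated coefficients are relative risks.
        fit = smf.glm(
            "used_neuraxial ~ spanish_primary + midwife + age",
            data=df,
            family=sm.families.Poisson(),
        ).fit(cov_type="HC0")

        rr = np.exp(fit.params["spanish_primary"])
        lo, hi = np.exp(fit.conf_int(alpha=0.025).loc["spanish_primary"])
        print(f"adjusted RR = {rr:.2f} (97.5% CI {lo:.2f}-{hi:.2f})")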

  20. Emergent Literacy Skills in Preschool Children With Hearing Loss Who Use Spoken Language: Initial Findings From the Early Language and Literacy Acquisition (ELLA) Study.

    Science.gov (United States)

    Werfel, Krystal L

    2017-10-05

    The purpose of this study was to compare change in emergent literacy skills of preschool children with and without hearing loss over a 6-month period. Participants included 19 children with hearing loss and 14 children with normal hearing. Children with hearing loss used amplification and spoken language. Participants completed measures of oral language, phonological processing, and print knowledge twice at a 6-month interval. A series of repeated-measures analyses of variance were used to compare change across groups. Main effects of time were observed for all variables except phonological recoding. Main effects of group were observed for vocabulary, morphosyntax, phonological memory, and concepts of print. Interaction effects were observed for phonological awareness and concepts of print: on these measures, children with hearing loss demonstrated less positive change than children with normal hearing. Children with hearing loss also performed more poorly than children with normal hearing on measures of oral language, phonological memory, and conceptual print knowledge. Although children with hearing loss generally demonstrated positive growth in emergent literacy skills, their initial performance was lower than that of children with normal hearing, and their rates of change were not sufficient to catch up to their peers over time.
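
    The group-by-time design described above maps onto a standard mixed repeated-measures ANOVA. Here is a minimal Python sketch using pingouin's mixed_anova, with one between-subject factor (group) and one within-subject factor (time); the data, effect sizes, and column names are fabricated, not the ELLA data.

        import numpy as np
        import pandas as pd
        import pingouin as pg

        rng = np.random.default_rng(2)
        rows = []
        for i in range(33):                        # 19 HL + 14 NH children
            group = "HL" if i < 19 else "NH"
            base = 82 if group == "HL" else 90     # arbitrary group means
            for time in ("t1", "t2"):
                rows.append({
                    "child": f"c{i}",
                    "group": group,
                    "time": time,
                    "score": rng.normal(base + (5 if time == "t2" else 0), 8),
                })
        df = pd.DataFrame(rows)

        # Mixed ANOVA: main effects of group and time, plus their interaction.
        aov = pg.mixed_anova(data=df, dv="score", within="time",
                             subject="child", between="group")
        print(aov[["Source", "F", "p-unc"]])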

  1. Grammatical number processing and anticipatory eye movements are not tightly coordinated in English spoken language comprehension

    Directory of Open Access Journals (Sweden)

    Brian eRiordan

    2015-05-01

    Full Text Available Recent studies of eye movements in world-situated language comprehension have demonstrated that rapid processing of morphosyntactic information – e.g., grammatical gender and number marking – can produce anticipatory eye movements to referents in the visual scene. We investigated how type of morphosyntactic information and the goals of language users in comprehension affected eye movements, focusing on the processing of grammatical number morphology in English-speaking adults. Participants' eye movements were recorded as they listened to simple English declarative (There are the lions.) and interrogative (Where are the lions?) sentences. In Experiment 1, no differences were observed in speed to fixate target referents when grammatical number information was informative relative to when it was not. The same result was obtained in a speeded task (Experiment 2) and in a task using mixed sentence types (Experiment 3). We conclude that grammatical number processing in English and eye movements to potential referents are not tightly coordinated. These results suggest limits on the role of predictive eye movements in concurrent linguistic and scene processing. We discuss how these results can inform and constrain predictive approaches to language processing.

  2. Utility of spoken dialog systems

    CSIR Research Space (South Africa)

    Barnard, E

    2008-12-01

    Full Text Available The commercial successes of spoken dialog systems in the developed world provide encouragement for their use in the developing world, where speech could play a role in the dissemination of relevant information in local languages. We investigate...

  3. Oral narrative context effects on poor readers' spoken language performance: story retelling, story generation, and personal narratives.

    Science.gov (United States)

    Westerveld, Marleen F; Gillon, Gail T

    2010-04-01

    This investigation explored the effects of oral narrative elicitation context on children's spoken language performance. Oral narratives were produced by a group of 11 children with reading disability (aged between 7;11 and 9;3) and an age-matched control group of 11 children with typical reading skills in three different contexts: story retelling, story generation, and personal narratives. In the story retelling condition, the children listened to a story on tape while looking at the pictures in a book, before being asked to retell the story without the pictures. In the story generation context, the children were shown a picture containing a scene and were asked to make up their own story. Personal narratives were elicited with the help of photos and short narrative prompts. The transcripts were analysed at microstructure level on measures of verbal productivity, semantic diversity, and morphosyntax. Consistent with previous research, the results revealed no significant interactions between group and context, indicating that the two groups of children responded to the type of elicitation context in a similar way. There was a significant group effect, however, with the typical readers showing better performance overall on measures of morphosyntax and semantic diversity. There was also a significant effect of elicitation context with both groups of children producing the longest, linguistically most dense language samples in the story retelling context. Finally, the most significant differences in group performance were observed in the story retelling condition, with the typical readers outperforming the poor readers on measures of verbal productivity, number of different words, and percent complex sentences. The results from this study confirm that oral narrative samples can distinguish between good and poor readers and that the story retelling condition may be a particularly useful context for identifying strengths and weaknesses in oral narrative performance.

  4. Human inferior colliculus activity relates to individual differences in spoken language learning.

    Science.gov (United States)

    Chandrasekaran, Bharath; Kraus, Nina; Wong, Patrick C M

    2012-03-01

    A challenge to learning words of a foreign language is encoding nonnative phonemes, a process typically attributed to cortical circuitry. Using multimodal imaging methods [functional magnetic resonance imaging-adaptation (fMRI-A) and auditory brain stem responses (ABR)], we examined the extent to which pretraining pitch encoding in the inferior colliculus (IC), a primary midbrain structure, related to individual variability in learning to successfully use nonnative pitch patterns to distinguish words in American English-speaking adults. fMRI-A indexed the efficiency of pitch representation localized to the IC, whereas ABR quantified midbrain pitch-related activity with millisecond precision. In line with neural "sharpening" models, we found that efficient IC pitch pattern representation (indexed by fMRI) related to superior neural representation of pitch patterns (indexed by ABR), and consequently more successful word learning following sound-to-meaning training. Our results establish a critical role for the IC in speech-sound representation, consistent with the established role for the IC in the representation of communication signals in other animal models.

  5. Influence of family environment on language outcomes in children with myelomeningocele.

    Science.gov (United States)

    Vachha, B; Adams, R

    2005-09-01

    Previously, our studies demonstrated language differences impacting academic performance among children with myelomeningocele and shunted hydrocephalus (MMSH). This follow-up study considers the environmental facilitators within families (achievement orientation, intellectual-cultural orientation, active recreational orientation, independence) among a cohort of children with MMSH and their relationship to language performance. Fifty-eight monolingual, English-speaking children (36 females; mean age: 10.1 years; age range: 7-16 years) with MMSH were evaluated. Exclusionary criteria were prior shunt infection; seizure or shunt malfunction within the previous 3 months; uncorrected visual or auditory impairments; prior diagnoses of mental retardation or attention deficit disorder. The Comprehensive Assessment of Spoken Language (CASL) and the Wechsler Abbreviated Scale of Intelligence (WASI) were administered individually to all participants. The CASL measures four subsystems: lexical, syntactic, supralinguistic and pragmatic. Parents completed the Family Environment Scale (FES) questionnaire and provided background demographic information. Spearman correlation analyses and partial correlation analyses were performed. Mean intelligence scores for the MMSH group: full scale IQ 92.2 (SD = 11.9). The CASL revealed statistically significant difficulty for supralinguistic and pragmatic (or social) language tasks. FES scores fell within the average range for the group. Spearman correlation and partial correlation analyses revealed statistically significant positive relationships for the FES 'intellectual-cultural orientation' variable and performance within the four language subsystems. Socio-economic status (SES) characteristics were analyzed and did not discriminate language performance when the intellectual-cultural orientation factor was taken into account. The role of family facilitators on language skills in children with MMSH has not previously been described. The…

  6. Phonological processing of rhyme in spoken language and location in sign language by deaf and hearing participants: a neurophysiological study.

    Science.gov (United States)

    Colin, C; Zuinen, T; Bayard, C; Leybaert, J

    2013-06-01

    Sign languages (SL), like oral languages (OL), organize elementary, meaningless units into meaningful semantic units. Our aim was to compare, at behavioral and neurophysiological levels, the processing of the location parameter in French Belgian SL to that of the rhyme in oral French. Ten hearing and 10 profoundly deaf adults performed a rhyme judgment task in OL and a similarity judgment on location in SL. Stimuli were pairs of pictures. As regards OL, deaf subjects' performances, although above chance level, were significantly lower than those of hearing subjects, suggesting that a metaphonological analysis is possible for deaf people but rests on phonological representations that are less precise than in hearing people. As regards SL, deaf subjects' scores indicated that a metaphonological judgment may be performed on location. The contingent negative variation (CNV) evoked by the first picture of a pair was similar in hearing subjects in OL and in deaf subjects in OL and SL. However, an N400 evoked by the second picture of the non-rhyming pairs was evidenced only in hearing subjects in OL. The absence of an N400 in deaf subjects may be interpreted as the failure to associate two words according to their rhyme in OL or to their location in SL. Although deaf participants can perform metaphonological judgments in OL, they differ from hearing participants both behaviorally and in ERP. Judgment of location in SL is possible for deaf signers, but, contrary to rhyme judgment in hearing participants, does not elicit any N400. Copyright © 2013 Elsevier Masson SAS. All rights reserved.

  7. Early use of orthographic information in spoken word recognition: Event-related potential evidence from the Korean language.

    Science.gov (United States)

    Kwon, Youan; Choi, Sungmook; Lee, Yoonhyoung

    2016-04-01

    This study examines whether orthographic information is used during prelexical processes in spoken word recognition by investigating ERPs during spoken word processing for Korean words. Differential effects due to orthographic syllable neighborhood size and sound-to-spelling consistency on P200 and N320 were evaluated by recording ERPs from 42 participants during a lexical decision task. The results indicate that the P200 was smaller for words with large orthographic syllable neighborhoods than for words with small ones. In addition, a word with a large orthographic syllable neighborhood elicited a smaller N320 effect than a word with a small orthographic syllable neighborhood only when the word had inconsistent sound-to-spelling mapping. The results provide support for the assumption that orthographic information is used early during the prelexical spoken word recognition process. © 2015 Society for Psychophysiological Research.

  8. A randomized trial comparison of the effects of verbal and pictorial naturalistic communication strategies on spoken language for young children with autism.

    Science.gov (United States)

    Schreibman, Laura; Stahmer, Aubyn C

    2014-05-01

    Presently there is no consensus on the specific behavioral treatment of choice for targeting language in young nonverbal children with autism. This randomized clinical trial compared the effectiveness of a verbally-based intervention, Pivotal Response Training (PRT) to a pictorially-based behavioral intervention, the Picture Exchange Communication System (PECS) on the acquisition of spoken language by young (2-4 years), nonverbal or minimally verbal (≤9 words) children with autism. Thirty-nine children were randomly assigned to either the PRT or PECS condition. Participants received on average 247 h of intervention across 23 weeks. Dependent measures included overall communication, expressive vocabulary, pictorial communication and parent satisfaction. Children in both intervention groups demonstrated increases in spoken language skills, with no significant difference between the two conditions. Seventy-eight percent of all children exited the program with more than 10 functional words. Parents were very satisfied with both programs but indicated PECS was more difficult to implement.

  9. Children’s recall of words spoken in their first and second language:Effects of signal-to-noise ratio and reverberation time

    Directory of Open Access Journals (Sweden)

    Anders eHurtig

    2016-01-01

    Full Text Available Speech perception runs smoothly and automatically when there is silence in the background, but when the speech signal is degraded by background noise or by reverberation, effortful cognitive processing is needed to compensate for the signal distortion. Previous research has typically investigated the effects of signal-to-noise ratio (SNR) and reverberation time in isolation, whilst few have looked at their interaction. In this study, we probed how reverberation time and SNR influence recall of words presented in participants' first (L1) and second (L2) language. A total of 72 children (10 years old) participated in this study. The to-be-recalled wordlists were played back with two different reverberation times (0.3 and 1.2 sec) crossed with two different SNRs (+3 dBA and +12 dBA). Children recalled fewer words when the spoken words were presented in L2 in comparison with recall of spoken words presented in L1. Words that were presented with a high SNR (+12 dBA) improved recall compared to a low SNR (+3 dBA). Reverberation time interacted with SNR to the effect that at +12 dB the shorter reverberation time improved recall, but at +3 dB it impaired recall. The effects of the physical sound variables (SNR and reverberation time) did not interact with language.

  10. User guidelines and best practices for CASL VUQ analysis using Dakota.

    Energy Technology Data Exchange (ETDEWEB)

    Adams, Brian M. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Swiler, Laura Painton [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Hooper, Russell [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Lewis, Allison [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); McMahan, Jerry A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Smith, Ralph C. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Williams, Brian J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2014-03-01

    Sandia's Dakota software (available at http://dakota.sandia.gov) supports science and engineering transformation through advanced exploration of simulations. Specifically it manages and analyzes ensembles of simulations to provide broader and deeper perspective for analysts and decision makers. This enables them to enhance understanding of risk, improve products, and assess simulation credibility. This manual offers Consortium for Advanced Simulation of Light Water Reactors (LWRs) (CASL) partners a guide to conducting Dakota-based VUQ studies for CASL problems. It motivates various classes of Dakota methods and includes examples of their use on representative application problems. On reading, a CASL analyst should understand why and how to apply Dakota to a simulation problem. This SAND report constitutes the product of CASL milestone L3:VUQ.V&V.P8.01 and is also being released as a CASL unlimited release report with number CASL-U-2014-0038-000.
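
    To give a flavor of what such a Dakota-based study looks like, here is a minimal input-file sketch in Dakota's native keyword format for a Latin hypercube sampling study. The driver script, variable names, and bounds are invented for illustration and are not taken from the CASL milestone or the guide itself.

        # Minimal Dakota study: 100 LHS samples propagated through a simulation.
        environment
          tabular_data
            tabular_data_file = 'lhs_samples.dat'

        method
          sampling
            sample_type lhs
            samples = 100
            seed = 12345

        variables
          uniform_uncertain = 2
            descriptors   'inlet_temperature'  'boron_concentration'
            lower_bounds    550.0                 800.0
            upper_bounds    600.0                1200.0

        interface
          fork
            analysis_drivers = 'run_case.sh'   # hypothetical wrapper around the simulation
            parameters_file  = 'params.in'
            results_file     = 'results.out'

        responses
          response_functions = 1
          descriptors = 'peak_output'
          no_gradients
          no_hessians

    Dakota runs the driver once per sample and tabulates the responses, which can then feed the sensitivity and uncertainty analyses the guide describes.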

  11. Language spoken at home and the association between ethnicity and doctor-patient communication in primary care: analysis of survey data for South Asian and White British patients.

    Science.gov (United States)

    Brodie, Kara; Abel, Gary; Burt, Jenni

    2016-03-03

    To investigate if language spoken at home mediates the relationship between ethnicity and doctor-patient communication for South Asian and White British patients. We conducted secondary analysis of patient experience survey data collected from 5870 patients across 25 English general practices. Mixed effect linear regression estimated the difference in composite general practitioner-patient communication scores between White British and South Asian patients, controlling for practice, patient demographics and patient language. There was strong evidence of an association between doctor-patient communication scores and ethnicity. South Asian patients reported scores averaging 3.0 percentage points lower (scale of 0-100) than White British patients (95% CI -4.9 to -1.1, p=0.002). This difference reduced to 1.4 points (95% CI -3.1 to 0.4) after accounting for speaking a non-English language at home; respondents who spoke a non-English language at home reported lower scores than English-speakers (adjusted difference 3.3 points, 95% CI -6.4 to -0.2). South Asian patients rate communication lower than White British patients within the same practices and with similar demographics. Our analysis further shows that this disparity is largely mediated by language. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/
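
    As a concrete illustration of the model described above, here is a minimal Python/statsmodels sketch of a mixed-effect linear regression with a random intercept per general practice. All data and column names are invented, not the survey data.

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(3)
        n = 5870
        df = pd.DataFrame({
            "comm_score": rng.normal(85, 10, n),        # 0-100 communication score
            "south_asian": rng.integers(0, 2, n),       # ethnicity indicator
            "non_english_home": rng.integers(0, 2, n),  # home-language indicator
            "age_band": rng.integers(1, 7, n),
            "practice": rng.integers(0, 25, n),         # 25 general practices
        })

        # Random intercept for practice; fixed effects for patient covariates.
        # Comparing fits with and without non_english_home shows how much of the
        # ethnicity coefficient is mediated by home language.
        fit = smf.mixedlm(
            "comm_score ~ south_asian + non_english_home + age_band",
            data=df,
            groups=df["practice"],
        ).fit()
        print(fit.summary())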

  12. The Relationship between Intrinsic Couplings of the Visual Word Form Area with Spoken Language Network and Reading Ability in Children and Adults

    Directory of Open Access Journals (Sweden)

    Yu Li

    2017-06-01

    Full Text Available Reading plays a key role in education and communication in modern society. Learning to read establishes the connections between the visual word form area (VWFA) and language areas responsible for speech processing. Using resting-state functional connectivity (RSFC) and Granger Causality Analysis (GCA) methods, the current developmental study aimed to identify the difference in the relationship between the connections of VWFA-language areas and reading performance in both adults and children. The results showed that: (1) the spontaneous connectivity between VWFA and the spoken language areas, i.e., the left inferior frontal gyrus/supramarginal gyrus (LIFG/LSMG), was stronger in adults compared with children; (2) the spontaneous functional patterns of connectivity between VWFA and language network were negatively correlated with reading ability in adults but not in children; (3) the causal influence from LIFG to VWFA was negatively correlated with reading ability in adults but not in children; (4) the RSFCs between left posterior middle frontal gyrus (LpMFG) and VWFA/LIFG were positively correlated with reading ability in both adults and children; and (5) the causal influence from LIFG to LSMG was positively correlated with reading ability in both groups. These findings provide insights into the relationship between VWFA and the language network for reading, and the role of the unique features of Chinese in the neural circuits of reading.
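
    The Granger Causality Analysis used here asks whether one region's time series helps predict another's beyond that region's own history. A minimal Python sketch with statsmodels follows, on synthetic time series standing in for LIFG and VWFA signals; it is an illustration of the technique, not the authors' pipeline.

        import numpy as np
        import pandas as pd
        from statsmodels.tsa.stattools import grangercausalitytests

        rng = np.random.default_rng(4)
        n = 240                                  # e.g., number of fMRI volumes
        lifg = rng.normal(size=n)
        # Make VWFA lag LIFG by two samples plus noise, so LIFG "Granger-causes" VWFA.
        vwfa = 0.5 * np.roll(lifg, 2) + rng.normal(scale=0.5, size=n)

        data = pd.DataFrame({"vwfa": vwfa, "lifg": lifg})
        # Tests whether the second column (lifg) Granger-causes the first (vwfa).
        grangercausalitytests(data[["vwfa", "lifg"]], maxlag=3)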

  13. Permissive Subsorted Partial Logic in CASL

    DEFF Research Database (Denmark)

    Cerioli, Maura; Haxthausen, Anne Elisabeth; Krieg-Brückner, Bernd

    1997-01-01

    This paper presents a permissive subsorted partial logic used in the CoFI Algebraic Specification Language. In contrast to other order-sorted logics, subsorting is not modeled by set inclusions, but by injective embeddings allowing for more general models in which subtypes can have different data…

  14. Contribution of Spoken Language and Socio-Economic Background to Adolescents' Educational Achievement at Age 16 Years

    Science.gov (United States)

    Spencer, Sarah; Clegg, Judy; Stackhouse, Joy; Rush, Robert

    2017-01-01

    Background: Well-documented associations exist between socio-economic background and language ability in early childhood, and between educational attainment and language ability in children with clinically referred language impairment. However, very little research has looked at the associations between language ability, educational attainment and…

  15. How appropriate are the English language test requirements for non-UK-trained nurses? A qualitative study of spoken communication in UK hospitals.

    Science.gov (United States)

    Sedgwick, Carole; Garner, Mark

    2017-06-01

    Non-native speakers of English who hold nursing qualifications from outside the UK are required to provide evidence of English language competence by achieving a minimum overall score of Band 7 on the International English Language Testing System (IELTS) academic test. To describe the English language required to deal with the daily demands of nursing in the UK. To compare these abilities with the stipulated levels on the language test. A tracking study was conducted with 4 nurses, and focus groups with 11 further nurses. The transcripts of the interviews and focus groups were analysed thematically for recurrent themes. These findings were then compared with the requirements of the IELTS spoken test. The study was conducted outside the participants' working shifts in busy London hospitals. The participants in the tracking study were selected opportunistically; all were trained in non-English speaking countries. Snowball sampling was used for the focus groups, of whom 4 were non-native and 7 native speakers of English. In the tracking study, each of the 4 nurses was interviewed on four occasions, outside the workplace, and as close to the end of a shift as possible. They were asked to recount their spoken interactions during the course of their shift. The participants in the focus groups were asked to describe their typical interactions with patients, family members, doctors, and nursing colleagues. They were prompted to recall specific instances of frequently-occurring communication problems. All interactions were audio-recorded, with the participants' permission, and transcribed. Nurses are at the centre of communication for patient care. They have to use appropriate registers to communicate with a range of health professionals, patients and their families. They must elicit information, calm and reassure, instruct, check procedures, ask for and give opinions, agree and disagree. Politeness strategies are needed to avoid threats to face. They participate in medical…

  16. Phonological awareness development in children with and without spoken language difficulties: A 12-month longitudinal study of German-speaking pre-school children.

    Science.gov (United States)

    Schaefer, Blanca; Stackhouse, Joy; Wells, Bill

    2017-10-01

    There is strong empirical evidence that English-speaking children with spoken language difficulties (SLD) often have phonological awareness (PA) deficits. The aim of this study was to explore longitudinally if this is also true of pre-school children speaking German, a language that makes extensive use of derivational morphemes which may impact on the acquisition of different PA levels. Thirty 4-year-old children with SLD were assessed on 11 PA subtests at three points over a 12-month period and compared with 97 four-year-old typically developing (TD) children. The TD-group had a mean percentage correct of over 50% for the majority of tasks (including phoneme tasks) and their PA skills developed significantly over time. In contrast, the SLD-group improved their PA performance over time on syllable and rhyme, but not on phoneme level tasks. Group comparisons revealed that children with SLD had weaker PA skills, particularly on phoneme level tasks. The study contributes a longitudinal perspective on PA development before school entry. In line with their English-speaking peers, German-speaking children with SLD showed poorer PA skills than TD peers, indicating that the relationship between SLD and PA is similar across these two related but different languages.

  17. Distinguish Spoken English from Written English: Rich Feature Analysis

    Science.gov (United States)

    Tian, Xiufeng

    2013-01-01

    This article analyses the features of four expository essays (Texts A/B/C/D) written by secondary school students, with a focus on the differences between spoken and written language. Texts C and D are better written than the other two (Texts A & B), which are considered more spoken in their language use. The language features are…

  18. Reply to David Kemmerer's "a critique of Mark D. Allen's 'the preservation of verb subcategory knowledge in a spoken language comprehension deficit'".

    Science.gov (United States)

    Allen, Mark D; Owens, Tyler E

    2008-07-01

    Allen [Allen, M. D. (2005). The preservation of verb subcategory knowledge in a spoken language comprehension deficit. Brain and Language, 95, 255-264] presents evidence from a single patient, WBN, to motivate a theory of lexical processing and representation in which syntactic information may be encoded and retrieved independently of semantic information. In his critique, Kemmerer argues that because Allen depended entirely on preposition-based verb subcategory violations to test WBN's knowledge of correct argument structure, his results, at best, address a "strawman" theory. This argument rests on the assumption that preposition subcategory options are superficial syntactic phenomena which are not represented by argument structure proper. We demonstrate that preposition subcategory is in fact treated as semantically determined argument structure in the theories that Allen evaluated, and thus far from irrelevant. In further discussion of grammatically relevant versus irrelevant semantic features, Kemmerer offers a review of his own studies. However, due to an important design shortcoming in these experiments, we remain unconvinced. Reemphasizing the fact that Allen (2005) never claimed to rule out all semantic contributions to syntax, we propose an improvement in Kemmerer's approach that might provide more satisfactory evidence on the distinction between the kinds of relevant versus irrelevant features his studies have addressed.

  19. Age and amount of exposure to a foreign language during childhood: behavioral and ERP data on the semantic comprehension of spoken English by Japanese children.

    Science.gov (United States)

    Ojima, Shiro; Matsuba-Kurita, Hiroko; Nakamura, Naoko; Hoshino, Takahiro; Hagiwara, Hiroko

    2011-06-01

    Children's foreign-language (FL) learning is a matter of much social as well as scientific debate. Previous behavioral research indicates that starting language learning late in life can lead to problems in phonological processing. Inadequate phonological capacity may impede lexical learning and semantic processing (phonological bottleneck hypothesis). Using both behavioral and neuroimaging data, here we examine the effects of age of first exposure (AOFE) and total hours of exposure (HOE) to English, on 350 Japanese primary-school children's semantic processing of spoken English. Children's English proficiency scores and N400 event-related brain potentials (ERPs) were analyzed in multiple regression analyses. The results showed (1) that later, rather than earlier, AOFE led to higher English proficiency and larger N400 amplitudes, when HOE was controlled for; and (2) that longer HOE led to higher English proficiency and larger N400 amplitudes, whether AOFE was controlled for or not. These data highlight the important role of amount of exposure in FL learning, and cast doubt on the view that starting FL learning earlier always produces better results. Copyright © 2011 Elsevier Ireland Ltd and the Japan Neuroscience Society. All rights reserved.
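
    "Controlling for" one exposure variable while estimating the effect of the other, as in this record, is ordinary multiple regression. A minimal Python sketch with statsmodels follows; the data are fabricated and the effect sizes arbitrary, chosen only to illustrate the analysis, not the study's findings.

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(5)
        n = 350
        df = pd.DataFrame({
            "aofe_years": rng.uniform(3, 11, n),    # age of first exposure
            "hoe_hours": rng.uniform(50, 2000, n),  # total hours of exposure
        })
        # Arbitrary generative model for the outcome (e.g., N400 amplitude).
        df["n400_uv"] = 0.1 * df["aofe_years"] + 0.002 * df["hoe_hours"] + rng.normal(0, 1, n)

        # Each coefficient is the effect of one predictor with the other held fixed.
        fit = smf.ols("n400_uv ~ aofe_years + hoe_hours", data=df).fit()
        print(fit.params)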

  20. A Mother Tongue Spoken Mainly by Fathers.

    Science.gov (United States)

    Corsetti, Renato

    1996-01-01

    Reviews what is known about Esperanto as a home language and first language. Recorded cases of Esperanto-speaking families are known since 1919, and in nearly all of the approximately 350 families documented, the language is spoken to the children by the father. The data suggests that this "artificial bilingualism" can be as successful…

  1. Quarterly Data for Spoken Language Preferences of Supplemental Security Income Blind and Disabled Applicants (2014-2015)

    Data.gov (United States)

    Social Security Administration — This data set provides quarterly volumes for language preferences at the national level of individuals filing claims for SSI Blind and Disabled benefits for fiscal...

  2. Social Security Administration - Quarterly Data for Spoken Language Preferences of Supplemental Security Income Blind and Disabled Applicants (2016-onwards)

    Data.gov (United States)

    Social Security Administration — This data set provides quarterly volumes for language preferences at the national level of individuals filing claims for SSI Blind and Disabled benefits from fiscal...

  3. Influence of Implantation Age on School-Age Language Performance in Pediatric Cochlear Implant Users

    Science.gov (United States)

    Tobey, Emily A.; Thal, Donna; Niparko, John K.; Eisenberg, Laurie S.; Quittner, Alexandra L.; Wang, Nae-Yuh

    2013-01-01

    Objective This study examined specific spoken language abilities of 160 children with severe-to-profound sensorineural hearing loss followed prospectively 4, 5, or 6 years after cochlear implantation. Study sample Ninety-eight children received implants before 2.5 years, and 62 children received implants between 2.5 and 5 years of age. Design Language was assessed using four subtests of the Comprehensive Assessment of Spoken Language (CASL). Standard scores were evaluated by contrasting age of implantation and follow-up test time. Results Children implanted under 2.5 years of age achieved higher standard scores than children with older ages of implantation for expressive vocabulary, expressive syntax, and pragmatic judgments. However, in both groups, some children performed more than two standard deviations below the standardization group mean, while some scored at or well above the mean. Conclusions Younger ages of implantation are associated with higher levels of performance, while later ages of implantation are associated with higher probabilities of continued language delays, particularly within subdomains of grammar and pragmatics. Longitudinal data from this cohort study demonstrate that after 6 years of implant experience, there is large variability in language outcomes associated with modifiers of rates of language learning that differ as children with implants age. PMID:23448124

  4. Fast mapping semantic features: performance of adults with normal language, history of disorders of spoken and written language, and attention deficit hyperactivity disorder on a word-learning task.

    Science.gov (United States)

    Alt, Mary; Gutmann, Michelle L

    2009-01-01

    This study was designed to test the word learning abilities of adults with typical language abilities, those with a history of disorders of spoken or written language (hDSWL), and hDSWL plus attention deficit hyperactivity disorder (+ADHD). Sixty-eight adults were required to associate a novel object with a novel label, and then recognize semantic features of the object and phonological features of the label. Participants were tested for overt ability (accuracy) and covert processing (reaction time). The +ADHD group was less accurate at mapping semantic features and slower to respond to lexical labels than both other groups. Different factors correlated with word learning performance for each group. Adults with language and attention deficits are more impaired at word learning than adults with language deficits only. Despite behavioral profiles like typical peers, adults with hDSWL may use different processing strategies than their peers. Readers will be able to: (1) recognize the influence of a dual disability (hDSWL and ADHD) on word learning outcomes; (2) identify factors that may contribute to word learning in adults in terms of (a) the nature of the words to be learned and (b) the language processing of the learner.

  5. Spoken word recognition in young tone language learners: Age-dependent effects of segmental and suprasegmental variation.

    Science.gov (United States)

    Ma, Weiyi; Zhou, Peng; Singh, Leher; Gao, Liqun

    2017-02-01

    The majority of the world's languages rely on both segmental (vowels, consonants) and suprasegmental (lexical tones) information to contrast the meanings of individual words. However, research on early language development has mostly focused on the acquisition of vowel-consonant languages. Developmental research comparing sensitivity to segmental and suprasegmental features in young tone learners is extremely rare. This study examined 2- and 3-year-old monolingual tone learners' sensitivity to vowels and tones. Experiment 1a tested the influence of vowel and tone variation on novel word learning. Vowel and tone variation hindered word recognition efficiency in both age groups. However, tone variation hindered word recognition accuracy only in 2-year-olds, while 3-year-olds were insensitive to tone variation. Experiment 1b demonstrated that 3-year-olds could use tones to learn new words when additional support was provided, and additionally, that Tone 3 words were exceptionally difficult to learn. Experiment 2 confirmed a similar pattern of results when children were presented with familiar words. This study is the first to show that despite the importance of tones in tone languages, vowels maintain primacy over tones in young children's word recognition and that tone sensitivity in word learning and recognition changes between 2 and 3 years of age. The findings suggest that early lexical processes are more tightly constrained by variation in vowels than by tones. Copyright © 2016 Elsevier B.V. All rights reserved.

  6. Utah State University: Cross-Discipline Training through the Graduate Studies Program in Auditory Learning & Spoken Language

    Science.gov (United States)

    Houston, K. Todd

    2010-01-01

    Since 1946, Utah State University (USU) has offered specialized coursework in audiology and speech-language pathology, awarding the first graduate degrees in 1948. In 1965, the teacher training program in deaf education was launched. Over the years, the Department of Communicative Disorders and Deaf Education (COMD-DE) has developed a rich history…

  7. A Pilot Study of Telepractice for Teaching Listening and Spoken Language to Mandarin-Speaking Children with Congenital Hearing Loss

    Science.gov (United States)

    Chen, Pei-Hua; Liu, Ting-Wei

    2017-01-01

    Telepractice provides an alternative form of auditory-verbal therapy (eAVT) intervention through videoconferencing; this can be of immense benefit for children with hearing loss, especially those living in rural or remote areas. The effectiveness of eAVT for the language development of Mandarin-speaking preschoolers with hearing loss was…

  8. Inferential language use by school-aged boys with fragile X syndrome: Effects of a parent-implemented spoken language intervention.

    Science.gov (United States)

    Nelson, Sarah; McDuffie, Andrea; Banasik, Amy; Tempero Feigles, Robyn; Thurman, Angela John; Abbeduto, Leonard

    This study examined the impact of a distance-delivered parent-implemented narrative language intervention on the use of inferential language during shared storytelling by school-aged boys with fragile X syndrome, an inherited neurodevelopmental disorder. Nineteen school-aged boys with FXS and their biological mothers participated. Dyads were randomly assigned to an intervention or a treatment-as-usual comparison group. Transcripts from all pre- and post-intervention sessions were coded for child use of prompted and spontaneous inferential language coded into various categories. Children in the intervention group used more utterances that contained inferential language than the comparison group at post-intervention. Furthermore, children in the intervention group used more prompted inferential language than the comparison group at post-intervention, but there were no differences between the groups in their spontaneous use of inferential language. Additionally, children in the intervention group demonstrated increases from pre- to post-intervention in their use of most categories of inferential language. This study provides initial support for the utility of a parent-implemented language intervention for increasing the use of inferential language by school-aged boys with FXS, but also suggests the need for additional treatment to encourage spontaneous use. Copyright © 2018 Elsevier Inc. All rights reserved.

  9. Simultaneous perception of a spoken and a signed language: The brain basis of ASL-English code-blends

    Science.gov (United States)

    Weisberg, Jill; McCullough, Stephen; Emmorey, Karen

    2018-01-01

    Code-blends (simultaneous words and signs) are a unique characteristic of bimodal bilingual communication. Using fMRI, we investigated code-blend comprehension in hearing native ASL-English bilinguals who made a semantic decision (edible?) about signs, audiovisual words, and semantically equivalent code-blends. English and ASL recruited a similar fronto-temporal network with expected modality differences: stronger activation for English in auditory regions of bilateral superior temporal cortex, and stronger activation for ASL in bilateral occipitotemporal visual regions and left parietal cortex. Code-blend comprehension elicited activity in a combination of these regions, and no cognitive control regions were additionally recruited. Furthermore, code-blends elicited reduced activation relative to ASL presented alone in bilateral prefrontal and visual extrastriate cortices, and relative to English alone in auditory association cortex. Consistent with behavioral facilitation observed during semantic decisions, the findings suggest that redundant semantic content induces more efficient neural processing in language and sensory regions during bimodal language integration. PMID:26177161

  10. User Guidelines and Best Practices for CASL VUQ Analysis Using Dakota

    Energy Technology Data Exchange (ETDEWEB)

    Adams, Brian M. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Coleman, Kayla [North Carolina State Univ., Raleigh, NC (United States); Hooper, Russell W. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Khuwaileh, Bassam [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Lewis, Allison [North Carolina State Univ., Raleigh, NC (United States); Smith, Ralph C. [North Carolina State Univ., Raleigh, NC (United States); Swiler, Laura P. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Turinsky, Paul J. [North Carolina State Univ., Raleigh, NC (United States); Williams, Brian J. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-10-04

    In general, Dakota is the Consortium for Advanced Simulation of Light Water Reactors (CASL) delivery vehicle for verification, validation, and uncertainty quantification (VUQ) algorithms. It permits ready application of the VUQ methods described above to simulation codes by CASL researchers, code developers, and application engineers. More specifically, the CASL VUQ Strategy [33] prescribes the use of Predictive Capability Maturity Model (PCMM) assessments [37]. PCMM is an expert elicitation tool designed to characterize and communicate completeness of the approaches used for computational model definition, verification, validation, and uncertainty quantification associated with an intended application. Exercising a computational model with the methods in Dakota will yield, in part, evidence for a predictive capability maturity model (PCMM) assessment. Table 1.1 summarizes some key predictive maturity related activities (see details in [33]), with examples of how Dakota fits in. This manual offers CASL partners a guide to conducting Dakota-based VUQ studies for CASL problems. It motivates various classes of Dakota methods and includes examples of their use on representative application problems. On reading, a CASL analyst should understand why and how to apply Dakota to a simulation problem.

  11. Czech spoken in Bohemia and Moravia

    NARCIS (Netherlands)

    Šimáčková, Š.; Podlipský, V.J.; Chládková, K.

    2012-01-01

    As a western Slavic language of the Indo-European family, Czech is closest to Slovak and Polish. It is spoken as a native language by nearly 10 million people in the Czech Republic (Czech Statistical Office n.d.). About two million people living abroad, mostly in the USA, Canada, Austria, Germany, …

  12. Language Policies Pursued in the Axis of Othering and in the Process of Converting the Spoken Language of Turks Living in Russia into Their Written Language

    Directory of Open Access Journals (Sweden)

    Süleyman Kaan YALÇIN (M.A.H.)

    2008-12-01

    Full Text Available Language is realized in two ways: spoken language and written language. Every language has the characteristics of a spoken language, but not every language has the characteristics of a written language, since there are requirements a language must meet to be deemed a written language. These requirements are selection, coding, standardization and becoming widespread. A language must meet these requirements, in either a natural or an artificial way, to be deemed a written (standard) language. Turkish, which developed as a single written language until the 13th century, divided into West Turkish and North-East Turkish by meeting the requirements of a written language in a natural way. Following this separation, and through a natural process, it developed some internal differences; however, the policy of converting the spoken language of each Turkish clan into its own written language, a policy pursued by Russia in a planned way, turned Turkish, which entered the 20th century as a few written languages, into 20 different written languages. The implementation of discriminatory language policies suggested to the Russian government by missionaries such as Slinky and Ostramov, the forcible imposition on each Turkish clan of a Cyrillic alphabet full of different and unnecessary signs, and the othering activities of the Soviet boarding schools had considerable effects on this process. This study aims to explain that the conversion of the spoken languages of the Turkish societies in Russia into written languages did not result from a natural process; that the historical development of the Turkish language was shaped into 20 separate written languages only because of pressure exerted by political will; and how Russia subjected language, the memory of a nation, to an artificial process.

  13. Orthographic Facilitation in Chinese Spoken Word Recognition: An ERP Study

    Science.gov (United States)

    Zou, Lijuan; Desroches, Amy S.; Liu, Youyi; Xia, Zhichao; Shu, Hua

    2012-01-01

    Orthographic influences in spoken word recognition have been previously examined in alphabetic languages. However, it is unknown whether orthographic information affects spoken word recognition in Chinese, which has a clean dissociation between orthography (O) and phonology (P). The present study investigated orthographic effects using event…

  14. Spoken and Written Communication: Are Five Vowels Enough?

    Science.gov (United States)

    Abbott, Gerry

    The comparatively small vowel inventory of Bantu languages leads young Bantu learners to produce "undifferentiations," so that, for example, the spoken forms of "hat,""hut,""heart" and "hurt" sound the same to a British ear. The two criteria for a non-native speaker's spoken performance are…

  15. User Guidelines and Best Practices for CASL VUQ Analysis Using Dakota

    International Nuclear Information System (INIS)

    Adams, Brian M.; Coleman, Kayla; Hooper, Russell; Khuwaileh, Bassam A.; Lewis, Allison; Smith, Ralph C.; Swiler, Laura Painton; Turinsky, Paul J.; Williams, Brian W.

    2016-01-01

    Sandia's Dakota software (available at http://dakota.sandia.gov) supports science and engineering transformation through advanced exploration of simulations. Specifically, it manages and analyzes ensembles of simulations to provide broader and deeper perspective for analysts and decision makers. This enables them to enhance understanding of risk, improve products, and assess simulation credibility. This manual offers Consortium for Advanced Simulation of Light Water Reactors (LWRs) (CASL) partners a guide to conducting Dakota-based VUQ studies for CASL problems. It motivates various classes of Dakota methods and includes examples of their use on representative application problems. On reading, a CASL analyst should understand why and how to apply Dakota to a simulation problem.

  16. User Guidelines and Best Practices for CASL VUQ Analysis Using Dakota.

    Energy Technology Data Exchange (ETDEWEB)

    Adams, Brian M. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Coleman, Kayla [North Carolina State Univ., Raleigh, NC (United States); Hooper, Russell [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Khuwaileh, Bassam A. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Lewis, Allison [North Carolina State Univ., Raleigh, NC (United States); Smith, Ralph C. [North Carolina State Univ., Raleigh, NC (United States); Swiler, Laura Painton [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Turinsky, Paul J. [North Carolina State Univ., Raleigh, NC (United States); Williams, Brian W. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-11-01

    Sandia's Dakota software (available at http://dakota.sandia.gov) supports science and engineering transformation through advanced exploration of simulations. Specifically, it manages and analyzes ensembles of simulations to provide broader and deeper perspective for analysts and decision makers. This enables them to enhance understanding of risk, improve products, and assess simulation credibility. This manual offers Consortium for Advanced Simulation of Light Water Reactors (LWRs) (CASL) partners a guide to conducting Dakota-based VUQ studies for CASL problems. It motivates various classes of Dakota methods and includes examples of their use on representative application problems. On reading, a CASL analyst should understand why and how to apply Dakota to a simulation problem.

  17. Discourse before gender: An event-related brain potential study on the interplay of semantic and syntactic information during spoken language understanding

    NARCIS (Netherlands)

    Brown, C.M.; Berkum, J.J.A. van; Hagoort, P.

    2000-01-01

    A study is presented on the effects of discourse-semantic and lexical-syntactic information during spoken sentence processing. Event-related brain potentials (ERPs) were registered while subjects listened to discourses that ended in a sentence with a temporary syntactic ambiguity. The prior…

  18. Acoustic noise reduction in pseudo-continuous arterial spin labeling (pCASL)

    NARCIS (Netherlands)

    van der Meer, J.N.; Heijtel, D.F.R.; van Hest, G.; Plattel, G.J.; van Osch, M.J.P.; van Someren, E.J.W.; Vanbavel, E.T.; Nederveen, A.J.

    2014-01-01

    Object: While pseudo-continuous arterial spin labeling (pCASL) is a promising imaging technique to visualize cerebral blood flow, it is also (acoustically) very loud during labeling. In this paper, we reduced the labeling loudness on our scanner by increasing the interval between the RF pulses from…

  19. Modeling and simulation challenges pursued by the Consortium for Advanced Simulation of Light Water Reactors (CASL)

    Science.gov (United States)

    Turinsky, Paul J.; Kothe, Douglas B.

    2016-05-01

    The Consortium for the Advanced Simulation of Light Water Reactors (CASL), the first Energy Innovation Hub of the Department of Energy, was established in 2010 with the goal of providing modeling and simulation (M&S) capabilities that support and accelerate the improvement of nuclear energy's economic competitiveness and the reduction of spent nuclear fuel volume per unit energy, all while assuring nuclear safety. To accomplish this requires advances in M&S capabilities in radiation transport, thermal-hydraulics, fuel performance and corrosion chemistry. To focus CASL's R&D, industry challenge problems have been defined, which equate with long-standing issues of the nuclear power industry that M&S can assist in addressing. To date CASL has developed a multi-physics "core simulator" based upon pin-resolved radiation transport and subchannel (within fuel assembly) thermal-hydraulics, capitalizing on the capabilities of high performance computing. CASL's fuel performance M&S capability can also be optionally integrated into the core simulator, yielding a coupled multi-physics capability with untapped predictive potential. Material models have been developed to enhance predictive capabilities of fuel clad creep and growth, along with deeper understanding of zirconium alloy clad oxidation and hydrogen pickup. Understanding of corrosion chemistry (e.g., CRUD formation) has evolved at all scales: micro, meso and macro. CFD R&D has focused on improvement in closure models for subcooled boiling and bubbly flow, and the formulation of robust numerical solution algorithms. For multiphysics integration, several iterative acceleration methods have been assessed, illuminating areas where further research is needed. Finally, uncertainty quantification and data assimilation techniques, based upon sampling approaches, have been made more feasible for practicing nuclear engineers via R&D on dimensional reduction and biased sampling. Industry adoption of CASL's evolving M&S…

  20. Orthographic effects in spoken word recognition: Evidence from Chinese.

    Science.gov (United States)

    Qu, Qingqing; Damian, Markus F

    2017-06-01

    Extensive evidence from alphabetic languages demonstrates a role of orthography in the processing of spoken words. Because alphabetic systems explicitly code speech sounds, such effects are perhaps not surprising. However, it is less clear whether orthographic codes are involuntarily accessed from spoken words in languages with non-alphabetic systems, in which the sound-spelling correspondence is largely arbitrary. We investigated the role of orthography via a semantic relatedness judgment task: native Mandarin speakers judged whether or not spoken word pairs were related in meaning. Word pairs were either semantically related, orthographically related, or unrelated. Results showed that relatedness judgments were made faster for word pairs that were semantically related than for unrelated word pairs. Critically, orthographic overlap on semantically unrelated word pairs induced a significant increase in response latencies. These findings indicate that orthographic information is involuntarily accessed in spoken-word recognition, even in a non-alphabetic language such as Chinese.

  1. Teaching Spoken Spanish

    Science.gov (United States)

    Lipski, John M.

    1976-01-01

    The need to teach students speaking skills in Spanish, and to choose among the many standard dialects spoken in the Hispanic world (as well as literary and colloquial speech), presents a challenge to the Spanish teacher. Some phonetic considerations helpful in solving these problems are offered. (CHK)

  2. Phonological Analysis of University Students’ Spoken Discourse

    Directory of Open Access Journals (Sweden)

    Clara Herlina

    2011-04-01

    Full Text Available The study of discourse is the study of language in actual use. In this article, the writer investigates the phonological features, both segmental and supra-segmental, in the spoken discourse of Indonesian university students. The data were taken from recordings of 15 conversations by 30 students of Bina Nusantara University who are taking the English Entrant subject (TOEFL-iBT). Finally, the writer is of the opinion that the students are still influenced by their first language in their spoken discourse. This results in English with an Indonesian accent. Even though it does not cause misunderstanding at the moment, this may become problematic if they have to communicate in the real world.

  3. Development of a New 47-Group Library for the CASL Neutronics Simulators

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Kang Seog [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Williams, Mark L [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Wiarda, Dorothea [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Godfrey, Andrew T [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2015-01-01

    The CASL core simulator MPACT is under development for coupled neutronics and thermal-hydraulics simulation of pressurized light water reactors. The key characteristics of the MPACT code include a subgroup method for resonance self-shielding and a whole-core solver with a 1D/2D synthesis method. The ORNL AMPX/SCALE code packages have been significantly improved to support various intermediate resonance self-shielding approximations such as the subgroup and embedded self-shielding methods. New 47-group AMPX and MPACT libraries based on ENDF/B-VII.0, whose group structure comes from the HELIOS library, have been generated for the CASL core simulator MPACT. The new 47-group MPACT library includes all nuclear data required for static and transient core simulations. This study discusses the detailed procedure used to generate the 47-group AMPX and MPACT libraries and benchmark results for the VERA progression problems.
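
    Generating a coarse-group library ultimately rests on flux-weighted condensation of fine-group data. A schematic of that collapse (made-up numbers, not the AMPX processing chain):

        # Collapse fine-group cross sections to coarse groups using a
        # weighting flux: sigma_G = sum(sigma_g * phi_g) / sum(phi_g).
        import numpy as np

        fine_sigma = np.array([12.0, 9.5, 7.0, 4.2, 2.1, 1.3])   # barns
        fine_flux  = np.array([0.5, 0.8, 1.2, 1.0, 0.7, 0.3])    # weighting spectrum
        coarse_map = [slice(0, 3), slice(3, 6)]                  # 6 fine -> 2 coarse groups

        coarse_sigma = [float(np.sum(fine_sigma[s] * fine_flux[s])
                              / np.sum(fine_flux[s])) for s in coarse_map]
        print(coarse_sigma)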

  4. CASL L1 Milestone report: CASL.P4.01, sensitivity and uncertainty analysis for CIPS with VIPRE-W and BOA

    International Nuclear Information System (INIS)

    Sung, Yixing; Adams, Brian M.; Secker, Jeffrey R.

    2011-01-01

    The CASL Level 1 Milestone CASL.P4.01, successfully completed in December 2011, aimed to 'conduct, using methodologies integrated into VERA, a detailed sensitivity analysis and uncertainty quantification of a crud-relevant problem with baseline VERA capabilities (ANC/VIPRE-W/BOA).' The VUQ focus area led this effort, in partnership with AMA, and with support from VRI. DAKOTA was coupled to existing VIPRE-W thermal-hydraulics and BOA crud/boron deposit simulations representing a pressurized water reactor (PWR) that previously experienced crud-induced power shift (CIPS). This work supports understanding of CIPS by exploring the sensitivity and uncertainty in BOA outputs with respect to uncertain operating and model parameters. This report summarizes work coupling the software tools, characterizing uncertainties, and analyzing the results of iterative sensitivity and uncertainty studies. These studies focused on sensitivity and uncertainty of CIPS indicators calculated by the current version of the BOA code used in the industry. Challenges with this kind of analysis are identified to inform follow-on research goals and VERA development targeting crud-related challenge problems.
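
    As a schematic of the sampling-based workflow this milestone describes (illustrative only: the parameter names and the crud_indicator surrogate below are hypothetical, and Dakota drives the real VIPRE-W/BOA chain through its own interface layer):

        # Propagate uncertain operating/model parameters through a black-box
        # model and rank inputs by correlation with the output.
        import numpy as np

        rng = np.random.default_rng(0)
        n = 500
        inlet_temp = rng.normal(565.0, 2.0, n)       # K
        boron_ppm  = rng.normal(1200.0, 30.0, n)     # ppm
        crud_mult  = rng.lognormal(0.0, 0.2, n)      # model multiplier

        def crud_indicator(t, b, r):
            # hypothetical stand-in for a coupled VIPRE-W/BOA evaluation
            return r * np.exp((t - 565.0) / 8.0) * (b / 1200.0)

        y = crud_indicator(inlet_temp, boron_ppm, crud_mult)
        for name, x in [("inlet_temp", inlet_temp),
                        ("boron_ppm", boron_ppm),
                        ("crud_mult", crud_mult)]:
            print(name, round(float(np.corrcoef(x, y)[0, 1]), 2))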

  6. Language Non-Selective Activation of Orthography during Spoken Word Processing in Hindi-English Sequential Bilinguals: An Eye Tracking Visual World Study

    Science.gov (United States)

    Mishra, Ramesh Kumar; Singh, Niharika

    2014-01-01

    Previous psycholinguistic studies have shown that bilinguals activate lexical items of both the languages during auditory and visual word processing. In this study we examined if Hindi-English bilinguals activate the orthographic forms of phonological neighbors of translation equivalents of the non target language while listening to words either…

  7. Accessing the spoken word

    OpenAIRE

    Goldman, Jerry; Renals, Steve; Bird, Steven; de Jong, Franciska; Federico, Marcello; Fleischhauer, Carl; Kornbluh, Mark; Lamel, Lori; Oard, Douglas W; Stewart, Claire; Wright, Richard

    2005-01-01

    Spoken-word audio collections cover many domains, including radio and television broadcasts, oral narratives, governmental proceedings, lectures, and telephone conversations. The collection, access, and preservation of such data is stimulated by political, economic, cultural, and educational needs. This paper outlines the major issues in the field, reviews the current state of technology, examines the rapidly changing policy issues relating to privacy and copyright, and presents issues relati...

  8. The language spoken at home and disparities in medical and dental health, access to care, and use of services in US children.

    Science.gov (United States)

    Flores, Glenn; Tomany-Korman, Sandra C

    2008-06-01

    Fifty-five million Americans speak a non-English primary language at home, but little is known about health disparities for children in non-English-primary-language households. Our study objective was to examine whether disparities in medical and dental health, access to care, and use of services exist for children in non-English-primary-language households. The National Survey of Children's Health was a telephone survey in 2003-2004 of a nationwide sample of parents of 102 353 children 0 to 17 years old. Disparities in medical and oral health and health care were examined for children in non-English-primary-language households compared with children in English-primary-language households, both in bivariate analyses and in multivariable analyses that adjusted for 8 covariates (child's age, race/ethnicity, and medical or dental insurance coverage; caregiver's highest educational attainment and employment status; number of children and adults in the household; and poverty status). Children in non-English-primary-language households were significantly more likely than children in English-primary-language households to be poor (42% vs 13%) and Latino or Asian/Pacific Islander. Significantly higher proportions of children in non-English-primary-language households were not in excellent/very good health (43% vs 12%), were overweight/at risk for overweight (48% vs 39%), had teeth in fair/poor condition (27% vs 7%), were uninsured (27% vs 6%) or sporadically insured (20% vs 10%), and lacked dental insurance (39% vs 20%). Children in non-English-primary-language households more often had no usual source of medical care (38% vs 13%), made no medical (27% vs 12%) or preventive dental (14% vs 6%) visits in the previous year, and had problems obtaining specialty care (40% vs 23%). Latino and Asian children in non-English-primary-language households had several unique disparities compared with white children in non-English-primary-language households. Almost all disparities

  9. Sign language: an international handbook

    NARCIS (Netherlands)

    Pfau, R.; Steinbach, M.; Woll, B.

    2012-01-01

    Sign language linguists show here that all the questions relevant to the linguistic investigation of spoken languages can be asked about sign languages. Conversely, questions that sign language linguists consider - even if spoken language researchers have not asked them yet - should also be asked of

  10. Measurement of brain perfusion in newborns: Pulsed arterial spin labeling (PASL) versus pseudo-continuous arterial spin labeling (pCASL)

    Directory of Open Access Journals (Sweden)

    Elodie Boudes

    2014-01-01

    Conclusion: This study demonstrates that both ASL methods are feasible to assess brain perfusion in healthy and sick newborns. However, pCASL might be a better choice over PASL in newborns, as pCASL perfusion maps had a superior image quality that allowed a more detailed identification of the different brain structures.

  11. Word frequencies in written and spoken English based on the British National Corpus

    CERN Document Server

    Leech, Geoffrey; Wilson, Andrew (All Of Lancaster University)

    2014-01-01

    Word Frequencies in Written and Spoken English is a landmark volume in the development of vocabulary frequency studies. Whereas previous books have in general given frequency information about the written language only, this book provides information on both speech and writing. It not only gives information about the language as a whole, but also about the differences between spoken and written English, and between different spoken and written varieties of the language. The frequencies are derived from a wide-ranging and up-to-date corpus of English: the British National Corpus.

  12. Spoken word recognition without a TRACE

    Science.gov (United States)

    Hannagan, Thomas; Magnuson, James S.; Grainger, Jonathan

    2013-01-01

    How do we map the rapid input of spoken language onto phonological and lexical representations over time? Attempts at psychologically-tractable computational models of spoken word recognition tend either to ignore time or to transform the temporal input into a spatial representation. TRACE, a connectionist model with broad and deep coverage of speech perception and spoken word recognition phenomena, takes the latter approach, using exclusively time-specific units at every level of representation. TRACE reduplicates featural, phonemic, and lexical inputs at every time step in a large memory trace, with rich interconnections (excitatory forward and backward connections between levels and inhibitory links within levels). As the length of the memory trace is increased, or as the phoneme and lexical inventory of the model is increased to a realistic size, this reduplication of time- (temporal position) specific units leads to a dramatic proliferation of units and connections, begging the question of whether a more efficient approach is possible. Our starting point is the observation that models of visual object recognition—including visual word recognition—have grappled with the problem of spatial invariance, and arrived at solutions other than a fully-reduplicative strategy like that of TRACE. This inspires a new model of spoken word recognition that combines time-specific phoneme representations similar to those in TRACE with higher-level representations based on string kernels: temporally independent (time invariant) diphone and lexical units. This reduces the number of necessary units and connections by several orders of magnitude relative to TRACE. Critically, we compare the new model to TRACE on a set of key phenomena, demonstrating that the new model inherits much of the behavior of TRACE and that the drastic computational savings do not come at the cost of explanatory power. PMID:24058349
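
    The "string kernel" idea can be made concrete with a toy example (a simplification, not the model's actual representation): a word becomes a vector of diphone counts, which is the same no matter when the word occurs in time.

        # Time-invariant diphone (ordered phoneme pair) vectors compared by
        # cosine similarity; a toy sketch of the string-kernel representation.
        from collections import Counter
        import math

        def diphones(phonemes):
            return Counter(zip(phonemes, phonemes[1:]))

        def cosine(a, b):
            dot = sum(a[k] * b[k] for k in a.keys() & b.keys())
            na = math.sqrt(sum(v * v for v in a.values()))
            nb = math.sqrt(sum(v * v for v in b.values()))
            return dot / (na * nb)

        cat  = diphones(["k", "ae", "t"])   # /kaet/
        cap  = diphones(["k", "ae", "p"])   # /kaep/ shares the (k, ae) diphone
        tack = diphones(["t", "ae", "k"])   # same phonemes as 'cat', different order

        print(cosine(cat, cap))    # 0.5: partial overlap
        print(cosine(cat, tack))   # 0.0: order matters, unlike a bag of phonemes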

  13. Evaluating the spoken English proficiency of graduates of foreign medical schools.

    Science.gov (United States)

    Boulet, J R; van Zanten, M; McKinley, D W; Gary, N E

    2001-08-01

    The purpose of this study was to gather additional evidence for the validity and reliability of spoken English proficiency ratings provided by trained standardized patients (SPs) in a high-stakes clinical skills examination. Over 2500 candidates who took the Educational Commission for Foreign Medical Graduates' (ECFMG) Clinical Skills Assessment (CSA) were studied. The CSA consists of 10 or 11 timed clinical encounters. Standardized patients evaluate spoken English proficiency and interpersonal skills in every encounter. Generalizability theory was used to estimate the consistency of spoken English ratings. Validity coefficients were calculated by correlating summary English ratings with CSA scores and other external criterion measures. Mean spoken English ratings were also compared by various candidate background variables. The reliability of the spoken English ratings, based on 10 independent evaluations, was high. The magnitudes of the associated variance components indicated that the evaluation of a candidate's spoken English proficiency is unlikely to be affected by the choice of cases or SPs used in a given assessment. Proficiency in spoken English was related to native language (English versus other) and scores from the Test of English as a Foreign Language (TOEFL). The pattern of the relationships, both within assessment components and with external criterion measures, suggests that valid measures of spoken English proficiency are obtained. This result, combined with the high reproducibility of the ratings over encounters and SPs, supports the use of trained SPs to measure spoken English skills in a simulated medical environment.
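
    The reliability figure rests on a generalizability (variance-components) analysis. The sketch below reproduces the arithmetic for a one-facet candidate-by-encounter design on synthetic ratings (not ECFMG data).

        # G-coefficient for the mean of n_e encounter ratings per candidate.
        import numpy as np

        rng = np.random.default_rng(1)
        n_p, n_e = 200, 10                     # candidates, encounters
        skill = rng.normal(0.0, 1.0, (n_p, 1))
        ratings = skill + rng.normal(0.0, 0.8, (n_p, n_e))

        grand = ratings.mean()
        ms_p = n_e * np.sum((ratings.mean(axis=1) - grand) ** 2) / (n_p - 1)
        resid = (ratings - ratings.mean(axis=1, keepdims=True)
                 - ratings.mean(axis=0) + grand)
        ms_pe = np.sum(resid ** 2) / ((n_p - 1) * (n_e - 1))

        var_p = (ms_p - ms_pe) / n_e           # candidate variance component
        g = var_p / (var_p + ms_pe / n_e)      # reliability of the mean rating
        print(round(float(g), 3))              # close to 1 when encounters agree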

  14. A Comparison between Written and Spoken Narratives in Aphasia

    Science.gov (United States)

    Behrns, Ingrid; Wengelin, Asa; Broberg, Malin; Hartelius, Lena

    2009-01-01

    The aim of the present study was to explore how a personal narrative told by a group of eight persons with aphasia differed between written and spoken language, and to compare this with findings from 10 participants in a reference group. The stories were analysed through holistic assessments made by 60 participants without experience of aphasia…

  15. Error detection in spoken human-machine interaction

    NARCIS (Netherlands)

    Krahmer, E.; Swerts, M.; Theune, Mariet; Weegels, M.

    Given the state of the art of current language and speech technology, errors are unavoidable in present-day spoken dialogue systems. Therefore, one of the main concerns in dialogue design is how to decide whether or not the system has understood the user correctly. In human-human communication,

  16. Automated Scoring of L2 Spoken English with Random Forests

    Science.gov (United States)

    Kobayashi, Yuichiro; Abe, Mariko

    2016-01-01

    The purpose of the present study is to assess second language (L2) spoken English using automated scoring techniques. Automated scoring aims to classify a large set of learners' oral performance data into a small number of discrete oral proficiency levels. In automated scoring, objectively measurable features such as the frequencies of lexical and…

  17. Flipper: An Information State Component for Spoken Dialogue Systems

    NARCIS (Netherlands)

    ter Maat, Mark; Heylen, Dirk K.J.; Vilhjálmsson, Hannes; Kopp, Stefan; Marsella, Stacy; Thórisson, Kristinn

    This paper introduces Flipper, a specification language and interpreter for Information State Update rules that can be used for developing spoken dialogue systems and embodied conversational agents. The system uses XML templates to modify the information state and to select behaviours to perform.
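
    The information-state-update pattern itself is compact. A toy sketch in the spirit described above (plain Python dictionaries, not Flipper's actual XML template format): each rule has a precondition over the state and an effect that updates it.

        # Minimal information-state-update loop: apply the first rule whose
        # precondition holds, mutating the shared state.
        state = {"greeted": False, "user_said": "hello"}

        rules = [
            {"name": "greet_back",
             "pre": lambda s: s["user_said"] == "hello" and not s["greeted"],
             "effect": lambda s: s.update(greeted=True, agent_says="Hi there!")},
        ]

        for rule in rules:
            if rule["pre"](state):
                rule["effect"](state)
        print(state["agent_says"])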

  18. Pair Counting to Improve Grammar and Spoken Fluency

    Science.gov (United States)

    Hanson, Stephanie

    2017-01-01

    English language learners are often more grammatically accurate in writing than in speaking. As students focus on meaning while speaking, their spoken fluency comes at a cost: their grammatical accuracy decreases. The author wanted to find a way to help her students improve their oral grammar; that is, she wanted them to focus on grammar while…

  19. The Link between Vocabulary Knowledge and Spoken L2 Fluency

    Science.gov (United States)

    Hilton, Heather

    2008-01-01

    In spite of the vast numbers of articles devoted to vocabulary acquisition in a foreign language, few studies address the contribution of lexical knowledge to spoken fluency. The present article begins with basic definitions of the temporal characteristics of oral fluency, summarizing L1 research over several decades, and then presents fluency…

  20. Phonological Interference in the Spoken English Performance of the ...

    African Journals Online (AJOL)

    This paper sets out to examine the phonological interference in the spoken English performance of the Izon speaker. It emphasizes that the level of interference is not just a result of the systemic differences that exist between the two language systems (Izon and English) but also a result of interlanguage factors such ...

  1. An Analysis of Spoken Grammar: The Case for Production

    Science.gov (United States)

    Mumford, Simon

    2009-01-01

    Corpus-based grammars, notably "Cambridge Grammar of English," give explicit information on the forms and use of native-speaker grammar, including spoken grammar. Native-speaker norms as a necessary goal in language teaching are contested by supporters of English as a Lingua Franca (ELF); however, this article argues for the inclusion of selected…

  2. Automated Metadata Extraction for Semantic Access to Spoken Word Archives

    NARCIS (Netherlands)

    de Jong, Franciska M.G.; Heeren, W.F.L.; van Hessen, Adrianus J.; Ordelman, Roeland J.F.; Nijholt, Antinus; Ruiz Miyares, L.; Alvarez Silva, M.R.

    2011-01-01

    Archival practice is shifting from the analogue to the digital world. A specific subset of heritage collections that poses interesting challenges for the field of language and speech technology is spoken word archives. Given the enormous backlog at audiovisual archives of unannotated materials and

  3. Spoken Persuasive Discourse Abilities of Adolescents with Acquired Brain Injury

    Science.gov (United States)

    Moran, Catherine; Kirk, Cecilia; Powell, Emma

    2012-01-01

    Purpose: The aim of this study was to examine the performance of adolescents with acquired brain injury (ABI) during a spoken persuasive discourse task. Persuasive discourse is frequently used in social and academic settings and is of importance in the study of adolescent language. Method: Participants included 8 adolescents with ABI and 8 peers…

  4. SU-E-I-65: Estimation of Tagging Efficiency in Pseudo-Continuous Arterial Spin Labeling (pCASL) MRI

    Energy Technology Data Exchange (ETDEWEB)

    Jen, M [Chang Gung University, Taoyuan City, Taiwan (China); Yan, F; Tseng, Y; Chen, C [Taipei Medical University - Shuang Ho Hospital, Ministry of Health and Welf, New Taipei City, Taiwan (China); Lin, C [GE Healthcare, Taiwan (China); GE Healthcare China, Beijing (China); Liu, H [UT MD Anderson Cancer Center, Houston, TX (United States)

    2015-06-15

    Purpose: pCASL has been recommended as a potent approach for absolute cerebral blood flow (CBF) quantification in clinical practice. However, uncertainty in the tagging efficiency of pCASL remains an issue. This study aimed to estimate tagging efficiency using a short quantitative pulsed ASL scan (FAIR-QUIPSSII) and to compare the resulting CBF values with those calibrated using 2D phase contrast (PC) MRI. Methods: Fourteen normal volunteers participated in this study. All images, including whole-brain (WB) pCASL, WB FAIR-QUIPSSII, and single-slice 2D PC, were collected on a 3T clinical MRI scanner with an 8-channel head coil. DeltaM maps were calculated by averaging the subtractions of tag/control pairs in the pCASL and FAIR-QUIPSSII images and used for CBF calculation. Tagging efficiency was then calculated as the ratio of mean gray matter CBF obtained from pCASL and from FAIR-QUIPSSII. For comparison, tagging efficiency was also estimated with 2D PC, a previously established method, by contrasting WB CBF from pCASL and 2D PC. Feasibility of estimation from a short FAIR-QUIPSSII scan was evaluated by the number of averages required to obtain a stable deltaM value: taking the deltaM calculated from the maximum number of averages (50 pairs) as reference, stable results were defined as within ±10% variation. Results: Tagging efficiencies obtained by 2D PC MRI (0.732±0.092) were significantly lower than those obtained by FAIR-QUIPSSII (0.846±0.097) (P<0.05). Four pairs of images in the FAIR-QUIPSSII scan were sufficient to obtain a robust calibration, with less than 10% difference from using 50 pairs. Conclusion: This study found that reliable estimation of tagging efficiency can be obtained from a few pairs of FAIR-QUIPSSII images, suggesting that a calibration scan of short duration (within 30 s) is feasible. Considering recent reports concerning the variability of PC MRI-based calibration, this study proposes an effective alternative for CBF quantification with pCASL.
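
    Both calibration steps reduce to simple arithmetic. A sketch with synthetic numbers (not the study's data):

        # Tagging efficiency as a ratio of mean gray-matter CBF, plus the
        # +/-10% running-average stability check on tag/control pairs.
        import numpy as np

        rng = np.random.default_rng(2)

        gm_cbf_pcasl = 50.0   # uncalibrated pCASL gray-matter CBF (mL/100g/min)
        gm_cbf_fair  = 59.0   # FAIR-QUIPSSII reference gray-matter CBF
        print(round(gm_cbf_pcasl / gm_cbf_fair, 3))   # tagging efficiency estimate

        pairs = rng.normal(1.0, 0.5, 50)              # per-pair deltaM values
        reference = pairs.mean()                      # 50-pair reference
        k_stable = next(k for k in range(1, 51)
                        if all(abs(pairs[:j].mean() - reference) / reference <= 0.10
                               for j in range(k, 51)))
        print("stable from", k_stable, "pairs")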

  5. Code-switched English pronunciation modeling for Swahili spoken term detection

    CSIR Research Space (South Africa)

    Kleynhans, N

    2016-05-01

    We investigate modeling strategies for English code-switched words as found in a Swahili spoken term detection system. Code switching, where speakers switch language in a conversation, occurs frequently in multilingual environments, and typically...

  6. Do children with specific language impairment and autism spectrum disorders benefit from the presence of orthography when learning new spoken words?

    Science.gov (United States)

    Ricketts, Jessie; Dockrell, Julie E; Patel, Nita; Charman, Tony; Lindsay, Geoff

    2015-06-01

    This experiment investigated whether children with specific language impairment (SLI), children with autism spectrum disorders (ASD), and typically developing children benefit from the incidental presence of orthography when learning new oral vocabulary items. Children with SLI, children with ASD, and typically developing children (n=27 per group) between 8 and 13 years of age were matched in triplets for age and nonverbal reasoning. Participants were taught 12 mappings between novel phonological strings and referents; half of these mappings were trained with orthography present and half were trained with orthography absent. Groups did not differ on the ability to learn new oral vocabulary, although there was some indication that children with ASD were slower than controls to identify newly learned items. During training, the ASD, SLI, and typically developing groups benefited from orthography to the same extent. In supplementary analyses, children with SLI were matched in pairs to an additional control group of younger typically developing children for nonword reading. Compared with younger controls, children with SLI showed equivalent oral vocabulary acquisition and benefit from orthography during training. Our findings are consistent with current theoretical accounts of how lexical entries are acquired and replicate previous studies that have shown orthographic facilitation for vocabulary acquisition in typically developing children and children with ASD. We demonstrate this effect in SLI for the first time. The study provides evidence that the presence of orthographic cues can support oral vocabulary acquisition, motivating intervention approaches (as well as standard classroom teaching) that emphasize the orthographic form.

  7. Code-switched English Pronunciation Modeling for Swahili Spoken Term Detection (Pub Version, Open Access)

    Science.gov (United States)

    2016-05-03

    Presented at the Workshop on Spoken Language Technologies for Under-resourced Languages (SLTU 2016), 9-12 May 2016, Yogyakarta, Indonesia; published in Procedia Computer Science 81 (2016) 128-135. Abstract: We investigate modeling strategies for English code-switched words as found in a Swahili spoken term detection system. Code switching, where speakers switch language in a conversation, occurs frequently in multilingual environments. Our research focuses on pronunciation modeling of English (embedded language) words within

  8. Spoken English and the question of grammar: the role of the functional model

    OpenAIRE

    Coffin, Caroline

    2003-01-01

    Given the nature of spoken text, the first requirement of an appropriate grammar is its ability to account for stretches of language (including recurring types of text or genres), in addition to clause level patterns. Second, the grammatical model needs to be part of a wider theory of language that recognises the functional nature and educational purposes of spoken text. The model also needs to be designed in a sufficiently comprehensive way so as to account for grammatical forms in speech...

  9. CONVERTING RETRIEVED SPOKEN DOCUMENTS INTO TEXT USING AN AUTO ASSOCIATIVE NEURAL NETWORK

    Directory of Open Access Journals (Sweden)

    J. SANGEETHA

    2016-06-01

    This paper proposes a novel methodology for spoken document information retrieval over spontaneous speech corpora, and for converting the retrieved documents into the corresponding language text. The proposed work involves three major areas: spoken keyword detection, spoken document retrieval, and automatic speech recognition. Keyword spotting exploits the distribution-capturing capability of the Auto Associative Neural Network (AANN): a frame-based keyword template is slid along the audio documents, and a confidence score derived from the normalized squared error of the AANN is used to search for a match. This work presents a new spoken keyword spotting algorithm. Based on the matches, the spoken documents are retrieved and clustered together. In the speech recognition step, the retrieved documents are converted into the corresponding language text using the AANN classifier. Experiments are conducted on a Dravidian-language database, and the results suggest that the proposed method is promising for retrieving the documents relevant to a spoken query and transforming them into the corresponding text.
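
    The scan-and-score loop at the heart of such a keyword spotter is simple to sketch (synthetic features; a plain squared distance stands in for the AANN reconstruction error, since only the sliding-confidence pattern is being illustrated):

        import numpy as np

        rng = np.random.default_rng(3)
        n_frames, dim, tlen = 200, 13, 20        # document frames, feature dim, template length
        doc = rng.normal(0, 1, (n_frames, dim))  # stand-in acoustic features (e.g., MFCCs)
        template = doc[87:87 + tlen] + rng.normal(0, 0.1, (tlen, dim))  # noisy occurrence

        def confidence(window, template):
            # stand-in for the AANN score: low normalized squared error = high confidence
            return 1.0 / (1.0 + np.mean((window - template) ** 2))

        scores = [confidence(doc[i:i + tlen], template)
                  for i in range(n_frames - tlen + 1)]
        best = int(np.argmax(scores))
        print("best match at frame", best)       # ~87, where the keyword occurs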

  10. User Guidelines and Best Practices for CASL VUQ Analysis Using Dakota

    Energy Technology Data Exchange (ETDEWEB)

    Adams, Brian M. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Coleman, Kayla [North Carolina State Univ., Raleigh, NC (United States); Gilkey, Lindsay N. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Gordon, Natalie [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Hooper, Russell [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Khuwaileh, Bassam A. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Lewis, Allison [North Carolina State Univ., Raleigh, NC (United States); Maupin, Kathryn [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Smith, Ralph C. [North Carolina State Univ., Raleigh, NC (United States); Swiler, Laura P. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Turinsky, Paul J. [North Carolina State Univ., Raleigh, NC (United States); Williams, Brian J. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-10-04

    Sandia's Dakota software (available at http://dakota.sandia.gov) supports science and engineering transformation through advanced exploration of simulations. Specifically, it manages and analyzes ensembles of simulations to provide broader and deeper perspective for analysts and decision makers, enabling them to enhance understanding of risk, improve products, and assess simulation credibility. In its simplest mode, Dakota can automate typical parameter variation studies through a generic interface to a physics-based computational model, which can lend efficiency and rigor to manual parameter perturbation studies already being conducted by analysts. However, Dakota also delivers advanced parametric analysis techniques enabling design exploration, optimization, model calibration, risk analysis, and quantification of margins and uncertainty with such models, and it directly supports verification and validation activities. Dakota algorithms enrich complex science and engineering models, enabling an analyst to answer crucial questions. Sensitivity: which are the most important input factors or parameters entering the simulation, and how do they influence key outputs? Uncertainty: what is the uncertainty or variability in simulation output, given uncertainties in input parameters, and how safe, reliable, robust, or variable is the system (quantification of margins and uncertainty, QMU)? Optimization: what parameter values yield the best-performing design or operating condition, given constraints? Calibration: what models and/or parameters best match experimental data? In general, Dakota is the Consortium for Advanced Simulation of Light Water Reactors (CASL) delivery vehicle for verification, validation, and uncertainty quantification (VUQ) algorithms. It permits ready application of the VUQ methods described above to simulation codes by CASL researchers, code developers, and application engineers.
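
    In practice Dakota is driven by an input deck and a user-supplied analysis driver. The same ensemble pattern, written directly in Python with a hypothetical black-box model(), looks like this:

        # Latin hypercube study: sample the input space, run the model per
        # sample, and summarize the output distribution.
        import numpy as np
        from scipy.stats import qmc

        def model(x):
            # hypothetical simulation stand-in: two inputs, one output
            return x[0] ** 2 + 0.5 * np.sin(3.0 * x[1])

        sampler = qmc.LatinHypercube(d=2, seed=0)
        unit = sampler.random(n=64)                          # samples in [0, 1]^2
        samples = qmc.scale(unit, [0.0, -1.0], [1.0, 1.0])   # map to input bounds

        outputs = np.array([model(x) for x in samples])
        print("mean:", round(float(outputs.mean()), 3),
              "std:", round(float(outputs.std()), 3))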

  11. User-Centred Design for Chinese-Oriented Spoken English Learning System

    Science.gov (United States)

    Yu, Ping; Pan, Yingxin; Li, Chen; Zhang, Zengxiu; Shi, Qin; Chu, Wenpei; Liu, Mingzhuo; Zhu, Zhiting

    2016-01-01

    Oral production is an important part in English learning. Lack of a language environment with efficient instruction and feedback is a big issue for non-native speakers' English spoken skill improvement. A computer-assisted language learning system can provide many potential benefits to language learners. It allows adequate instructions and instant…

  12. Vývoj sociální kognice českých neslyšících dětí — uživatelů českého znakového jazyka a uživatelů mluvené češtiny: adaptace testové baterie : Development of Social Cognition in Czech Deaf Children — Czech Sign Language Users and Czech Spoken Language Users: Adaptation of a Test Battery

    Directory of Open Access Journals (Sweden)

    Andrea Hudáková

    2017-11-01

    The present paper describes the process of adapting a set of tasks for testing theory-of-mind competencies, the Theory of Mind Task Battery, for use with Czech Deaf children, both users of Czech Sign Language and users of spoken Czech.

  13. Directionality Effects in Simultaneous Language Interpreting: The Case of Sign Language Interpreters in the Netherlands

    Science.gov (United States)

    van Dijk, Rick; Boers, Eveline; Christoffels, Ingrid; Hermans, Daan

    2011-01-01

    The quality of interpretations produced by sign language interpreters was investigated. Twenty-five experienced interpreters were instructed to interpret narratives from (a) spoken Dutch to Sign Language of the Netherlands (SLN), (b) spoken Dutch to Sign Supported Dutch (SSD), and (c) SLN to spoken Dutch. The quality of the interpreted narratives…

  14. Syntax and reading comprehension: a meta-analysis of different spoken-syntax assessments.

    Science.gov (United States)

    Brimo, Danielle; Lund, Emily; Sapp, Alysha

    2017-12-18

    Syntax is a language skill purported to support children's reading comprehension. However, researchers who have examined whether children with average and below-average reading comprehension score significantly differently on spoken-syntax assessments report inconsistent results. To determine whether differences in how syntax is measured affect whether children with average and below-average reading comprehension score significantly differently on spoken-syntax assessments, studies that included a group comparison design, children with average and below-average reading comprehension, and a spoken-syntax assessment were selected for review. Fourteen articles from a total of 1281 reviewed met the inclusion criteria. The 14 articles were coded for the age of the children, score on the reading comprehension assessment, type of spoken-syntax assessment, type of syntax construct measured, and score on the spoken-syntax assessment. A random-effects model was used to analyze the difference between the effect sizes of the types of spoken-syntax assessments and the difference between the effect sizes of the syntax constructs measured. There was a significant difference between children with average and below-average reading comprehension on spoken-syntax assessments. Those with average and below-average reading comprehension scored significantly differently on spoken-syntax assessments when norm-referenced and researcher-created assessments were compared. However, when the type of construct was compared, children with average and below-average reading comprehension scored significantly differently on assessments that measured knowledge of spoken syntax, but not on assessments that measured awareness of spoken syntax. The results of this meta-analysis confirmed that the type of spoken-syntax assessment, whether norm-referenced or researcher-created, did not explain why some researchers reported that there were no significant differences between children with average and below
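
    The random-effects machinery referenced here is compact enough to show. A sketch with made-up effect sizes (DerSimonian-Laird estimator, a common default):

        import numpy as np

        y = np.array([0.42, 0.61, 0.25, 0.80, 0.55])  # per-study effect sizes
        v = np.array([0.04, 0.09, 0.05, 0.12, 0.07])  # per-study variances

        w = 1.0 / v                                   # fixed-effect weights
        y_fixed = np.sum(w * y) / np.sum(w)
        q = np.sum(w * (y - y_fixed) ** 2)            # heterogeneity statistic
        tau2 = max(0.0, (q - (len(y) - 1))
                   / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))

        w_star = 1.0 / (v + tau2)                     # random-effects weights
        y_re = np.sum(w_star * y) / np.sum(w_star)    # pooled effect
        se = np.sqrt(1.0 / np.sum(w_star))
        print(round(float(y_re), 3), "+/-", round(float(1.96 * se), 3))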

  15. Towards Adaptive Spoken Dialog Systems

    CERN Document Server

    Schmitt, Alexander

    2013-01-01

    In Monitoring Adaptive Spoken Dialog Systems, authors Alexander Schmitt and Wolfgang Minker investigate statistical approaches that allow for the recognition of negative dialog patterns in Spoken Dialog Systems (SDS). The presented stochastic methods allow flexible, portable and accurate use. Beginning with the foundations of machine learning and pattern recognition, this monograph examines how frequently users show negative emotions in spoken dialog systems and develops novel approaches to speech-based emotion recognition, using a hybrid approach to model emotions. The authors make use of statistical methods based on acoustic, linguistic and contextual features to examine the relationship between the interaction flow and the occurrence of emotions, using non-acted recordings of several thousand real users from commercial and non-commercial SDS. Additionally, the authors present novel statistical methods that spot problems within a dialog based on interaction patterns. The approaches enable future SDS to offer m...

  16. Developing a corpus of spoken language variability

    Science.gov (United States)

    Carmichael, Lesley; Wright, Richard; Wassink, Alicia Beckford

    2003-10-01

    We are developing a novel, searchable corpus as a research tool for investigating phonetic and phonological phenomena across various speech styles. Five speech styles have been well studied independently in previous work: reduced (casual), careful (hyperarticulated), citation (reading), Lombard effect (speech in noise), and "motherese" (child-directed speech). Few studies to date have collected a wide range of styles from a single set of speakers, and fewer yet have provided publicly available corpora. The pilot corpus includes recordings of (1) a set of speakers participating in a variety of tasks designed to elicit the five speech styles, and (2) casual peer conversations and wordlists to illustrate regional vowels. The data include high-quality recordings and time-aligned transcriptions linked to text files that can be queried. Initial measures drawn from the database provide comparison across speech styles along the following acoustic dimensions: MLU (changes in unit duration); relative intra-speaker intensity changes (mean and dynamic range); and intra-speaker pitch values (minimum, maximum, mean, range). The corpus design will allow for a variety of analyses requiring control of demographic and style factors, including hyperarticulation variety, disfluencies, intonation, discourse analysis, and detailed spectral measures.

  17. SPOKEN BAHASA INDONESIA BY GERMAN STUDENTS

    Directory of Open Access Journals (Sweden)

    I Nengah Sudipa

    2014-11-01

    This article investigates the spoken ability of German students using Bahasa Indonesia (BI). They studied it for six weeks in the IBSN Program at Udayana University, Bali, Indonesia. The data were collected when the students sat for the mid-term oral test and were further analyzed with reference to the standard usage of BI. The results suggest that most students managed to express several concepts related to (1) LOCATION; (2) TIME; (3) TRANSPORT; (4) PURPOSE; (5) TRANSACTION; (6) IMPRESSION; (7) REASON; (8) FOOD AND BEVERAGE; and (9) NUMBER AND PERSON. The only problem a few students might encounter is interference from their own language system, especially in word order.

  18. Recognizing Young Readers' Spoken Questions

    Science.gov (United States)

    Chen, Wei; Mostow, Jack; Aist, Gregory

    2013-01-01

    Free-form spoken input would be the easiest and most natural way for young children to communicate to an intelligent tutoring system. However, achieving such a capability poses a challenge both to instruction design and to automatic speech recognition. To address the difficulties of accepting such input, we adopt the framework of predictable…

  19. Correlative Conjunctions in Spoken Texts

    Czech Academy of Sciences Publication Activity Database

    Poukarová, Petra

    2017-01-01

    Vol. 68, No. 2 (2017), pp. 305-315, ISSN 0021-5597. R&D Projects: GA ČR GA15-01116S. Institutional support: RVO:68378092. Keywords: correlative conjunctions * spoken Czech * cohesion. Subject RIV: AI - Linguistics. OECD field: Linguistics. http://www.juls.savba.sk/ediela/jc/2017/2/jc17-02.pdf

  20. Ragnar Rommetveit's Approach to Everyday Spoken Dialogue from Within.

    Science.gov (United States)

    Kowal, Sabine; O'Connell, Daniel C

    2016-04-01

    The following article presents basic concepts and methods of Ragnar Rommetveit's (born 1924) hermeneutic-dialogical approach to everyday spoken dialogue with a focus on both shared consciousness and linguistically mediated meaning. He developed this approach originally in his engagement of mainstream linguistic and psycholinguistic research of the 1960s and 1970s. He criticized this research tradition for its individualistic orientation and its adherence to experimental methodology which did not allow the engagement of interactively established meaning and understanding in everyday spoken dialogue. As a social psychologist influenced by phenomenological philosophy, Rommetveit opted for an alternative conceptualization of such dialogue as a contextualized, partially private world, temporarily co-established by interlocutors on the basis of shared consciousness. He argued that everyday spoken dialogue should be investigated from within, i.e., from the perspectives of the interlocutors and from a psychology of the second person. Hence, he developed his approach with an emphasis on intersubjectivity, perspectivity and perspectival relativity, meaning potential of utterances, and epistemic responsibility of interlocutors. In his methods, he limited himself for the most part to casuistic analyses, i.e., logical analyses of fictitious examples to argue for the plausibility of his approach. After many years of experimental research on language, he pursued his phenomenologically oriented research on dialogue in English-language publications from the late 1980s up to 2003. During that period, he engaged psycholinguistic research on spoken dialogue carried out by Anglo-American colleagues only occasionally. Although his work remained unfinished and open to development, it provides both a challenging alternative and supplement to current Anglo-American research on spoken dialogue and some overlap therewith.

  1. Criteria for the segmentation of spoken input into individual utterances

    OpenAIRE

    Mast, Marion; Maier, Elisabeth; Schmitz, Birte

    1995-01-01

    This report describes how spoken language turns are segmented into utterances in the framework of the verbmobil project. The problem of segmenting turns is directly related to the task of annotating a discourse with dialogue act information: an utterance can be characterized as a stretch of dialogue that is attributed one dialogue act. Unfortunately, this rule in many cases is insufficient and many doubtful cases remain. We tried to at least reduce the number of unclear cases by providing a n...

  2. Uncertainty quantification and sensitivity analysis with CASL Core Simulator VERA-CS

    International Nuclear Information System (INIS)

    Brown, C.S.; Zhang, Hongbin

    2016-01-01

    VERA-CS (Virtual Environment for Reactor Applications, Core Simulator) is a coupled neutron transport and thermal-hydraulics code under development by the Consortium for Advanced Simulation of Light Water Reactors (CASL). An approach to uncertainty quantification and sensitivity analysis with VERA-CS was developed, and a new toolkit was created to perform these analyses. A 2 × 2 fuel assembly model was developed and simulated with VERA-CS, and uncertainty quantification and sensitivity analysis were performed with fourteen uncertain input parameters. The minimum departure from nucleate boiling ratio (MDNBR), maximum fuel centerline temperature, and maximum outer clad surface temperature were chosen as the figures of merit. Pearson, Spearman, and partial correlation coefficients were considered for all of the figures of merit in the sensitivity analysis, and coolant inlet temperature was consistently the most influential parameter. Parameters used as inputs to the critical heat flux calculation with the W-3 correlation were shown to be the most influential on the MDNBR, maximum fuel centerline temperature, and maximum outer clad surface temperature.
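
    The three correlation measures named above differ in what they control for. A sketch on synthetic samples (scipy's pearsonr and spearmanr are real functions; the partial correlation is written out via the standard residual construction, and the variable names are hypothetical):

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(4)
        n = 400
        inlet_temp = rng.normal(565.0, 2.0, n)
        power = rng.normal(1.0, 0.03, n)
        mdnbr = (2.0 - 0.08 * (inlet_temp - 565.0)
                 - 1.5 * (power - 1.0) + rng.normal(0, 0.02, n))

        print("pearson :", round(float(stats.pearsonr(inlet_temp, mdnbr)[0]), 2))
        print("spearman:", round(float(stats.spearmanr(inlet_temp, mdnbr)[0]), 2))

        def partial_corr(x, y, z):
            # correlate what is left of x and y after regressing out z
            zb = np.column_stack([np.ones(len(z)), z])
            rx = x - zb @ np.linalg.lstsq(zb, x, rcond=None)[0]
            ry = y - zb @ np.linalg.lstsq(zb, y, rcond=None)[0]
            return float(np.corrcoef(rx, ry)[0, 1])

        print("partial :", round(partial_corr(inlet_temp, mdnbr, power), 2))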

  3. Why Dose Frequency Affects Spoken Vocabulary in Preschoolers With Down Syndrome.

    Science.gov (United States)

    Yoder, Paul J; Woynaroski, Tiffany; Fey, Marc E; Warren, Steven F; Gardner, Elizabeth

    2015-07-01

    In an earlier randomized clinical trial, daily communication and language therapy resulted in more favorable spoken vocabulary outcomes than weekly therapy sessions in a subgroup of initially nonverbal preschoolers with intellectual disabilities that included only children with Down syndrome (DS). In this reanalysis of the dataset involving only the participants with DS, we found that more therapy led to larger spoken vocabularies at posttreatment because it increased children's canonical syllabic communication and receptive vocabulary growth early in the treatment phase.

  4. A Prerequisite to L1 Homophone Effects in L2 Spoken-Word Recognition

    Science.gov (United States)

    Nakai, Satsuki; Lindsay, Shane; Ota, Mitsuhiko

    2015-01-01

    When both members of a phonemic contrast in L2 (second language) are perceptually mapped to a single phoneme in one's L1 (first language), L2 words containing a member of that contrast can spuriously activate L2 words in spoken-word recognition. For example, upon hearing cattle, Dutch speakers of English are reported to experience activation…

  5. Time course of Chinese monosyllabic spoken word recognition: evidence from ERP analyses.

    Science.gov (United States)

    Zhao, Jingjing; Guo, Jingjing; Zhou, Fengying; Shu, Hua

    2011-06-01

    Evidence from event-related potential (ERP) analyses of English spoken words suggests that the time course of English word recognition in monosyllables is cumulative. Different types of phonological competitors (i.e., rhymes and cohorts) modulate the temporal grain of ERP components differentially (Desroches, Newman, & Joanisse, 2009). The time course of Chinese monosyllabic spoken word recognition could be different from that of English due to the differences in syllable structure between the two languages (e.g., lexical tones). The present study investigated the time course of Chinese monosyllabic spoken word recognition using ERPs to record brain responses online while subjects listened to spoken words. During the experiment, participants were asked to compare a target picture with a subsequent picture by judging whether or not these two pictures belonged to the same semantic category. The spoken word was presented between the two pictures, and participants were not required to respond during its presentation. We manipulated phonological competition by presenting spoken words that either matched or mismatched the target picture in one of the following four ways: onset mismatch, rime mismatch, tone mismatch, or syllable mismatch. In contrast to the English findings, our findings showed that the three partial mismatches (onset, rime, and tone mismatches) equally modulated the amplitudes and time courses of the N400 (a negative component that peaks about 400 ms after the spoken word), whereas the syllable-mismatched words elicited an earlier and stronger N400 than the three partial-mismatched words. The results shed light on the important role of syllable-level awareness in Chinese spoken word recognition and also imply that the recognition of Chinese monosyllabic words might rely more on global similarity of the whole syllable structure, or syllable-based holistic processing, than on phonemic segment-based processing. We interpret the differences in spoken word

  6. Serbian heritage language schools in the Netherlands through the eyes of the parents

    NARCIS (Netherlands)

    Palmen, Andrej

    It is difficult to find the exact number of other languages spoken besides Dutch in the Netherlands. A study showed that a total of 96 other languages are spoken by students attending Dutch primary and secondary schools. The variety of languages spoken shows the growth of linguistic diversity in the

  7. Linguistic adaptations during spoken and multimodal error resolution.

    Science.gov (United States)

    Oviatt, S; Bernard, J; Levow, G A

    1998-01-01

    Fragile error handling in recognition-based systems is a major problem that degrades their performance, frustrates users, and limits commercial potential. The aim of the present research was to analyze the types and magnitude of linguistic adaptation that occur during spoken and multimodal human-computer error resolution. A semiautomatic simulation method with a novel error-generation capability was used to collect samples of users' spoken and pen-based input immediately before and after recognition errors, and at different spiral depths in terms of the number of repetitions needed to resolve an error. When correcting persistent recognition errors, results revealed that users adapt their speech and language in three qualitatively different ways. First, they increase linguistic contrast through alternation of input modes and lexical content over repeated correction attempts. Second, when correcting with verbatim speech, they increase hyperarticulation by lengthening speech segments and pauses, and increasing the use of final falling contours. Third, when they hyperarticulate, users simultaneously suppress linguistic variability in their speech signal's amplitude and fundamental frequency. These findings are discussed from the perspective of enhancement of linguistic intelligibility. Implications are also discussed for corroboration and generalization of the Computer-elicited Hyperarticulate Adaptation Model (CHAM), and for improved error handling capabilities in next-generation spoken language and multimodal systems.

  8. Introducing Spoken Dialogue Systems into Intelligent Environments

    CERN Document Server

    Heinroth, Tobias

    2013-01-01

    Introducing Spoken Dialogue Systems into Intelligent Environments outlines the formalisms of a novel knowledge-driven framework for spoken dialogue management and presents the implementation of a model-based Adaptive Spoken Dialogue Manager (ASDM) called OwlSpeak. The authors have identified three stakeholders that potentially influence the behavior of the ASDM: the user, the SDS, and a complex Intelligent Environment (IE) consisting of various devices, services, and task descriptions. The theoretical foundation of a working ontology-based spoken dialogue description framework, the prototype implementation of the ASDM, and the evaluation activities that are presented as part of this book contribute to the ongoing spoken dialogue research by establishing the fertile ground of model-based adaptive spoken dialogue management. This monograph is ideal for advanced undergraduate students, PhD students, and postdocs as well as academic and industrial researchers and developers in speech and multimodal interactive ...

  9. Some Specific CASL Requirements for Advanced Multiphase Flow Simulation of Light Water Reactors

    Energy Technology Data Exchange (ETDEWEB)

    R. A. Berry

    2010-11-01

    Because of the diversity of physical phenomena occurring in boiling, flashing, and bubble collapse, and of the length and time scales of LWR systems, it is imperative that the models have the following features: • Both vapor and liquid phases (and noncondensible phases, if present) must be treated as compressible. • Models must be mathematically and numerically well-posed. • The modeling methodology must be multi-scale. A fundamental derivation of the multiphase governing equation system, which should be used as a basis for advanced multiphase modeling in LWR coolant systems, is given in the Appendix using the ensemble averaging method. The remainder of this work focuses specifically on the compressible, well-posed, and multi-scale requirements of advanced simulation methods for these LWR coolant systems, because these are the most fundamental aspects, without which widespread advancement cannot be claimed. Because of the expense of developing multiple special-purpose codes and the inherent inability to couple information across the multiple, separate length and time scales, efforts within CASL should be focused toward developing multi-scale approaches to solve those multiphase flow problems relevant to LWR design and safety analysis. Efforts should be aimed at developing well-designed unified physical/mathematical and high-resolution numerical models for compressible, all-speed multiphase flows spanning: (1) well-posed general mixture-level (true multiphase) models for fast transient situations and safety analysis, (2) DNS (Direct Numerical Simulation)-like models to resolve interface-level phenomena like flashing and boiling flows, and critical heat flux determination (necessarily including conjugate heat transfer), and (3) multi-scale methods to resolve both (1) and (2) automatically, depending upon specified mesh resolution, and to couple different flow models (single-phase, multiphase with several velocities and pressures, multiphase with single
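
    For orientation only (a standard two-fluid form, not a reproduction of the Appendix's ensemble-averaged derivation), the averaged mass balance for phase k in such models reads

        \frac{\partial}{\partial t}(\alpha_k \rho_k) + \nabla \cdot (\alpha_k \rho_k \mathbf{u}_k) = \Gamma_k, \qquad \sum_k \Gamma_k = 0,

    where \alpha_k is the phase volume fraction, \rho_k the phase density, \mathbf{u}_k the phase velocity, and \Gamma_k the interphase mass exchange (e.g., from flashing, boiling, or condensation). The momentum and energy balances carry analogous averaged interfacial exchange terms, and it is the closure of those terms that decides whether the resulting system is well-posed.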

  10. How Do Raters Judge Spoken Vocabulary?

    Science.gov (United States)

    Li, Hui

    2016-01-01

    The aim of the study was to investigate how raters come to their decisions when judging spoken vocabulary. Segmental rating was introduced to quantify raters' decision-making process. It is hoped that this simulated study brings fresh insight to future methodological considerations with spoken data. Twenty trainee raters assessed five Chinese…

  11. Development and Relationships Between Phonological Awareness, Morphological Awareness and Word Reading in Spoken and Standard Arabic

    Directory of Open Access Journals (Sweden)

    Rachel Schiff

    2018-04-01

    This study addressed the development of, and the relationship between, foundational metalinguistic skills and word reading skills in Arabic. It compared Arabic-speaking children's phonological awareness (PA), morphological awareness, and voweled and unvoweled word reading skills in the spoken and standard language varieties separately, in children across five grade levels from childhood to adolescence. Second, it investigated whether skills developed in the spoken variety of Arabic predict reading in the standard variety. Results indicate that although individual differences between students in PA are eliminated toward the end of elementary school in both spoken and standard language varieties, gaps in morphological awareness and in reading skills persisted through the junior and high school years. The results also show that the gap in reading accuracy and fluency between Spoken Arabic (SpA) and Standard Arabic (StA) was evident in both voweled and unvoweled words. Finally, regression analyses showed that morphological awareness in SpA contributed to reading fluency in StA, i.e., children's early morphological awareness in SpA explained variance in children's gains in reading fluency in StA. These findings have important theoretical and practical contributions for Arabic reading theory in general, and they extend previous work regarding the cross-linguistic relevance of foundational metalinguistic skills in the first acquired language to reading in a second language, as in societal bilingualism contexts, or a second language variety, as in diglossic contexts.

  12. Language discrimination by Java sparrows.

    Science.gov (United States)

    Watanabe, Shigeru; Yamamoto, Erico; Uozumi, Midori

    2006-07-01

    Java sparrows (Padda oryzivora) were trained to discriminate English from Chinese spoken by a bilingual speaker. They could learn discrimination and showed generalization to new sentences spoken by the same speaker and those spoken by a new speaker. Thus, the birds distinguished between English and Chinese. Although auditory cues for the discrimination were not specified, this is the first evidence that non-mammalian species can discriminate human languages.

  13. Digital Language Death

    Science.gov (United States)

    Kornai, András

    2013-01-01

    Of the approximately 7,000 languages spoken today, some 2,500 are generally considered endangered. Here we argue that this consensus figure vastly underestimates the danger of digital language death, in that less than 5% of all languages can still ascend to the digital realm. We present evidence of a massive die-off caused by the digital divide. PMID:24167559

  15. Multiclausal Utterances Aren't Just for Big Kids: A Framework for Analysis of Complex Syntax Production in Spoken Language of Preschool- and Early School-Age Children

    Science.gov (United States)

    Arndt, Karen Barako; Schuele, C. Melanie

    2013-01-01

    Complex syntax production emerges shortly after the emergence of two-word combinations in oral language and continues to develop through the school-age years. This article defines a framework for the analysis of complex syntax in the spontaneous language of preschool- and early school-age children. The purpose of this article is to provide…

  16. Language Development in Children with Language Disorders: An Introduction to Skinner's Verbal Behavior and the Techniques for Initial Language Acquisition

    Science.gov (United States)

    Casey, Laura Baylot; Bicard, David F.

    2009-01-01

    Language development in typically developing children has a very predictable pattern beginning with crying, cooing, babbling, and gestures along with the recognition of spoken words, comprehension of spoken words, and then one word utterances. This predictable pattern breaks down for children with language disorders. This article will discuss…

  17. Investigating L2 Spoken English through the Role Play Learner Corpus

    Science.gov (United States)

    Nava, Andrea; Pedrazzini, Luciana

    2011-01-01

    We describe an exploratory study carried out within the University of Milan, Department of English, the aim of which was to analyse features of the spoken English of first-year Modern Languages undergraduates. We compiled a learner corpus, the "Role Play" corpus, which consisted of 69 role-play interactions in English carried out by…

  18. Between Syntax and Pragmatics: The Causal Conjunction Protože in Spoken and Written Czech

    Czech Academy of Sciences Publication Activity Database

    Čermáková, Anna; Komrsková, Zuzana; Kopřivová, Marie; Poukarová, Petra

    -, 25.04.2017 (2017), s. 393-414 ISSN 2509-9507 R&D Projects: GA ČR GA15-01116S Institutional support: RVO:68378092 Keywords : Causality * Discourse marker * Spoken language * Czech Subject RIV: AI - Linguistics OBOR OECD: Linguistics https://link.springer.com/content/pdf/10.1007%2Fs41701-017-0014-y.pdf

  19. Why Dose Frequency Affects Spoken Vocabulary in Preschoolers with Down Syndrome

    Science.gov (United States)

    Yoder, Paul J.; Woynaroski, Tiffany; Fey, Marc E.; Warren, Steven F.; Gardner, Elizabeth

    2015-01-01

    In an earlier randomized clinical trial, daily communication and language therapy resulted in more favorable spoken vocabulary outcomes than weekly therapy sessions in a subgroup of initially nonverbal preschoolers with intellectual disabilities that included only children with Down syndrome (DS). In this reanalysis of the dataset involving only…

  20. Webster's word power better English grammar improve your written and spoken English

    CERN Document Server

    Kirkpatrick, Betty

    2014-01-01

    With question-and-answer sections throughout, this book helps you to improve your written and spoken English through understanding the structure of the English language. This is a thorough and useful book with all parts of speech and grammar explained. Used by ELT self-study students.

  1. Monitoring the Performance of Human and Automated Scores for Spoken Responses

    Science.gov (United States)

    Wang, Zhen; Zechner, Klaus; Sun, Yu

    2018-01-01

    As automated scoring systems for spoken responses are increasingly used in language assessments, testing organizations need to analyze their performance, as compared to human raters, across several dimensions, for example, on individual items or based on subgroups of test takers. In addition, there is a need in testing organizations to establish…

  2. Complex sentences in sign languages: Modality, typology, discourse

    NARCIS (Netherlands)

    Pfau, R.; Steinbach, M.; Pfau, R.; Steinbach, M.; Herrmann, A.

    2016-01-01

    Sign language grammars, just like spoken language grammars, generally provide various means to generate different kinds of complex syntactic structures including subordination of complement clauses, adverbial clauses, or relative clauses. Studies on various sign languages have revealed that sign

  3. Nuffield Early Language Intervention: Evaluation Report and Executive Summary

    Science.gov (United States)

    Sibieta, Luke; Kotecha, Mehul; Skipp, Amy

    2016-01-01

    The Nuffield Early Language Intervention is designed to improve the spoken language ability of children during the transition from nursery to primary school. It is targeted at children with relatively poor spoken language skills. Three sessions per week are delivered to groups of two to four children starting in the final term of nursery and…

  4. The languages of the world

    National Research Council Canada - National Science Library

    Katzner, Kenneth

    2002-01-01

    ... on populations and the numbers of people speaking each language. Features include: nearly 600 languages identified as to where they are spoken and the family to which they belong; over 200 languages individually described, with sample passages and English translation; fascinating insights into the history and development of individual languages; a…

  5. Information Structure in Sign Languages

    NARCIS (Netherlands)

    Kimmelman, V.; Pfau, R.; Féry, C.; Ishihara, S.

    2016-01-01

    This chapter demonstrates that the Information Structure notions Topic and Focus are relevant for sign languages, just as they are for spoken languages. Data from various sign languages reveal that, across sign languages, Information Structure is encoded by syntactic and prosodic strategies, often

  6. Stability in Chinese and Malay heritage languages as a source of divergence

    NARCIS (Netherlands)

    Aalberse, S.; Moro, F.; Braunmüller, K.; Höder, S.; Kühl, K.

    2014-01-01

    This article discusses Malay and Chinese heritage languages as spoken in the Netherlands. Heritage speakers are dominant in another language and use their heritage language less. Moreover, they have qualitatively and quantitatively different input from monolinguals. Heritage languages are often

  7. Stability in Chinese and Malay heritage languages as a source of divergence

    NARCIS (Netherlands)

    Aalberse, S.; Moro, F.R.; Braunmüller, K.; Höder, S.; Kühl, K.

    2015-01-01

    This article discusses Malay and Chinese heritage languages as spoken in the Netherlands. Heritage speakers are dominant in another language and use their heritage language less. Moreover, they have qualitatively and quantitatively different input from monolinguals. Heritage languages are often

  8. Effects of Auditory and Visual Priming on the Identification of Spoken Words.

    Science.gov (United States)

    Shigeno, Sumi

    2017-04-01

    This study examined the effects of preceding contextual stimuli, either auditory or visual, on the identification of spoken target words. Fifty-one participants (29% males, 71% females; mean age = 24.5 years, SD = 8.5) were divided into three groups: no context, auditory context, and visual context. All target stimuli were spoken words masked with white noise. The relationships between the context and target stimuli were as follows: identical word, similar word, and unrelated word. Participants presented with context experienced a sequence of six context stimuli in the form of either spoken words or photographs. Auditory and visual context conditions produced similar results, but the auditory context aided word identification more than the visual context in the similar word relationship. We discuss these results in the light of top-down processing, motor theory, and the phonological system of language.

  9. Language

    DEFF Research Database (Denmark)

    Sanden, Guro Refsum

    2016-01-01

    Purpose: – The purpose of this paper is to analyse the consequences of globalisation in the area of corporate communication, and investigate how language may be managed as a strategic resource. Design/methodology/approach: – A review of previous studies on the effects of globalisation on corporate communication and the implications of language management initiatives in international business. Findings: – Efficient language management can turn language into a strategic resource. Language needs analyses, i.e. linguistic auditing/language check-ups, can be used to determine the language situation…

  10. Guest Comment: Universal Language Requirement.

    Science.gov (United States)

    Sherwood, Bruce Arne

    1979-01-01

    Explains that reading English among scientists is almost universal; however, there are enormous problems with spoken English. Advocates the use of Esperanto as a viable alternative, and as a language requirement for graduate work. (GA)

  11. Moving conceptualizations of language and literacy in SLA

    DEFF Research Database (Denmark)

    Laursen, Helle Pia

    Moving conceptualizations of language and literacy in SLA. In this colloquium, we aim to problematize the concepts of language and literacy in the field that is termed "second language" research and seek ways to critically connect the terms… and conceptualizations of language and literacy in research on (second) language acquisition. When examining children’s first language acquisition, spoken language has been the primary concern in scholarship: a child acquires oral language first and written language follows later, i.e. language precedes literacy. On the other hand, many second or foreign language learners learn mostly through written language, or learn spoken and written language at the same time. Thus the connections between spoken and written (and visual) modalities, i.e. between language and literacy, are complex in research on language acquisition… When considering current day language use, for example…

  12. Interferência da língua falada na escrita de crianças: processos de apagamento da oclusiva dental /d/ e da vibrante final /r/ Interference of the spoken language on children's writing: cancellation processes of the dental occlusive /d/ and final vibrant /r/

    Directory of Open Access Journals (Sweden)

    Socorro Cláudia Tavares de Sousa

    2009-01-01

    Full Text Available The present study investigates the influence of spoken language on children's writing in relation to the phenomena of cancellation of the dental occlusive /d/ and the final vibrant /r/. We developed and administered a research instrument to primary school students in Fortaleza, and used the SPSS software to analyze the data. The results showed that male sex and words of three or more syllables are factors that partially influence the realization of the dependent variable /no/, and that verbs and level of schooling are conditioning elements for the cancellation of the final vibrant /r/.

  13. Book review. Neurolinguistics. An Introduction to Spoken Language Processing and its Disorders, John Ingram. Cambridge University Press, Cambridge (Cambridge Textbooks in Linguistics) (2007). xxi + 420 pp., ISBN 978-0-521-79640-8 (pb)

    OpenAIRE

    Schiller, N.O.

    2009-01-01

    The present textbook is one of the few recent textbooks in the area of neurolinguistics and will be welcomed by teachers of neurolinguistic courses as well as researchers interested in the topic. Neurolinguistics is a huge area, and the boundaries between psycho- and neurolinguistics are not sharp. Often the term neurolinguistics is used to refer to research involving neuropsychological patients suffering from some sort of language disorder or impairment. Also, the term neuro- rather than psy...

  14. Syllable Frequency and Spoken Word Recognition: An Inhibitory Effect.

    Science.gov (United States)

    González-Alvarez, Julio; Palomar-García, María-Angeles

    2016-08-01

    Research has shown that syllables play a relevant role in lexical access in Spanish, a shallow language with a transparent syllabic structure. Syllable frequency has been shown to have an inhibitory effect on visual word recognition in Spanish. However, no study has examined the syllable frequency effect on spoken word recognition. The present study tested the effect of the frequency of the first syllable on recognition of spoken Spanish words. A sample of 45 young adults (33 women, 12 men; M = 20.4, SD = 2.8; college students) performed an auditory lexical decision task on 128 Spanish disyllabic words and 128 disyllabic nonwords. Words were selected so that lexical and first-syllable frequency were manipulated in a within-subject 2 × 2 design, and six additional independent variables were controlled: token positional frequency of the second syllable, number of phonemes, position of lexical stress, number of phonological neighbors, number of phonological neighbors of higher frequency than the word, and acoustic durations measured in milliseconds. Decision latencies and error rates were submitted to linear mixed models analysis. Results showed a typical facilitatory effect of lexical frequency and, importantly, an inhibitory effect of first-syllable frequency on reaction times and error rates.
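
    For readers unfamiliar with this kind of analysis, the sketch below shows one plausible way to fit such a linear mixed model in Python, with synthetic latencies standing in for the study's data; the coding of the 2 × 2 design and the by-subject random intercept follow the description above, but the actual analysis may have differed.

    ```python
    # Sketch of a linear mixed model for the 2 x 2 design described above
    # (lexical frequency x first-syllable frequency), with a by-subject random
    # intercept. All data here are synthetic stand-ins.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(1)
    subjects = np.repeat(np.arange(45), 4)                   # 45 listeners, 4 design cells
    word_freq = np.tile(["low", "low", "high", "high"], 45)
    syll_freq = np.tile(["low", "high", "low", "high"], 45)
    rt = (900
          - 60 * (word_freq == "high")                       # facilitatory lexical frequency
          + 40 * (syll_freq == "high")                       # inhibitory first-syllable frequency
          + rng.normal(0, 50, subjects.size))

    df = pd.DataFrame({"subject": subjects, "word_freq": word_freq,
                       "syll_freq": syll_freq, "rt": rt})

    # Fixed effects for the 2 x 2 design, random intercept per subject.
    m = smf.mixedlm("rt ~ word_freq * syll_freq", data=df, groups=df["subject"]).fit()
    print(m.summary())
    ```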

  15. Inuit Sign Language: a contribution to sign language typology

    NARCIS (Netherlands)

    Schuit, J.; Baker, A.; Pfau, R.

    2011-01-01

    Sign language typology is a fairly new research field and typological classifications have yet to be established. For spoken languages, these classifications are generally based on typological parameters; it would thus be desirable to establish these for sign languages. In this paper, different

  16. Implications of Hegel's Theories of Language on Second Language Teaching

    Science.gov (United States)

    Wu, Manfred

    2016-01-01

    This article explores the implications of Hegel's theories of language on second language (L2) teaching. Three among the various concepts in Hegel's theories of language are selected. They are the crucial role of intersubjectivity; the primacy of the spoken over the written form; and the importance of the training of form or grammar. Applying…

  17. Linguistic Landscape and Minority Languages

    Science.gov (United States)

    Cenoz, Jasone; Gorter, Durk

    2006-01-01

    This paper focuses on the linguistic landscape of two streets in two multilingual cities in Friesland (Netherlands) and the Basque Country (Spain) where a minority language is spoken, Basque or Frisian. The paper analyses the use of the minority language (Basque or Frisian), the state language (Spanish or Dutch) and English as an international…

  18. Language Use of Frisian Bilingual Teenagers on Social Media

    NARCIS (Netherlands)

    Jongbloed-Faber, L.; Van de Velde, H.; van der Meer, C.; Klinkenberg, E.L.

    2016-01-01

    This paper explores the use of Frisian, a minority language spoken in the Dutch province of Fryslân, on social media by Frisian teenagers. Frisian is the mother tongue of 54% of the 650,000 inhabitants and is predominantly a spoken language: 64% of the Frisian population can speak it well, while

  19. Predicting user mental states in spoken dialogue systems

    Science.gov (United States)

    Callejas, Zoraida; Griol, David; López-Cózar, Ramón

    2011-12-01

    In this paper we propose a method for predicting the user mental state for the development of more efficient and usable spoken dialogue systems. This prediction, carried out for each user turn in the dialogue, makes it possible to adapt the system dynamically to the user needs. The mental state is built on the basis of the emotional state of the user and their intention, and is recognized by means of a module conceived as an intermediate phase between natural language understanding and the dialogue management in the architecture of the systems. We have implemented the method in the UAH system, for which the evaluation results with both simulated and real users show that taking into account the user's mental state improves system performance as well as its perceived quality.
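
    The module described, sitting between natural language understanding and dialog management, can be caricatured in a few lines. The sketch below is purely illustrative: the state labels and adaptation rules are invented, not those of the UAH system.

    ```python
    # Illustrative sketch (not the UAH implementation) of a mental-state module
    # that fuses a recognized emotion with the inferred intention and lets the
    # dialog manager adapt its next action.
    from dataclasses import dataclass

    @dataclass
    class MentalState:
        emotion: str    # e.g. "neutral", "angry", "bored" (hypothetical labels)
        intention: str  # e.g. "query_timetable", "repeat", "hang_up"

    def predict_mental_state(emotion: str, intention: str) -> MentalState:
        # In a real system these would come from emotion recognition and NLU.
        return MentalState(emotion=emotion, intention=intention)

    def select_dialog_action(state: MentalState) -> str:
        # Adapt the system dynamically to the user's needs.
        if state.emotion == "angry" and state.intention == "repeat":
            return "apologize_and_rephrase"   # simplify wording, offer help
        if state.emotion == "bored":
            return "shorten_prompts"          # speed the dialog up
        return "continue_normal_flow"

    print(select_dialog_action(predict_mental_state("angry", "repeat")))
    ```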

  20. Predicting user mental states in spoken dialogue systems

    Directory of Open Access Journals (Sweden)

    Griol David

    2011-01-01

    Full Text Available In this paper we propose a method for predicting the user mental state for the development of more efficient and usable spoken dialogue systems. This prediction, carried out for each user turn in the dialogue, makes it possible to adapt the system dynamically to the user needs. The mental state is built on the basis of the emotional state of the user and their intention, and is recognized by means of a module conceived as an intermediate phase between natural language understanding and the dialogue management in the architecture of the systems. We have implemented the method in the UAH system, for which the evaluation results with both simulated and real users show that taking into account the user's mental state improves system performance as well as its perceived quality.

  1. Artfulness in Young Children's Spoken Narratives

    Science.gov (United States)

    Glenn-Applegate, Katherine; Breit-Smith, Allison; Justice, Laura M.; Piasta, Shayne B.

    2010-01-01

    Research Findings: Artfulness is rarely considered as an indicator of quality in young children's spoken narratives. Although some studies have examined artfulness in the narratives of children 5 and older, no studies to date have focused on the artfulness of preschoolers' oral narratives. This study examined the artfulness of fictional spoken…

  2. Vowel and Consonant Replacements in the Spoken French of Ijebu Undergraduate French Learners in Selected Universities in South West of Nigeria

    Directory of Open Access Journals (Sweden)

    Iyiola Amos Damilare

    2015-04-01

    Full Text Available Substitution is a phonological process in language. Existing studies have examined deletion in several languages and dialects, with less attention paid to the spoken French of Ijebu undergraduates. This article therefore examined substitution as a dominant phenomenon in the spoken French of thirty-four Ijebu Undergraduate French Learners (IUFLs) in selected universities in South West Nigeria, with a view to establishing the dominance of substitution in the spoken French of IUFLs. Data were collected by tape-recording participants’ production of 30 sentences containing both French vowel and consonant sounds. The results revealed inappropriate replacement of vowels and consonants in medial and final positions in the spoken French of IUFLs.

  3. A Study on Motivation and Strategy Use of Bangladeshi University Students to Learn Spoken English

    OpenAIRE

    Mst. Moriam, Quadir

    2008-01-01

    This study discusses motivation and strategy use of university students to learn spoken English in Bangladesh. A group of 355 (187 males and 168 females) university students participated in this investigation. To measure learners' degree of motivation a modified version of questionnaire used by Schmidt et al. (1996) was administered. Participants reported their strategy use on a modified version of SILL, the Strategy Inventory for Language Learning, version 7.0 (Oxford, 1990). In order to fin...

  4. Parental mode of communication is essential for speech and language outcomes in cochlear implanted children

    DEFF Research Database (Denmark)

    Percy-Smith, Lone; Cayé-Thomasen, Per; Breinegaard, Nina

    2010-01-01

    The present study demonstrates a very strong effect of the parental communication mode on the auditory capabilities and speech/language outcome for cochlear implanted children. The children exposed to spoken language had higher odds of scoring high in all tests applied, and the findings suggest a very clear benefit of spoken language communication with a cochlear implanted child.

  5. The grammaticalization of gestures in sign languages

    NARCIS (Netherlands)

    van Loon, E.; Pfau, R.; Steinbach, M.; Müller, C.; Cienki, A.; Fricke, E.; Ladewig, S.H.; McNeill, D.; Bressem, J.

    2014-01-01

    Recent studies on grammaticalization in sign languages have shown that, for the most part, the grammaticalization paths identified in sign languages parallel those previously described for spoken languages. Hence, the general principles of grammaticalization do not depend on the modality of language

  6. Syllable frequency and word frequency effects in spoken and written word production in a non-alphabetic script.

    Science.gov (United States)

    Zhang, Qingfang; Wang, Cheng

    2014-01-01

    The effects of word frequency (WF) and syllable frequency (SF) are well-established phenomena in domains such as spoken production in alphabetic languages. Chinese, as a non-alphabetic language, presents unique lexical and phonological properties in speech production. For example, the proximate unit of phonological encoding is the syllable in Chinese but the segment in Dutch, French or English. The present study investigated the effects of WF and SF, and their interaction, in Chinese written and spoken production. Significant facilitatory WF and SF effects were observed in spoken as well as in written production. The SF effect in writing indicated that phonological properties (i.e., syllabic frequency) constrain orthographic output via a lexical route, at least in Chinese written production. However, the SF effect over repetitions diverged between modalities: it was significant in the first two repetitions in spoken production, whereas it was significant only in the second repetition in written production. Given the fragility of the SF effect in writing, we suggest that the phonological influence in handwritten production is not mandatory and universal, and that it is modulated by experimental manipulations. This provides evidence for the orthographic autonomy hypothesis, rather than the phonological mediation hypothesis. The absence of an interaction between WF and SF showed that the SF effect is independent of the WF effect in both spoken and written output modalities. The implications of these results for written production models are discussed.

  7. Syllable frequency and word frequency effects in spoken and written word production in a non-alphabetic script

    Directory of Open Access Journals (Sweden)

    Qingfang eZhang

    2014-02-01

    Full Text Available The effects of word frequency and syllable frequency are well-established phenomena in domains such as spoken production in alphabetic languages. Chinese, as a non-alphabetic language, presents unique lexical and phonological properties in speech production. For example, the proximate unit of phonological encoding is the syllable in Chinese but the segment in Dutch, French or English. The present study investigated the effects of word frequency and syllable frequency, and their interaction, in Chinese written and spoken production. Significant facilitatory word frequency and syllable frequency effects were observed in spoken as well as in written production. The syllable frequency effect in writing indicated that phonological properties (i.e., syllabic frequency) constrain orthographic output via a lexical route, at least in Chinese written production. However, the syllable frequency effect over repetitions diverged between modalities: it was significant in the first two repetitions in spoken production, whereas it was significant only in the second repetition in written production. Given the fragility of the syllable frequency effect in writing, we suggest that the phonological influence in handwritten production is not mandatory and universal, and that it is modulated by experimental manipulations. This provides evidence for the orthographic autonomy hypothesis, rather than the phonological mediation hypothesis. The absence of an interaction between word frequency and syllable frequency showed that the syllable frequency effect is independent of the word frequency effect in both spoken and written output modalities. The implications of these results for written production models are discussed.

  8. Spoken sentence production in college students with dyslexia: working memory and vocabulary effects.

    Science.gov (United States)

    Wiseheart, Rebecca; Altmann, Lori J P

    2017-11-21

    Individuals with dyslexia demonstrate syntactic difficulties on tasks of language comprehension, yet little is known about spoken language production in this population. The aims were to investigate whether spoken sentence production in college students with dyslexia is less proficient than in typical readers, and to determine whether group differences can be attributed to cognitive differences between groups. Fifty-one college students with and without dyslexia were asked to produce sentences from stimuli comprising a verb and two nouns. Verb types varied in argument structure and morphological form, and nouns varied in animacy. Outcome measures were precision (measured by fluency, grammaticality and completeness) and efficiency (measured by response times). Vocabulary and working memory tests were also administered and used as predictors of sentence production performance. Relative to non-dyslexic peers, students with dyslexia responded significantly more slowly and produced sentences that were significantly less precise in terms of fluency, grammaticality and completeness. The primary predictors of precision and efficiency were working memory, which differed between groups, and vocabulary, which did not. College students with dyslexia were significantly less facile and flexible on this spoken sentence-production task than typical readers, which is consistent with previous studies of school-age children with dyslexia. Group differences in performance were traced primarily to limited working memory, and were somewhat mitigated by strong vocabulary.

  9. Narrative skills in deaf children who use spoken English: Dissociations between macro and microstructural devices.

    Science.gov (United States)

    Jones, A C; Toscano, E; Botting, N; Marshall, C R; Atkinson, J R; Denmark, T; Herman, R; Morgan, G

    2016-12-01

    Previous research has highlighted that deaf children acquiring spoken English have difficulties in narrative development relative to their hearing peers, both in terms of macro-structure and of micro-structural devices. The majority of previous research focused on narrative tasks designed for hearing children that depend on good receptive language skills. The current study compared narratives of 6- to 11-year-old deaf children who use spoken English (N=59) with those of hearing peers matched for age and non-verbal intelligence. To examine the role of general language abilities, single-word vocabulary was also assessed. Narratives were elicited by the retelling of a story presented non-verbally in video format. Results showed that deaf and hearing children had equivalent macro-structure skills, but the deaf group showed poorer performance on micro-structural components. Furthermore, the deaf group gave less detailed responses to inferencing probe questions, indicating poorer understanding of the story's underlying message. For deaf children, micro-level devices correlated most strongly with the vocabulary measure. These findings suggest that deaf children, despite spoken language delays, are able to convey the main elements of content and structure in narrative but have greater difficulty in using grammatical devices more dependent on finer linguistic and pragmatic skills.

  10. Fourth International Workshop on Spoken Dialog Systems

    CERN Document Server

    Rosset, Sophie; Garnier-Rizet, Martine; Devillers, Laurence. Natural Interaction with Robots, Knowbots and Smartphones: Putting Spoken Dialog Systems into Practice

    2014-01-01

    These proceedings present the state of the art in spoken dialog systems with applications in robotics, knowledge access and communication. They address specifically: 1. Dialog for interacting with smartphones; 2. Dialog for open-domain knowledge access; 3. Dialog for robot interaction; 4. Mediated dialog (including crosslingual dialog involving speech translation); and 5. Dialog quality evaluation. These articles were presented at the IWSDS 2012 workshop.

  11. Dust, a spoken word poem by Guante

    Directory of Open Access Journals (Sweden)

    Kyle Tran Myhre

    2017-06-01

    Full Text Available In "Dust," spoken word poet Kyle "Guante" Tran Myhre crafts a multi-vocal exploration of the connections between the internment of Japanese Americans during World War II and the current struggles against xenophobia in general and Islamophobia specifically. Weaving together personal narrative, quotes from multiple voices, and "verse journalism" (a term coined by Gwendolyn Brooks), the poem seeks to bridge past and present in order to inform a more just future.

  12. Sentence Repetition in Deaf Children with Specific Language Impairment in British Sign Language

    Science.gov (United States)

    Marshall, Chloë; Mason, Kathryn; Rowley, Katherine; Herman, Rosalind; Atkinson, Joanna; Woll, Bencie; Morgan, Gary

    2015-01-01

    Children with specific language impairment (SLI) perform poorly on sentence repetition tasks across different spoken languages, but until now, this methodology has not been investigated in children who have SLI in a signed language. Users of a natural sign language encode different sentence meanings through their choice of signs and by altering…

  13. Family Language Policy and School Language Choice: Pathways to Bilingualism and Multilingualism in a Canadian Context

    Science.gov (United States)

    Slavkov, Nikolay

    2017-01-01

    This article reports on a survey with 170 school-age children growing up with two or more languages in the Canadian province of Ontario where English is the majority language, French is a minority language, and numerous other minority languages may be spoken by immigrant or Indigenous residents. Within this context the study focuses on minority…

  14. Micro Language Planning and Cultural Renaissance in Botswana

    Science.gov (United States)

    Alimi, Modupe M.

    2016-01-01

    Many African countries exhibit complex patterns of language use because of linguistic pluralism. The situation is often compounded by the presence of at least one foreign language that is either the official or second language. The language situation in Botswana depicts this complex pattern. Out of the 26 languages spoken in the country, including…

  15. CASL VMA FY16 Milestone Report (L3:VMA.VUQ.P13.07) Westinghouse Mixing with COBRA-TF

    Energy Technology Data Exchange (ETDEWEB)

    Gordon, Natalie [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)]

    2016-09-30

    COBRA-TF (CTF) is a low-resolution code currently maintained as CASL's subchannel analysis tool. CTF operates as a two-phase, compressible code over a mesh composed of subchannels and axially discretized nodes. In part because CTF is a low-resolution code, simulation run time is not computationally expensive, on the order of minutes. High-resolution codes such as STAR-CCM+ can be used to train lower-fidelity codes such as CTF. Unlike STAR-CCM+, CTF has no turbulence model, only a two-phase turbulent mixing coefficient, β. β can be set to a constant value or calculated in terms of Reynolds number using an empirical correlation. Results from STAR-CCM+ can be used to inform the appropriate value of β. Once β is calibrated, CTF runs can be an inexpensive alternative to costly STAR-CCM+ runs for scoping analyses. Based on the results of CTF runs, STAR-CCM+ can then be run for specific parameters of interest. CASL areas of application are CIPS for single-phase analysis and DNB-CTF for two-phase analysis.
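
    How high-fidelity results might inform β can be illustrated with a toy calibration: fit an empirical Reynolds-dependent correlation to mixing rates of the kind a code like STAR-CCM+ would supply. The data and correlation form below are stand-ins, not CASL's actual workflow.

    ```python
    # Hedged sketch: calibrate a Reynolds-dependent mixing correlation,
    # beta = a * Re**b, against hypothetical high-fidelity "training" data.
    import numpy as np
    from scipy.optimize import curve_fit

    # Hypothetical STAR-CCM+-style results: Reynolds number vs. mixing rate.
    re_data = np.array([2.0e4, 4.0e4, 6.0e4, 8.0e4, 1.0e5])
    mixing = np.array([0.011, 0.018, 0.024, 0.029, 0.034])

    def mixing_model(reynolds, a, b):
        # Power-law form, a common choice for empirical mixing correlations.
        return a * reynolds**b

    (a_fit, b_fit), _ = curve_fit(mixing_model, re_data, mixing, p0=(1e-5, 0.7))
    print(f"calibrated correlation: beta = {a_fit:.3e} * Re^{b_fit:.3f}")
    ```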

  16. The Lightening Veil: Language Revitalization in Wales

    Science.gov (United States)

    Williams, Colin H.

    2014-01-01

    The Welsh language, which is indigenous to Wales, is one of six Celtic languages. It is spoken by 562,000 speakers, 19% of the population of Wales, according to the 2011 U.K. Census, and it is estimated that it is spoken by a further 200,000 residents elsewhere in the United Kingdom. No exact figures exist for the undoubted thousands of other…

  17. Effects of speech clarity on recognition memory for spoken sentences.

    Science.gov (United States)

    Van Engen, Kristin J; Chandrasekaran, Bharath; Smiljanic, Rajka

    2012-01-01

    Extensive research shows that inter-talker variability (i.e., changing the talker) affects recognition memory for speech signals. However, relatively little is known about the consequences of intra-talker variability (i.e. changes in speaking style within a talker) on the encoding of speech signals in memory. It is well established that speakers can modulate the characteristics of their own speech and produce a listener-oriented, intelligibility-enhancing speaking style in response to communication demands (e.g., when speaking to listeners with hearing impairment or non-native speakers of the language). Here we conducted two experiments to examine the role of speaking style variation in spoken language processing. First, we examined the extent to which clear speech provided benefits in challenging listening environments (i.e. speech-in-noise). Second, we compared recognition memory for sentences produced in conversational and clear speaking styles. In both experiments, semantically normal and anomalous sentences were included to investigate the role of higher-level linguistic information in the processing of speaking style variability. The results show that acoustic-phonetic modifications implemented in listener-oriented speech lead to improved speech recognition in challenging listening conditions and, crucially, to a substantial enhancement in recognition memory for sentences.

  18. Estimating Spoken Dialog System Quality with User Models

    CERN Document Server

    Engelbrecht, Klaus-Peter

    2013-01-01

    Spoken dialog systems have the potential to offer highly intuitive user interfaces, as they allow systems to be controlled using natural language. However, the complexity inherent in natural language dialogs means that careful testing of the system must be carried out from the very beginning of the design process.   This book examines how user models can be used to support such early evaluations in two ways:  by running simulations of dialogs, and by estimating the quality judgments of users. First, a design environment supporting the creation of dialog flows, the simulation of dialogs, and the analysis of the simulated data is proposed.  How the quality of user simulations may be quantified with respect to their suitability for both formative and summative evaluation is then discussed. The remainder of the book is dedicated to the problem of predicting quality judgments of users based on interaction data. New modeling approaches are presented, which process the dialogs as sequences, and which allow knowl...

  19. At grammatical faculty of language, flies outsmart men.

    Science.gov (United States)

    Stoop, Ruedi; Nüesch, Patrick; Stoop, Ralph Lukas; Bunimovich, Leonid A

    2013-01-01

    Using a symbolic dynamics and surrogate data approach, we show that the language exhibited by common fruit flies Drosophila ('D.') during courtship is as grammatically complex as the most complex human-spoken modern languages. This finding emerges from the study of fifty high-speed courtship videos (generally of several minutes' duration) that were visually dissected, frame by frame, into 37 fundamental behavioral elements. From the symbolic dynamics of these elements, the courtship-generating language was determined with extreme confidence (significance level > 0.95). The language's categorization in terms of position in Chomsky's hierarchical language classification allows Drosophila's body language to be compared not only with computer compiler languages, but also with human-spoken languages. Drosophila's body language emerges as at least as powerful as the languages spoken by humans.

  20. Schools and Languages in India.

    Science.gov (United States)

    Harrison, Brian

    1968-01-01

    A brief review of Indian education focuses on special problems caused by overcrowded schools, insufficient funding, and the status of education itself in the Indian social structure. Language instruction in India, a complex issue due largely to the numerous official languages currently spoken, is commented on with special reference to the problem…

  1. Usable, Real-Time, Interactive Spoken Language Systems

    Science.gov (United States)

    1992-09-01

    Workshop at Arden House, February 23-26, 1992. Francis Kubala, et al., "BBN BYBLOS and HARC February 1992 ATIS Benchmark Results", 5th DARPA Speech… presented at ICASSP, 1992. Richard Schwartz, Steve Austin, Francis Kubala, John Makhoul, Long Nguyen, Paul Placeway, George Zavaliagkos, Northeastern… of the DARPA Common Lexicon Working Group at the 5th DARPA Speech & NL Workshop at Arden House, February 23-26, 1992. Francis Kubala is chairing the…

  2. Spoken language identification system adaptation in under-resourced environments

    CSIR Research Space (South Africa)

    Kleynhans, N

    2013-12-01

    Full Text Available The development of Automatic Speech Recognition (ASR) systems in the developing world is severely inhibited. Given that few task-specific corpora exist and speech technology systems perform poorly when deployed in a new environment, we investigate the use of acoustic model adaptation...
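
    One standard acoustic-model adaptation technique of the kind this line of work investigates is MAP adaptation of Gaussian means. The sketch below, with toy vectors, is illustrative only and not the CSIR system's actual method.

    ```python
    # Illustrative sketch of MAP adaptation of a Gaussian mean: interpolate
    # the source-domain prior mean with the mean of new-environment data.
    # All values are toy data.
    import numpy as np

    def map_adapt_mean(prior_mean, adaptation_frames, tau=10.0):
        """tau controls how much adaptation data is needed to move away
        from the prior; small tau adapts quickly, large tau conservatively."""
        n = len(adaptation_frames)
        data_mean = np.mean(adaptation_frames, axis=0)
        return (n * data_mean + tau * prior_mean) / (n + tau)

    prior = np.array([0.0, 1.0])  # mean from the source-domain model
    frames = np.random.default_rng(0).normal([0.5, 1.5], 0.1, size=(50, 2))
    print(map_adapt_mean(prior, frames))  # shifted toward the new data
    ```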

  3. Error Awareness and Recovery in Conversational Spoken Language Interfaces

    Science.gov (United States)

    2007-05-01

    …stutters, false starts, repairs, hesitations, filled pauses, and various other non-lexical acoustic events. Under these circumstances, it is not… a sensible choice from a software engineering perspective. The case for separating out various task-independent aspects of the conversation has in fact been… in behavior both within and across systems. It also represents a more sensible solution from a software engineering… The RavenClaw error handling…

  4. Pronoun forms and courtesy in spoken language in Tunja, Colombia

    Directory of Open Access Journals (Sweden)

    Gloria Avendaño de Barón

    2014-05-01

    Full Text Available This article presents the results of a research project whose aims were the following: to determine the frequency of use of the polite pronoun forms sumercé, usted and tú, according to differences in gender, age and level of education, among speakers in Tunja; to describe the sociodiscursive variations; and to explain the relationship between usage and courtesy. The methodology of the Project for the Sociolinguistic Study of Spanish in Spain and in Latin America (PRESEEA) was used, and a sample of 54 speakers was taken. The results indicate that the most frequently used pronoun in Tunja to express friendliness and affection is sumercé, followed by usted and tú; women and men of different generations and levels of education alternate the use of these three forms in the context of narrative, descriptive, argumentative and explanatory speech.

  5. Endowing Spoken Language Dialogue System with Emotional Intelligence

    DEFF Research Database (Denmark)

    André, Elisabeth; Rehm, Matthias; Minker, Wolfgang

    2004-01-01

    While most dialogue systems restrict themselves to the adjustment of the propositional contents, our work concentrates on the generation of stylistic variations in order to improve the user’s perception of the interaction. To accomplish this goal, our approach integrates a social theory of politeness with a cognitive theory of emotions. We propose a hierarchical selection process for politeness behaviors in order to enable the refinement of decisions in case additional context information becomes available.

  6. Recording voiceover the spoken word in media

    CERN Document Server

    Blakemore, Tom

    2015-01-01

    The only book on the market to specifically address its audience, Recording Voiceover is the comprehensive guide for engineers looking to understand the aspects of capturing the spoken word. Discussing all phases of the recording session, Recording Voiceover addresses everything from microphone recommendations for voice recording to pre-production considerations, including setting up the studio, working with and directing the voice talent, and strategies for reducing or eliminating distracting noise elements found in human speech. Recording Voiceover features in-depth, specific recommendations f…

  7. Mobile Information Access with Spoken Query Answering

    DEFF Research Database (Denmark)

    Brøndsted, Tom; Larsen, Henrik Legind; Larsen, Lars Bo

    2006-01-01

    This paper addresses the problem of information and service accessibility in mobile devices with limited resources. A solution is developed and tested through a prototype that applies state-of-the-art Distributed Speech Recognition (DSR) and knowledge-based Information Retrieval (IR) processing for spoken query answering. For the DSR part, a configurable DSR system is implemented on the basis of the ETSI-DSR advanced front-end and the SPHINX IV recognizer. For the knowledge-based IR part, a distributed system solution is developed for fast retrieval of the most relevant documents, with a text…

  8. Language and literacy development of deaf and hard-of-hearing children: successes and challenges.

    Science.gov (United States)

    Lederberg, Amy R; Schick, Brenda; Spencer, Patricia E

    2013-01-01

    Childhood hearing loss presents challenges to language development, especially spoken language. In this article, we review existing literature on deaf and hard-of-hearing (DHH) children's patterns and trajectories of language as well as development of theory of mind and literacy. Individual trajectories vary significantly, reflecting access to early identification/intervention, advanced technologies (e.g., cochlear implants), and perceptually accessible language models. DHH children develop sign language in a similar manner as hearing children develop spoken language, provided they are in a language-rich environment. This occurs naturally for DHH children of deaf parents, who constitute 5% of the deaf population. For DHH children of hearing parents, sign language development depends on the age that they are exposed to a perceptually accessible 1st language as well as the richness of input. Most DHH children are born to hearing families who have spoken language as a goal, and such development is now feasible for many children. Some DHH children develop spoken language in bilingual (sign-spoken language) contexts. For the majority of DHH children, spoken language development occurs in either auditory-only contexts or with sign supports. Although developmental trajectories of DHH children with hearing parents have improved with early identification and appropriate interventions, the majority of children are still delayed compared with hearing children. These DHH children show particular weaknesses in the development of grammar. Language deficits and differences have cascading effects in language-related areas of development, such as theory of mind and literacy development.

  9. Tagalog for Beginners. PALI Language Texts: Philippines.

    Science.gov (United States)

    Ramos, Teresita V.; de Guzman, Videa

    This language textbook is designed for beginning students of Tagalog, the principal language spoken on the island of Luzon in the Philippines. The introduction discusses the history of Tagalog and certain features of the language. An explanation of the text is given, along with notes for the teacher. The text itself is divided into nine sections:…

  10. Approaches for Language Identification in Mismatched Environments

    Science.gov (United States)

    2016-09-08

    Approaches for Language Identification in Mismatched Environments. Shahan Nercessian, Pedro Torres-Carrasquillo, and Gabriel Martínez-Montes. Keywords: domain adaptation, unsupervised learning, deep neural networks, bottleneck features. Spoken language identification (LID) is… We consider the task of language identification in the context of mismatch conditions. Specifically, we address the issue of using unlabeled data in the…

  11. Audience Effects in American Sign Language Interpretation

    Science.gov (United States)

    Weisenberg, Julia

    2009-01-01

    There is a system of English mouthing during interpretation that appears to be the result of language contact between spoken language and signed language. English mouthing is a voiceless visual representation of words on a signer's lips produced concurrently with manual signs. It is a type of borrowing prevalent among English-dominant…

  12. Semantic and phonological schema influence spoken word learning and overnight consolidation.

    Science.gov (United States)

    Havas, Viktória; Taylor, Jsh; Vaquero, Lucía; de Diego-Balaguer, Ruth; Rodríguez-Fornells, Antoni; Davis, Matthew H

    2018-01-01

    We studied the initial acquisition and overnight consolidation of new spoken words that resemble words in the native language (L1) or in an unfamiliar, non-native language (L2). Spanish-speaking participants learned the spoken forms of novel words in their native language (Spanish) or in a different language (Hungarian), which were paired with pictures of familiar or unfamiliar objects, or no picture. We thereby assessed, in a factorial way, the impact of existing knowledge (schema) on word learning by manipulating both semantic (familiar vs unfamiliar objects) and phonological (L1- vs L2-like novel words) familiarity. Participants were trained and tested with a 12-hr intervening period that included overnight sleep or daytime awake. Our results showed (1) benefits of sleep to recognition memory that were greater for words with L2-like phonology and (2) that learned associations with familiar but not unfamiliar pictures enhanced recognition memory for novel words. Implications for complementary systems accounts of word learning are discussed.

  13. MINORITY LANGUAGES IN ESTONIAN SEGREGATIVE LANGUAGE ENVIRONMENTS

    Directory of Open Access Journals (Sweden)

    Elvira Küün

    2011-01-01

    Full Text Available The goal of this project in Estonia was to determine what languages are spoken by students from the 2nd to the 5th year of basic school at their homes in Tallinn, the capital of Estonia. At the same time, this problem was also studied in other segregated regions of Estonia: Kohtla-Järve and Maardu. According to the database of the population census from the year 2000 (Estonian Statistics Executive Office's census 2000), there are representatives of 142 ethnic groups living in Estonia, speaking a total of 109 native languages. At the same time, the database doesn’t state which languages are spoken at home. The material presented in this article belongs to the research topic “Home Language of Basic School Students in Tallinn” from the years 2007–2008, specifically financed and ordered by the Estonian Ministry of Education and Research (grant No. ETF 7065) in the framework of an international study called “Multilingual Project”. It was determined which language dominates in everyday use, what the factors are for choosing the language of communication, and what the preferred languages and language skills are. This study reflects the actual trends of the language situation in these cities.

  14. Notes from the Field: Lolak--Another Moribund Language of Indonesia, with Supporting Audio

    Science.gov (United States)

    Lobel, Jason William; Paputungan, Ade Tatak

    2017-01-01

    This paper consists of a short multimedia introduction to Lolak, a near-extinct Greater Central Philippine language traditionally spoken in three small communities on the island of Sulawesi in Indonesia. In addition to being one of the most underdocumented languages in the area, it is also spoken by one of the smallest native speaker populations…

  15. Expressing Identity through Lesser-Used Languages: Examples from the Irish and Galician Contexts

    Science.gov (United States)

    O'Rourke, Bernadette

    2005-01-01

    This paper looks at the degree and way in which lesser-used languages are used as expressions of identity, focusing specifically on two of Europe's lesser-used languages. The first is Irish, spoken in the Republic of Ireland and the second is Galician, spoken in the Autonomous Community of Galicia in the North-western part of Spain. The paper…

  16. Bridging the Gap: The Development of Appropriate Educational Strategies for Minority Language Communities in the Philippines

    Science.gov (United States)

    Dekker, Diane; Young, Catherine

    2005-01-01

    There are more than 6000 languages spoken by the 6 billion people in the world today--however, those languages are not evenly divided among the world's population--over 90% of people globally speak only about 300 majority languages--the remaining 5700 languages being termed "minority languages". These languages represent the…

  17. Teaching English as a "Second Language" in Kenya and the United States: Convergences and Divergences

    Science.gov (United States)

    Roy-Campbell, Zaline M.

    2015-01-01

    English is spoken in five countries as the native language and in numerous other countries as an official language and the language of instruction. In countries where English is the native language, it is taught to speakers of other languages as an additional language to enable them to participate in all domains of life of that country. In many…

  18. A grammar of Abui : A Papuan language of Alor

    NARCIS (Netherlands)

    Kratochvil, František

    2007-01-01

    This work contains the first comprehensive description of Abui, a language of the Trans New Guinea family spoken by approximately 16,000 speakers in the central part of Alor Island in Eastern Indonesia. The description focuses on the northern dialect of Abui as spoken in the village

  19. Attention to spoken word planning: Chronometric and neuroimaging evidence

    NARCIS (Netherlands)

    Roelofs, A.P.A.

    2008-01-01

    This article reviews chronometric and neuroimaging evidence on attention to spoken word planning, using the WEAVER++ model as theoretical framework. First, chronometric studies on the time to initiate vocal responding and gaze shifting suggest that spoken word planning may require some attention,

  20. Spoken Grammar: Where Are We and Where Are We Going?

    Science.gov (United States)

    Carter, Ronald; McCarthy, Michael

    2017-01-01

    This article synthesises progress made in the description of spoken (especially conversational) grammar over the 20 years since the authors published a paper in this journal arguing for a re-thinking of grammatical description and pedagogy based on spoken corpus evidence. We begin with a glance back at the 16th century and the teaching of Latin…

  1. Enhancing the Performance of Female Students in Spoken English

    Science.gov (United States)

    Inegbeboh, Bridget O.

    2009-01-01

    Female students have been discriminated against right from birth in their various cultures and this affects the way they perform in Spoken English class, and how they rate themselves. They have been conditioned to believe that the male gender is superior to the female gender, so they leave the male students to excel in spoken English, while they…

  2. Presentation video retrieval using automatically recovered slide and spoken text

    Science.gov (United States)

    Cooper, Matthew

    2013-03-01

    Video is becoming a prevalent medium for e-learning. Lecture videos contain text information in both the presentation slides and lecturer's speech. This paper examines the relative utility of automatically recovered text from these sources for lecture video retrieval. To extract the visual information, we automatically detect slides within the videos and apply optical character recognition to obtain their text. Automatic speech recognition is used similarly to extract spoken text from the recorded audio. We perform controlled experiments with manually created ground truth for both the slide and spoken text from more than 60 hours of lecture video. We compare the automatically extracted slide and spoken text in terms of accuracy relative to ground truth, overlap with one another, and utility for video retrieval. Results reveal that automatically recovered slide text and spoken text contain different content with varying error profiles. Experiments demonstrate that automatically extracted slide text enables higher precision video retrieval than automatically recovered spoken text.
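
    The retrieval setup described can be sketched compactly: treat each video's recovered slide text plus spoken text as one document, index with tf-idf, and rank by cosine similarity against the query. The toy corpus below is invented for illustration and is not the paper's data or system.

    ```python
    # Minimal sketch of tf-idf retrieval over automatically recovered slide
    # and spoken text. Video names and text are hypothetical.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    videos = {
        "lecture01": {"slides": "gradient descent convergence",
                      "speech": "today we cover optimization"},
        "lecture02": {"slides": "sorting algorithms quicksort",
                      "speech": "we analyze average case cost"},
    }

    # One document per video; slide and spoken text could also be indexed
    # separately and weighted, as the paper's comparison suggests.
    docs = [v["slides"] + " " + v["speech"] for v in videos.values()]
    vec = TfidfVectorizer()
    index = vec.fit_transform(docs)

    query = vec.transform(["quicksort average case"])
    scores = cosine_similarity(query, index).ravel()
    for name, score in zip(videos, scores):
        print(name, round(float(score), 3))
    ```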

  3. CASL L2 milestone report: VUQ.Y1.03, "Enable statistical sensitivity and UQ demonstrations for VERA"

    Energy Technology Data Exchange (ETDEWEB)

    Sung, Yixing (Westinghouse Electric Company LLC, Cranberry Township, PA); Adams, Brian M.; Witkowski, Walter R.

    2011-04-01

    The CASL Level 2 Milestone VUQ.Y1.03, 'Enable statistical sensitivity and UQ demonstrations for VERA,' was successfully completed in March 2011. The VUQ focus area led this effort, in close partnership with AMA, and with support from VRI. DAKOTA was coupled to VIPRE-W thermal-hydraulics simulations representing reactors of interest to address crud-related challenge problems in order to understand the sensitivity and uncertainty in simulation outputs with respect to uncertain operating and model form parameters. This report summarizes work coupling the software tools, characterizing uncertainties, selecting sensitivity and uncertainty quantification algorithms, and analyzing the results of iterative studies. These demonstration studies focused on sensitivity and uncertainty of mass evaporation rate calculated by VIPRE-W, a key predictor for crud-induced power shift (CIPS).
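
    A stripped-down version of such a sampling-based sensitivity study is easy to sketch: sample the uncertain inputs, evaluate the output quantity, and rank inputs by their correlation with it. The mock evaporation-rate function and uncertainty ranges below are placeholders, not VIPRE-W or the DAKOTA workflow.

    ```python
    # Hedged sketch of a sampling-based sensitivity study: a stand-in function
    # plays the role of a VIPRE-W run, and inputs are ranked by correlation
    # with the output. All numbers are hypothetical.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)
    n = 500
    inlet_temp = rng.normal(560.0, 5.0, n)      # K, hypothetical uncertainty
    power = rng.normal(1.0, 0.03, n)            # normalized core power
    mixing_coef = rng.uniform(0.005, 0.05, n)   # model-form parameter

    def mock_evaporation_rate(t, p, b):
        # Toy surrogate for the "mass evaporation rate" output of interest.
        return 1e-4 * p**2 * np.exp((t - 550.0) / 40.0) / (1.0 + 10.0 * b)

    y = mock_evaporation_rate(inlet_temp, power, mixing_coef)
    for name, x in [("inlet_temp", inlet_temp), ("power", power),
                    ("mixing_coef", mixing_coef)]:
        r, _ = stats.pearsonr(x, y)
        print(f"{name:12s} r = {r:+.2f}")
    ```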

  4. About phonological, grammatical, and semantic accents in bilinguals' language use and their cause

    NARCIS (Netherlands)

    de Groot, A.M.B.; Filipović, L.; Pütz, M.

    2014-01-01

    The linguistic expressions of the majority of bilinguals exhibit deviations from the corresponding expressions of monolinguals in phonology, grammar, and semantics, and in both languages. In addition, bilinguals may process spoken and written language differently from monolinguals. Two possible

  5. PERSON climbing up a tree (and other adventures in sign language grammaticalization)

    NARCIS (Netherlands)

    Pfau, R.; Steinbach, M.

    2013-01-01

    Studies on sign language grammaticalization have demonstrated that most of the attested diachronic changes from lexical to functional element parallel those previously described for spoken languages. To date, most of these studies are either descriptive in nature or embedded within

  6. I Feel You: The Design and Evaluation of a Domotic Affect-Sensitive Spoken Conversational Agent

    Directory of Open Access Journals (Sweden)

    Juan Manuel Montero

    2013-08-01

    Full Text Available We describe the work on infusion of emotion into a limited-task autonomous spoken conversational agent situated in the domestic environment, using a need-inspired task-independent emotion model (NEMO). In order to demonstrate the generation of affect through the use of the model, we describe the work of integrating it with a natural-language mixed-initiative HiFi-control spoken conversational agent (SCA). NEMO and the host system communicate externally, removing the need for the Dialog Manager to be modified, as is done in most existing dialog systems, in order to be adaptive. The first part of the paper concerns the integration between NEMO and the host agent. The second part summarizes the work on automatic affect prediction, namely, frustration and contentment, from dialog features, a non-conventional source, in the attempt of moving towards a more user-centric approach. The final part reports the evaluation results obtained from a user study, in which both versions of the agent (non-adaptive and emotionally-adaptive) were compared. The results provide substantial evidence of the benefits of adding emotion in a spoken conversational agent, especially in mitigating users’ frustrations and, ultimately, improving their satisfaction.
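
    Affect prediction from dialog features of the sort summarized above can be sketched as a simple classifier. The features, labels, and learner below are hypothetical; the paper's actual feature set and model may differ.

    ```python
    # Hedged sketch: predict user frustration vs. contentment from dialog
    # features. Feature columns and labels are invented for illustration.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Columns: number of reprompts, ASR confidence, user barge-ins per turn.
    X = np.array([[0, 0.92, 0], [3, 0.41, 2], [1, 0.80, 0],
                  [4, 0.35, 3], [0, 0.88, 1]])
    y = np.array([0, 1, 0, 1, 0])  # 1 = frustrated, 0 = content

    clf = LogisticRegression().fit(X, y)
    print(clf.predict([[2, 0.50, 2]]))  # affect guess for a new dialog turn
    ```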

  7. I feel you: the design and evaluation of a domotic affect-sensitive spoken conversational agent.

    Science.gov (United States)

    Lutfi, Syaheerah Lebai; Fernández-Martínez, Fernando; Lorenzo-Trueba, Jaime; Barra-Chicote, Roberto; Montero, Juan Manuel

    2013-08-13

    We describe the work on infusion of emotion into a limited-task autonomous spoken conversational agent situated in the domestic environment, using a need-inspired task-independent emotion model (NEMO). In order to demonstrate the generation of affect through the use of the model, we describe the work of integrating it with a natural-language mixed-initiative HiFi-control spoken conversational agent (SCA). NEMO and the host system communicate externally, removing the need for the Dialog Manager to be modified, as is done in most existing dialog systems, in order to be adaptive. The first part of the paper concerns the integration between NEMO and the host agent. The second part summarizes the work on automatic affect prediction, namely, frustration and contentment, from dialog features, a non-conventional source, in the attempt of moving towards a more user-centric approach. The final part reports the evaluation results obtained from a user study, in which both versions of the agent (non-adaptive and emotionally-adaptive) were compared. The results provide substantial evidence of the benefits of adding emotion in a spoken conversational agent, especially in mitigating users' frustrations and, ultimately, improving their satisfaction.

  8. Adaptation to Pronunciation Variations in Indonesian Spoken Query-Based Information Retrieval

    Science.gov (United States)

    Lestari, Dessi Puji; Furui, Sadaoki

    Recognition errors of proper nouns and foreign words significantly decrease the performance of ASR-based speech applications such as voice dialing systems, speech summarization, spoken document retrieval, and spoken query-based information retrieval (IR). The reason is that proper nouns and words that come from other languages are usually the most important keywords. The loss of such words due to misrecognition in turn leads to a loss of significant information from the speech source. This paper focuses on how to improve the performance of Indonesian ASR by alleviating the problem of pronunciation variation in proper nouns and foreign words (English words in particular). To improve proper noun recognition accuracy, proper-noun-specific acoustic models are created by supervised adaptation using maximum likelihood linear regression (MLLR). To improve English word recognition, the pronunciation of English words contained in the lexicon is corrected by rule-based English-to-Indonesian phoneme mapping. The effectiveness of the proposed method was confirmed through spoken query-based Indonesian IR. We used Inference Network-based (IN-based) IR and compared its results with those of the classical Vector Space Model (VSM) IR, both using a tf-idf weighting scheme. Experimental results show that IN-based IR outperforms VSM IR.
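
    For readers unfamiliar with the retrieval side, the classical tf-idf vector space model used as the baseline here can be sketched in a few lines of Python. A minimal sketch: the toy Indonesian documents and query are invented, and this is the generic VSM/tf-idf scheme rather than the authors' IN-based system.

      import math
      from collections import Counter

      def rank(query, docs):
          """Rank tokenized documents against a tokenized query by tf-idf cosine."""
          n = len(docs)
          df = Counter(t for doc in docs for t in set(doc))   # document frequency
          idf = {t: math.log(n / df[t]) for t in df}

          def vec(tokens):
              tf = Counter(tokens)
              return {t: tf[t] * idf.get(t, 0.0) for t in tf}

          def cos(u, v):
              dot = sum(w * v.get(t, 0.0) for t, w in u.items())
              nu = math.sqrt(sum(w * w for w in u.values()))
              nv = math.sqrt(sum(w * w for w in v.values()))
              return dot / (nu * nv) if nu and nv else 0.0

          q = vec(query)
          return sorted(range(n), key=lambda i: cos(q, vec(docs[i])), reverse=True)

      docs = [["hotel", "di", "jakarta"],
              ["berita", "cuaca", "jakarta"],
              ["resep", "nasi", "goreng"]]
      print(rank(["hotel", "jakarta"], docs))  # document 0 ranks first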

  9. Beyond Languages, beyond Modalities: Transforming the Study of Semiotic Repertoires

    Science.gov (United States)

    Kusters, Annelies; Spotti, Massimiliano; Swanwick, Ruth; Tapio, Elina

    2017-01-01

    This paper presents a critical examination of key concepts in the study of (signed and spoken) language and multimodality. It shows how shifts in conceptual understandings of language use, moving from bilingualism to multilingualism and (trans)languaging, have resulted in the revitalisation of the concept of language repertoires. We discuss key…

  10. A matter of complexity: Subordination in sign languages

    NARCIS (Netherlands)

    Pfau, R.; Steinbach, M.; Herrmann, A.

    2016-01-01

    Since natural languages exist in two different modalities - the visual-gestural modality of sign languages and the auditory-oral modality of spoken languages - it is obvious that all fields of research in modern linguistics will benefit from research on sign languages. Although previous studies have

  11. Poetry in South African Sign Language: What is different? | Baker ...

    African Journals Online (AJOL)

    Poetry in a sign language can make use of literary devices just as poetry in a spoken language can. The study of literary expression in sign languages has increased over the last twenty years and for South African Sign Language (SASL) such literary texts have also become more available. This article gives a brief overview ...

  12. Language policy and speech practice in Cape Town: An exploratory ...

    African Journals Online (AJOL)

    Language policy and speech practice in Cape Town: An exploratory public health sector study. Michellene Williams, Simon Bekker. Abstract. Public language policy in South Africa recognises 11 official spoken languages. In Cape Town, and in the Western Cape, three of these eleven languages have been selected for ...

  13. SyllabO+: A new tool to study sublexical phenomena in spoken Quebec French.

    Science.gov (United States)

    Bédard, Pascale; Audet, Anne-Marie; Drouin, Patrick; Roy, Johanna-Pascale; Rivard, Julie; Tremblay, Pascale

    2017-10-01

    Sublexical phonotactic regularities in language have a major impact on language development, as well as on speech processing and production throughout the entire lifespan. To understand the impact of phonotactic regularities on speech and language functions at the behavioral and neural levels, it is essential to have access to oral language corpora to study these complex phenomena in different languages. Yet, probably because of their complexity, oral language corpora remain less common than written language corpora. This article presents the first corpus and database of spoken Quebec French syllables and phones: SyllabO+. This corpus contains phonetic transcriptions of over 300,000 syllables (over 690,000 phones) extracted from recordings of 184 healthy adult native Quebec French speakers, ranging in age from 20 to 97 years. To ensure the representativeness of the corpus, these recordings were made in both formal and familiar communication contexts. Phonotactic distributional statistics (e.g., syllable and co-occurrence frequencies, percentages, percentile ranks, transition probabilities, and pointwise mutual information) were computed from the corpus. An open-access online application to search the database was developed, and is available at www.speechneurolab.ca/syllabo. In this article, we present a brief overview of the corpus, as well as the syllable and phone databases, and we discuss their practical applications in various fields of research, including cognitive neuroscience, psycholinguistics, neurolinguistics, experimental psychology, phonetics, and phonology. Nonacademic practical applications are also discussed, including uses in speech-language pathology.
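
    As an illustration of the distributional statistics listed above, bigram transition probabilities and pointwise mutual information over syllables can be computed from any syllabified word list. A minimal sketch on an invented toy corpus, not SyllabO+ data:

      import math
      from collections import Counter

      # Toy stand-in for a syllabified corpus: each word is a list of syllables.
      corpus = [["bon", "jour"], ["jour", "nal"], ["bon", "bon"], ["a", "mi"]]

      syl = Counter(s for word in corpus for s in word)
      big = Counter((a, b) for word in corpus for a, b in zip(word, word[1:]))
      n_syl, n_big = sum(syl.values()), sum(big.values())

      def transition_prob(a, b):
          """P(b | a): how often syllable a is followed by syllable b."""
          following = sum(c for (x, _), c in big.items() if x == a)
          return big[(a, b)] / following if following else 0.0

      def pmi(a, b):
          """Pointwise mutual information of the syllable bigram (a, b)."""
          p_ab = big[(a, b)] / n_big
          p_a, p_b = syl[a] / n_syl, syl[b] / n_syl
          return math.log2(p_ab / (p_a * p_b)) if p_ab else float("-inf")

      print(transition_prob("bon", "jour"), pmi("bon", "jour"))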

  14. Teaching and Learning Sign Language as a “Foreign” Language ...

    African Journals Online (AJOL)

    In recent years, there has been a growing debate in the United States, Europe, and Australia about the nature of the Deaf community as a cultural community,1 and the recognition of signed languages as “real” or “legitimate” languages comparable in all meaningful ways to spoken languages. An important element of this ...

  15. On the Usability of Spoken Dialogue Systems

    DEFF Research Database (Denmark)

    Larsen, Lars Bo

    This work is centred on the methods and problems associated with defining and measuring the usability of Spoken Dialogue Systems (SDS). The starting point is the fact that speech-based interfaces have several times during the last 20 years fallen short of the high expectations and predictions held … by industry, researchers and analysts. Several studies in the SDS literature indicate that this can be ascribed to a lack of attention from the speech technology community towards the usability of such systems. The experimental results presented in this work are based on a field trial with the OVID home … model roughly explains 50% of the observed variance in user satisfaction based on measures of task success and speech recognition accuracy, a result similar to those obtained at AT&T. The applied methods are discussed and evaluated critically…

  16. Teaching natural language to computers

    OpenAIRE

    Corneli, Joseph; Corneli, Miriam

    2016-01-01

    "Natural Language," whether spoken and attended to by humans, or processed and generated by computers, requires networked structures that reflect creative processes in semantic, syntactic, phonetic, linguistic, social, emotional, and cultural modules. Being able to produce novel and useful behavior following repeated practice gets to the root of both artificial intelligence and human language. This paper investigates the modalities involved in language-like applications that computers -- and ...

  17. The Peculiarities of the Adverbs Functioning of the Dialect Spoken in the v. Shevchenkove, Kiliya district, Odessa Region

    Directory of Open Access Journals (Sweden)

    Maryna Delyusto

    2013-08-01

    The article gives new evidence about the adverb as a part of the grammatical system of the Ukrainian steppe dialect spread in the area between the Danube and the Dniester rivers. The author proves that the grammatical system of the dialect spoken in the v. Shevchenkove, Kiliya district, Odessa region is determined by the historical development of the Ukrainian language rather than the influence of neighboring dialects.

  18. Language as a multimodal phenomenon: implications for language learning, processing and evolution.

    Science.gov (United States)

    Vigliocco, Gabriella; Perniss, Pamela; Vinson, David

    2014-09-19

    Our understanding of the cognitive and neural underpinnings of language has traditionally been firmly based on spoken Indo-European languages and on language studied as speech or text. However, in face-to-face communication, language is multimodal: speech signals are invariably accompanied by visual information on the face and in manual gestures, and sign languages deploy multiple channels (hands, face and body) in utterance construction. Moreover, the narrow focus on spoken Indo-European languages has entrenched the assumption that language is comprised wholly by an arbitrary system of symbols and rules. However, iconicity (i.e. resemblance between aspects of communicative form and meaning) is also present: speakers use iconic gestures when they speak; many non-Indo-European spoken languages exhibit a substantial amount of iconicity in word forms and, finally, iconicity is the norm, rather than the exception in sign languages. This introduction provides the motivation for taking a multimodal approach to the study of language learning, processing and evolution, and discusses the broad implications of shifting our current dominant approaches and assumptions to encompass multimodal expression in both signed and spoken languages. © 2014 The Author(s) Published by the Royal Society. All rights reserved.

  19. Comparison of Word Intelligibility in Spoken and Sung Phrases

    Directory of Open Access Journals (Sweden)

    Lauren B. Collister

    2008-09-01

    Twenty listeners were exposed to spoken and sung passages in English produced by three trained vocalists. Passages included representative words extracted from a large database of vocal lyrics, including both popular and classical repertoires. Target words were set within spoken or sung carrier phrases. Sung carrier phrases were selected from classical vocal melodies. Roughly a quarter of all words sung by an unaccompanied soloist were misheard. Sung passages showed a seven-fold decrease in intelligibility compared with their spoken counterparts. The perceptual mistakes occurring with vowels replicate previous studies showing the centralization of vowels. Significant confusions are also evident for consonants, especially voiced stops and nasals.

  20. Basic speech recognition for spoken dialogues

    CSIR Research Space (South Africa)

    Van Heerden, C

    2009-09-01

    … speech recognisers for a diverse multitude of languages. The paper investigates the feasibility of developing small-vocabulary speaker-independent ASR systems designed for use in a telephone-based information system, using ten resource-scarce languages...

  1. One grammar or two? Sign Languages and the Nature of Human Language.

    Science.gov (United States)

    Lillo-Martin, Diane C; Gajewski, Jon

    2014-07-01

    Linguistic research has identified abstract properties that seem to be shared by all languages; such properties may be considered defining characteristics. In recent decades, the recognition that human language is found not only in the spoken modality but also in the form of sign languages has led to a reconsideration of some of these potential linguistic universals. In large part, the linguistic analysis of sign languages has led to the conclusion that universal characteristics of language can be stated at an abstract enough level to include languages in both spoken and signed modalities. For example, languages in both modalities display hierarchical structure at sub-lexical and phrasal level, and recursive rule application. However, this does not mean that modality-based differences between signed and spoken languages are trivial. In this article, we consider several candidate domains for modality effects, in light of the overarching question: are signed and spoken languages subject to the same abstract grammatical constraints, or is a substantially different conception of grammar needed for the sign language case? We look at differences between language types based on the use of space, iconicity, and the possibility for simultaneity in linguistic expression. The inclusion of sign languages does support some broadening of the conception of human language, in ways that are applicable for spoken languages as well. Still, the overall conclusion is that one grammar applies for human language, no matter the modality of expression. WIREs Cogn Sci 2014, 5:387-401. doi: 10.1002/wcs.1297. © 2014 The Authors. WIREs Cognitive Science published by John Wiley & Sons, Ltd.

  2. Predictors of reading comprehension ability in primary school-aged children who have pragmatic language impairment.

    Science.gov (United States)

    Freed, Jenny; Adams, Catherine; Lockton, Elaine

    2015-01-01

    Children who have pragmatic language impairment (CwPLI) have difficulties with the use of language in social contexts and show impairments in above-sentence-level language tasks. Previous studies have found that typically developing children's reading comprehension (RC) is predicted by reading accuracy and spoken sentence-level comprehension (SLC). This study explores the predictive ability of these factors and above-sentence-level comprehension (ASLC) on RC skills in a group of CwPLI. Sixty-nine primary school-aged CwPLI completed a measure of RC along with measures of reading accuracy, spoken SLC, and both visual (pictorially presented) and spoken ASLC tasks. Regression analyses showed that reading accuracy was the strongest predictor of RC. Visual ASLC did not explain unique variance in RC beyond spoken SLC. In contrast, a measure of spoken ASLC explained unique variance in RC, independent from that explained by spoken SLC. A regression model with nonverbal intelligence, reading accuracy, spoken SLC and spoken ASLC as predictors explained 74.2% of the variance in RC. Findings suggest that spoken ASLC may measure additional factors that are important for RC success in CwPLI and should be included in routine assessments for language and literacy learning in this group. Copyright © 2015 Elsevier Ltd. All rights reserved.
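
    The "unique variance" reported here corresponds to the increment in R² when a predictor joins a model already containing the other predictors. A minimal numpy sketch of that nested-model comparison; the data are synthetic and the variable names merely echo the abstract's constructs.

      import numpy as np

      def r_squared(X, y):
          """R^2 of an ordinary least-squares fit with an intercept term."""
          X1 = np.column_stack([np.ones(len(y)), X])
          beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
          resid = y - X1 @ beta
          return 1 - resid.var() / y.var()

      rng = np.random.default_rng(0)
      n = 69                                 # same n as the study; values invented
      accuracy = rng.normal(size=n)          # reading accuracy
      slc = rng.normal(size=n)               # spoken sentence-level comprehension
      aslc = 0.5 * slc + rng.normal(size=n)  # spoken above-sentence-level comprehension
      rc = 0.6 * accuracy + 0.3 * slc + 0.2 * aslc + rng.normal(scale=0.5, size=n)

      base = r_squared(np.column_stack([accuracy, slc]), rc)
      full = r_squared(np.column_stack([accuracy, slc, aslc]), rc)
      print(f"unique variance of spoken ASLC: {full - base:.3f}")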

  3. Use of Spoken and Written Japanese Did Not Protect Japanese-American Men From Cognitive Decline in Late Life

    Science.gov (United States)

    Gruhl, Jonathan C.; Erosheva, Elena A.; Gibbons, Laura E.; McCurry, Susan M.; Rhoads, Kristoffer; Nguyen, Viet; Arani, Keerthi; Masaki, Kamal; White, Lon

    2010-01-01

    Objectives. Spoken bilingualism may be associated with cognitive reserve. Mastering a complicated written language may be associated with additional reserve. We sought to determine if midlife use of spoken and written Japanese was associated with lower rates of late life cognitive decline. Methods. Participants were second-generation Japanese-American men from the Hawaiian island of Oahu, born 1900–1919, free of dementia in 1991, and categorized based on midlife self-reported use of spoken and written Japanese (total n included in primary analysis = 2,520). Cognitive functioning was measured with the Cognitive Abilities Screening Instrument scored using item response theory. We used mixed effects models, controlling for age, income, education, smoking status, apolipoprotein E e4 alleles, and number of study visits. Results. Rates of cognitive decline were not related to use of spoken or written Japanese. This finding was consistent across numerous sensitivity analyses. Discussion. We did not find evidence to support the hypothesis that multilingualism is associated with cognitive reserve. PMID:20639282
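
    Longitudinal analyses of this kind are commonly fit as linear mixed-effects models with a random intercept per participant. A schematic statsmodels sketch on synthetic data; the actual study's covariates, coding and scoring are not reproduced here.

      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(1)
      n_subj, n_visits = 200, 4
      subj = np.repeat(np.arange(n_subj), n_visits)
      age = rng.normal(75, 4, n_subj).repeat(n_visits)    # baseline age
      years = np.tile(np.arange(n_visits) * 2.0, n_subj)  # follow-up time
      jp = rng.integers(0, 2, n_subj).repeat(n_visits)    # midlife written Japanese use
      casi = 90 - 0.4 * years + rng.normal(0, 2, n_subj * n_visits)  # no jp effect

      df = pd.DataFrame(dict(subj=subj, age=age, years=years, jp=jp, casi=casi))
      # Random intercept per participant; the years:jp interaction tests
      # whether the rate of decline differs by midlife Japanese use.
      fit = smf.mixedlm("casi ~ age + years * jp", df, groups=df["subj"]).fit()
      print(fit.summary())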

  4. The Road to Language Learning Is Not Entirely Iconic: Iconicity, Neighborhood Density, and Frequency Facilitate Acquisition of Sign Language.

    Science.gov (United States)

    Caselli, Naomi K; Pyers, Jennie E

    2017-07-01

    Iconic mappings between words and their meanings are far more prevalent than once estimated and seem to support children's acquisition of new words, spoken or signed. We asked whether iconicity's prevalence in sign language overshadows two other factors known to support the acquisition of spoken vocabulary: neighborhood density (the number of lexical items phonologically similar to the target) and lexical frequency. Using mixed-effects logistic regressions, we reanalyzed 58 parental reports of native-signing deaf children's productive acquisition of 332 signs in American Sign Language (ASL; Anderson & Reilly, 2002) and found that iconicity, neighborhood density, and lexical frequency independently facilitated vocabulary acquisition. Despite differences in iconicity and phonological structure between signed and spoken language, signing children, like children learning a spoken language, track statistical information about lexical items and their phonological properties and leverage this information to expand their vocabulary.

  5. Training to Improve Language Outcomes in Cochlear Implant Recipients

    OpenAIRE

    Ingvalson, Erin M.; Wong, Patrick C. M.

    2013-01-01

    Cochlear implants (CI) have brought with them hearing ability for many prelingually deafened children. Advances in CI technology have brought not only hearing ability but speech perception to these same children. Concurrent with the development of speech perception has come spoken language development, and one goal now is that prelingually deafened CI recipient children will develop spoken language capabilities on par with those of normal hearing (NH) children. This goal has not been met pure...

  6. Germanic heritage languages in North America: Acquisition, attrition and change

    OpenAIRE

    Johannessen, Janne Bondi; Salmons, Joseph C.; Westergaard, Marit; Anderssen, Merete; Arnbjörnsdóttir, Birna; Allen, Brent; Pierce, Marc; Boas, Hans C.; Roesch, Karen; Brown, Joshua R.; Putnam, Michael; Åfarli, Tor A.; Newman, Zelda Kahan; Annear, Lucas; Speth, Kristin

    2015-01-01

    This book presents new empirical findings about Germanic heritage varieties spoken in North America: Dutch, German, Pennsylvania Dutch, Icelandic, Norwegian, Swedish, West Frisian and Yiddish, and varieties of English spoken both by heritage speakers and in communities after language shift. The volume focuses on three critical issues underlying the notion of ‘heritage language’: acquisition, attrition and change. The book offers theoretically-informed discussions of heritage language processe...

  7. The road to language learning is iconic: evidence from British Sign Language.

    Science.gov (United States)

    Thompson, Robin L; Vinson, David P; Woll, Bencie; Vigliocco, Gabriella

    2012-12-01

    An arbitrary link between linguistic form and meaning is generally considered a universal feature of language. However, iconic (i.e., nonarbitrary) mappings between properties of meaning and features of linguistic form are also widely present across languages, especially signed languages. Although recent research has shown a role for sign iconicity in language processing, research on the role of iconicity in sign-language development has been mixed. In this article, we present clear evidence that iconicity plays a role in sign-language acquisition for both the comprehension and production of signs. Signed languages were taken as a starting point because they tend to encode a higher degree of iconic form-meaning mappings in their lexicons than spoken languages do, but our findings are more broadly applicable: Specifically, we hypothesize that iconicity is fundamental to all languages (signed and spoken) and that it serves to bridge the gap between linguistic form and human experience.

  8. Verb Errors in Advanced Spoken English

    Directory of Open Access Journals (Sweden)

    Tomáš Gráf

    2017-07-01

    As an experienced teacher of advanced learners of English, I am deeply aware of recurrent problems which these learners experience as regards grammatical accuracy. In this paper, I focus on researching inaccuracies in the use of verbal categories. I draw the data from the spoken learner corpus LINDSEI_CZ and analyze the performance of 50 advanced (C1–C2) learners of English whose mother tongue is Czech. The main method used is Computer-aided Error Analysis within the larger framework of Learner Corpus Research. The results reveal that the key area of difficulty is the use of tenses and tense agreements, and especially the use of the present perfect. Other error-prone aspects are also described. The study also identifies a number of triggers which may lie at the root of the problems. The identification of these triggers reveals deficiencies in the teaching of grammar, mainly too much focus on decontextualized practice, the use of potentially confusing rules, and the lack of attention to broader notions such as continuity and perfectiveness. Whilst the study is most useful for teachers of advanced learners, its pedagogical implications extend to lower levels of proficiency as well.
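
    Computer-aided Error Analysis of the sort described typically runs over error-tagged transcripts that are then tallied by tag. A minimal sketch; the (err:CODE) tagging convention below is invented for illustration and is not the LINDSEI_CZ scheme.

      import re
      from collections import Counter

      # Hypothetical convention: each error is marked inline as (err:CODE).
      transcript = """
      I (err:GVT) have seen him yesterday and he (err:GVAUX) don't agree .
      she said she (err:GVT) is coming the day before .
      """

      tags = Counter(re.findall(r"\(err:([A-Z]+)\)", transcript))
      total = sum(tags.values())
      for code, n in tags.most_common():
          print(f"{code}: {n} ({n / total:.0%})")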

  9. Talker and background noise specificity in spoken word recognition memory

    OpenAIRE

    Cooper, Angela; Bradlow, Ann R.

    2017-01-01

    Prior research has demonstrated that listeners are sensitive to changes in the indexical (talker-specific) characteristics of speech input, suggesting that these signal-intrinsic features are integrally encoded in memory for spoken words. Given that listeners frequently must contend with concurrent environmental noise, to what extent do they also encode signal-extrinsic details? Native English listeners’ explicit memory for spoken English monosyllabic and disyllabic words was assessed as a fu...

  10. ASSESSING THE SO CALLED MARKED INFLECTIONAL FEATURES OF NIGERIAN ENGLISH: A SECOND LANGUAGE ACQUISITION THEORY ACCOUNT

    OpenAIRE

    Boluwaji Oshodi

    2014-01-01

    There are conflicting claims among scholars on whether the structural outputs of the types of English spoken in countries where English is used as a second language give such speech forms the status of varieties of English. This study examined those morphological features considered to be marked features of the variety spoken in Nigeria according to Kirkpatrick (2011) and the variety spoken in Malaysia by considering the claims of the Missing Surface Inflection Hypothesis (MSIH) a Second Lan...

  11. SOUTH AFRICAN SIGN LANGUAGE: CHANGING POLICIES AND ...

    African Journals Online (AJOL)

    South Africa have been affected by the policies of apartheid, and its educational and linguistic consequences, in a … teaching strategies, and more recently of the perception that a signed language is a manual form … of color and on the basis of the (former) official spoken languages designated by the apartheid education …

  12. Indigenous Scripts Of African Languages | Meshesha | Indilinga ...

    African Journals Online (AJOL)

    In Africa there are a number of languages spoken, some of which have their own indigenous scripts that are used for writing. In this paper we assess these languages and present an in-depth script analysis for the Amharic writing system, one of the well-known indigenous scripts of Africa. Amharic is the official and working ...

  13. THE STRUCTURE OF THE ARABIC LANGUAGE.

    Science.gov (United States)

    Yushmanov, N.V.

    The present study is a translation of the work "Stroi Arabskogo Yazyka" by the eminent Russian linguist and Semitics scholar N.V. Yushmanov. It deals concisely with the position of Arabic among the Semitic languages and the relation of the literary (Classical) language to the various modern spoken dialects, and presents a condensed but…

  14. DOCUMENTATION OF AFRICAN LANGUAGES: A PANACEA FOR ...

    African Journals Online (AJOL)

    … bilingualism and extensive translation. It may be noted that all the languages of the world (7,000 in number; cf. Akinlabi and Connell, 2007) cannot be spoken even skeletally by any individual. Therefore the multilingualism proposed by Crystal will have to favor only a few world languages. Toolan's extensive bilingualism is ...

  15. Language Skills: Questions for Teaching and Learning

    Science.gov (United States)

    Paran, Amos

    2012-01-01

    This paper surveys some of the changes in teaching the four language skills in the past 15 years. It focuses on two main changes for each skill: understanding spoken language and willingness to communicate for speaking; product, process, and genre approaches and a focus on feedback for writing; extensive reading and literature for reading; and…

  16. Speech Recognition System and Formant Based Analysis of Spoken Arabic Vowels

    Science.gov (United States)

    Alotaibi, Yousef Ajami; Hussain, Amir

    Arabic is one of the world's oldest languages and is currently the second most spoken language in terms of number of speakers. However, it has not received much attention from the traditional speech processing research community. This study is specifically concerned with the analysis of vowels in the Modern Standard Arabic dialect. The first and second formant values of these vowels are investigated, and the differences and similarities between the vowels are explored using consonant-vowel-consonant (CVC) utterances. For this purpose, an HMM-based recognizer was built to classify the vowels, and the performance of the recognizer was analyzed to help understand the similarities and dissimilarities between the phonetic features of vowels. The vowels are also analyzed in both the time and frequency domains, and the consistent findings of the analysis are expected to facilitate future Arabic speech processing tasks such as vowel and speech recognition and classification.
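
    The formant measurements discussed here can be illustrated with the textbook LPC root-finding method: fit a low-order linear predictor to a sustained vowel and read formant frequencies off the complex roots. A sketch under those assumptions, using librosa; this is not the paper's pipeline or its HMM recognizer, and 'vowel_a.wav' is a hypothetical input file.

      import numpy as np
      import librosa

      def estimate_formants(wav_path, order=12):
          """Rough F1/F2 estimation for a sustained vowel via LPC root-finding."""
          y, sr = librosa.load(wav_path, sr=10000)  # formants of interest < 5 kHz
          y = y * np.hamming(len(y))                # taper edges before LPC
          a = librosa.lpc(y, order=order)
          roots = [r for r in np.roots(a) if np.imag(r) > 0]
          freqs = np.angle(roots) * (sr / (2 * np.pi))
          formants = sorted(f for f in freqs if f > 90)  # drop near-DC roots
          return formants[:2]                            # F1, F2 estimates

      # f1, f2 = estimate_formants("vowel_a.wav")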

  17. When space merges into language.

    Science.gov (United States)

    Rinaldi, M Cristina; Pizzamiglio, Luigi

    2006-01-01

    We present data from right brain-damaged patients, with and without spatial heminattention, which show the influence of hemispatial deficits on spoken language processing. We explored the findings of a previous study, which used an emphatic stress detection task and suggested spatial transcoding of a spoken active sentence in a 'language line'. This transcoding was impaired in its initial portion (the subject-word) when the neglect syndrome was present. By expanding the original methodology, the present study provides a deeper understanding of the level of spoken language processing involved in the heminattentional bias. To ascertain the role played by syntactic structure, active and passive sentences were compared. Sentences comprised of musical notes and of a sequence of unrelated nouns were also compared to determine whether the bias was manifest with any sequence of events (not only linguistic ones) deployed over time, and with a sequence of linguistic events not embedded in a structured syntactic frame. Results showed that heminattention exerted an influence only when a syntactically structured linguistic input (=sentence with agent of action, action and recipient of action) was processed, and that it did not interfere when a sequence of non-linguistic sounds or unrelated words was presented. Furthermore, when passing from active to passive sentences, the heminattentional bias was inverted, suggesting that heminattention primarily involves the logical subject of the sentence, which has an inverted position in passive sentences. These results strongly suggest that heminattention acts on the spatial transcoding of the deep structure of spoken language.

  18. Insight into the neurophysiological processes of melodically intoned language with functional MRI.

    Science.gov (United States)

    Méndez Orellana, Carolina P; van de Sandt-Koenderman, Mieke E; Saliasi, Emi; van der Meulen, Ineke; Klip, Simone; van der Lugt, Aad; Smits, Marion

    2014-09-01

    Melodic Intonation Therapy (MIT) uses the melodic elements of speech to improve language production in severe nonfluent aphasia. A crucial element of MIT is the melodically intoned auditory input: the patient listens to the therapist singing a target utterance. Such input of melodically intoned language facilitates production, whereas auditory input of spoken language does not. Using a sparse sampling fMRI sequence, we examined the differential auditory processing of spoken and melodically intoned language. Nineteen right-handed healthy volunteers performed an auditory lexical decision task in an event related design consisting of spoken and melodically intoned meaningful and meaningless items. The control conditions consisted of neutral utterances, either melodically intoned or spoken. Irrespective of whether the items were normally spoken or melodically intoned, meaningful items showed greater activation in the supramarginal gyrus and inferior parietal lobule, predominantly in the left hemisphere. Melodically intoned language activated both temporal lobes rather symmetrically, as well as the right frontal lobe cortices, indicating that these regions are engaged in the acoustic complexity of melodically intoned stimuli. Compared to spoken language, melodically intoned language activated sensory motor regions and articulatory language networks in the left hemisphere, but only when meaningful language was used. Our results suggest that the facilitatory effect of MIT may - in part - depend on an auditory input which combines melody and meaning. Combined melody and meaning provide a sound basis for the further investigation of melodic language processing in aphasic patients, and eventually the neurophysiological processes underlying MIT.

  19. AlignTool: The automatic temporal alignment of spoken utterances in German, Dutch, and British English for psycholinguistic purposes.

    Science.gov (United States)

    Schillingmann, Lars; Ernst, Jessica; Keite, Verena; Wrede, Britta; Meyer, Antje S; Belke, Eva

    2018-01-29

    In language production research, the latency with which speakers produce a spoken response to a stimulus and the onset and offset times of words in longer utterances are key dependent variables. Measuring these variables automatically often yields partially incorrect results. However, exact measurements through the visual inspection of the recordings are extremely time-consuming. We present AlignTool, an open-source alignment tool that first establishes preliminary onset and offset times of words and phonemes in spoken utterances using Praat, and subsequently performs a forced alignment of the spoken utterances and their orthographic transcriptions in the automatic speech recognition system MAUS. AlignTool creates a Praat TextGrid file for inspection and manual correction by the user, if necessary. We evaluated AlignTool's performance with recordings of single-word and four-word utterances as well as semi-spontaneous speech. AlignTool performs well with audio signals with an excellent signal-to-noise ratio, requiring virtually no corrections. For audio signals of lesser quality, AlignTool still is highly functional but its results may require more frequent manual corrections. We also found that audio recordings including long silent intervals tended to pose greater difficulties for AlignTool than recordings filled with speech, which AlignTool analyzed well overall. We expect that by semi-automatizing the temporal analysis of complex utterances, AlignTool will open new avenues in language production research.
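
    The underlying measurement problem, locating where speech starts in a recording, is often first approximated with a short-time energy threshold before forced alignment refines the boundaries. A crude numpy sketch of such a first pass, not AlignTool itself:

      import numpy as np

      def speech_onset(signal, sr, frame_ms=10, threshold_db=-35):
          """First time (s) the short-time level exceeds a threshold below peak."""
          frame = int(sr * frame_ms / 1000)
          n = len(signal) // frame
          frames = signal[: n * frame].reshape(n, frame)
          rms = np.sqrt((frames ** 2).mean(axis=1)) + 1e-12
          level = 20 * np.log10(rms / rms.max())
          above = np.flatnonzero(level > threshold_db)
          return above[0] * frame / sr if above.size else None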

  20. Encoding lexical tones in jTRACE: a simulation of monosyllabic spoken word recognition in Mandarin Chinese.

    Science.gov (United States)

    Shuai, Lan; Malins, Jeffrey G

    2017-02-01

    Despite its prevalence as one of the most highly influential models of spoken word recognition, the TRACE model has yet to be extended to consider tonal languages such as Mandarin Chinese. A key reason for this is that the model in its current state does not encode lexical tone. In this report, we present a modified version of the jTRACE model in which we borrowed on its existing architecture to code for Mandarin phonemes and tones. Units are coded in a way that is meant to capture the similarity in timing of access to vowel and tone information that has been observed in previous studies of Mandarin spoken word recognition. We validated the model by first simulating a recent experiment that had used the visual world paradigm to investigate how native Mandarin speakers process monosyllabic Mandarin words (Malins & Joanisse, 2010). We then subsequently simulated two psycholinguistic phenomena: (1) differences in the timing of resolution of tonal contrast pairs, and (2) the interaction between syllable frequency and tonal probability. In all cases, the model gave rise to results comparable to those of published data with human subjects, suggesting that it is a viable working model of spoken word recognition in Mandarin. It is our hope that this tool will be of use to practitioners studying the psycholinguistics of Mandarin Chinese and will help inspire similar models for other tonal languages, such as Cantonese and Thai.
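
    The key representational move, giving lexical tone its own unit alongside the segments, can be mocked up in a few lines. A toy sketch of the idea only, not the authors' jTRACE parameterization:

      from dataclasses import dataclass

      @dataclass(frozen=True)
      class MandarinSyllable:
          onset: str   # e.g. "m"
          rime: str    # e.g. "a"
          tone: int    # 1-4 (5 for neutral), encoded as a unit of its own

      def overlap(a, b):
          """Crude overlap score: how many of the three units match."""
          return sum([a.onset == b.onset, a.rime == b.rime, a.tone == b.tone])

      ma1 = MandarinSyllable("m", "a", 1)  # 'mother'
      ma3 = MandarinSyllable("m", "a", 3)  # 'horse' -- a tonal minimal pair
      print(overlap(ma1, ma3))             # 2: segments match, tones differ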

  1. How Facebook Can Revitalise Local Languages: Lessons from Bali

    Science.gov (United States)

    Stern, Alissa Joy

    2017-01-01

    For a language to survive, it must be spoken and passed down to the next generation. But how can we engage teenagers--so crucial for language transmission--to use and value their local tongue when they are bombarded by pressures from outside and from within their society to only speak national and international languages? This paper analyses the…

  2. Australian Aboriginal Deaf People and Aboriginal Sign Language

    Science.gov (United States)

    Power, Des

    2013-01-01

    Many Australian Aboriginal people use a sign language ("hand talk") that mirrors their local spoken language and is used both in culturally appropriate settings when speech is taboo or counterindicated and for community communication. The characteristics of these languages are described, and early European settlers' reports of deaf…

  3. Rethinking Comprehension and Strategy Use in Second Language Listening Instruction

    Science.gov (United States)

    Ling, Bernadette; Kettle, Margaret

    2011-01-01

    In second language classrooms, listening is gaining recognition as an active element in the processes of learning and using a second language. Currently, however, much of the teaching of listening prioritises comprehension without sufficient emphasis on the skills and strategies that enhance learners' understanding of spoken language. This paper…

  4. The Unified Phonetic Transcription for Teaching and Learning Chinese Languages

    Science.gov (United States)

    Shieh, Jiann-Cherng

    2011-01-01

    In order to preserve distinctive cultures, people anxiously figure out writing systems of their languages as recording tools. Mandarin, Taiwanese and Hakka languages are three major and the most popular dialects of Han languages spoken in Chinese society. Their writing systems are all in Han characters. Various and independent phonetic…

  5. Regional Sign Language Varieties in Contact: Investigating Patterns of Accommodation

    Science.gov (United States)

    Stamp, Rose; Schembri, Adam; Evans, Bronwen G.; Cormier, Kearsy

    2016-01-01

    Short-term linguistic accommodation has been observed in a number of spoken language studies. The first of its kind in sign language research, this study aims to investigate the effects of regional varieties in contact and lexical accommodation in British Sign Language (BSL). Twenty-five participants were recruited from Belfast, Glasgow,…

  6. The Role of Pronunciation in SENCOTEN Language Revitalization

    Science.gov (United States)

    Bird, Sonya; Kell, Sarah

    2017-01-01

    Most Indigenous language revitalization programs in Canada currently emphasize spoken language. However, virtually no research has been done on the role of pronunciation in the context of language revitalization. This study set out to gain an understanding of attitudes around pronunciation in the SENCOTEN-speaking community, in order to determine…

  7. Dilemmatic Aspects of Language Policies in a Trilingual Preschool Group

    Science.gov (United States)

    Puskás, Tünde; Björk-Willén, Polly

    2017-01-01

    This article explores dilemmatic aspects of language policies in a preschool group in which three languages (Swedish, Romani and Arabic) are spoken on an everyday basis. The article highlights the interplay between policy decisions on the societal level, the teachers' interpretations of these policies, as well as language practices on the micro…

  8. Structural borrowing: The case of Kenyan Sign Language (KSL) and ...

    African Journals Online (AJOL)

    a case for the existence of a Kiswahili sign language since KSL is a natural language with its own autonomous grammar distinct from that of any spoken language. In this paper, we shall argue that the Kiswahili mouthed KSL signs are an outcome of contact between KSL – Kiswahili bilinguals and their hearing Kiswahili ...

  9. Making a Difference: Language Teaching for Intercultural and International Dialogue

    Science.gov (United States)

    Byram, Michael; Wagner, Manuela

    2018-01-01

    Language teaching has long been associated with teaching in a country or countries where a target language is spoken, but this approach is inadequate. In the contemporary world, language teaching has a responsibility to prepare learners for interaction with people of other cultural backgrounds, teaching them skills and attitudes as well as…

  10. The Ndebele Language Corpus: A Review of Some Factors ...

    African Journals Online (AJOL)

    The Ndebele language corpus described here is that compiled by the ALLEX Project (now ALRI) at the University of Zimbabwe. It is intended to reflect as much as possible the Ndebele language as spoken in Zimbabwe. The Ndebele language corpus was built in order to provide much-needed material for the study of the ...

  11. El Espanol como Idioma Universal (Spanish as a Universal Language)

    Science.gov (United States)

    Mijares, Jose

    1977-01-01

    A proposal to transform Spanish into a universal language because it possesses the prerequisites: it is a living language, spoken in several countries; it is a natural language; and it uses the ordinary alphabet. Details on simplification and standardization are given. (Text is in Spanish.) (AMH)

  12. Prosodic and narrative processing in American Sign Language: An fMRI study

    Science.gov (United States)

    Newman, Aaron J.; Supalla, Ted; Hauser, Peter; Newport, Elissa; Bavelier, Daphne

    2010-01-01

    Signed languages such as American Sign Language (ASL) are natural human languages that share all of the core properties of spoken human languages, but differ in the modality through which they are communicated. Neuroimaging and patient studies have suggested similar left hemisphere (LH)-dominant patterns of brain organization for signed and spoken languages, suggesting that the linguistic nature of the information, rather than modality, drives brain organization for language. However, the role of the right hemisphere (RH) in sign language has been less explored. In spoken languages, the RH supports the processing of numerous types of narrative-level information, including prosody, affect, facial expression, and discourse structure. In the present fMRI study, we contrasted the processing of ASL sentences that contained these types of narrative information with similar sentences without marked narrative cues. For all sentences, Deaf native signers showed robust bilateral activation of perisylvian language cortices, as well as the basal ganglia, medial frontal and medial temporal regions. However, RH activation in the inferior frontal gyrus and superior temporal sulcus was greater for sentences containing narrative devices, including areas involved in processing narrative content in spoken languages. These results provide additional support for the claim that all natural human languages rely on a core set of LH brain regions, and extend our knowledge to show that narrative linguistic functions typically associated with the RH in spoken languages are similarly organized in signed languages. PMID:20347996

  13. Regional association of pCASL-MRI with FDG-PET and PiB-PET in people at risk for autosomal dominant Alzheimer's disease.

    Science.gov (United States)

    Yan, Lirong; Liu, Collin Y; Wong, Koon-Pong; Huang, Sung-Cheng; Mack, Wendy J; Jann, Kay; Coppola, Giovanni; Ringman, John M; Wang, Danny J J

    2018-01-01

    Autosomal dominant Alzheimer's disease (ADAD) is a small subset of Alzheimer's disease that is genetically determined with 100% penetrance. It provides a valuable window into studying the course of the pathologic processes that lead to dementia. Arterial spin labeling (ASL) MRI is a potential AD imaging marker that non-invasively measures cerebral perfusion. In this study, we investigated the relationship of cerebral blood flow measured by pseudo-continuous ASL (pCASL) MRI with measures of cerebral metabolism (FDG PET) and amyloid deposition (Pittsburgh Compound B (PiB) PET). Thirty-one participants at risk for ADAD (age 39 ± 13 years, 19 females) were recruited into this study, and 21 of them received both MRI and FDG and PiB PET scans. Considerable variability was observed in regional correlations between ASL-CBF and FDG across subjects. Both regional hypo-perfusion and hypo-metabolism were associated with amyloid deposition. Cross-sectional analyses of each biomarker as a function of the estimated years to expected dementia diagnosis indicated an inverse relationship of both perfusion and glucose metabolism with amyloid deposition during AD development. These findings indicate that neurovascular dysfunction is associated with amyloid pathology, and that ASL CBF may serve as a sensitive early biomarker for AD. The direct comparison among the three biomarkers provides complementary information for understanding the pathophysiological process of AD.
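
    At their simplest, regional association analyses of this kind reduce to per-subject correlations of two modalities across ROI means. A schematic numpy sketch with random stand-in values, unrelated to the study's images or ROI set:

      import numpy as np

      rng = np.random.default_rng(7)
      n_subjects, n_rois = 21, 90
      cbf = rng.normal(50, 10, size=(n_subjects, n_rois))  # pCASL CBF per ROI
      fdg = 0.4 * cbf + rng.normal(0, 8, size=cbf.shape)   # FDG uptake per ROI

      # One correlation per subject, computed across regions.
      r_per_subject = np.array(
          [np.corrcoef(cbf[i], fdg[i])[0, 1] for i in range(n_subjects)]
      )
      print(r_per_subject.mean(), r_per_subject.std())  # variability across subjects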

  14. The Use of Multimedia and the Arts in Language Revitalization, Maintenance, and Development: The Case of the Balsas Nahuas of Guerrero, Mexico.

    Science.gov (United States)

    Farfan, Jose Antonio Flores

    Even though Nahuatl is the most widely spoken indigenous language in Mexico, it is endangered. Threats include poor support for Nahuatl-speaking communities, migration of Nahuatl speakers to cities where English and Spanish are spoken, prejudicial attitudes toward indigenous languages, lack of contact between small communities of different…

  15. Word Frequencies in Written and Spoken English

    African Journals Online (AJOL)

    R.B. Ruthven

    extent of the emphasis on the acquisition of vocabulary in school curricula. After a brief introduction, the author looks in chapter 2 at major books which in the 20th century worked on a controlled vocabulary for foreign-language learners in Europe, Asia and America. This section provides the background for the elaboration of ...

  16. Word Frequencies in Written and Spoken English

    African Journals Online (AJOL)

    R.B. Ruthven

    Gabriele Stein. Developing Your English Vocabulary: A Systematic New Approach. 2002, VIII + 272 pp. ... objective of this book is twofold: to compile a lexical core and to maximise the skills of language students by ... chapter 3, she offers twelve major ways of expanding this core-word list and differentiating lexical items to ...

  17. Word Frequencies in Written and Spoken English

    African Journals Online (AJOL)

    R.B. Ruthven

    data of the corpus and includes more formal audio material (lectures, TV and radio broadcasting). The book begins with a 20-page introduction, which is sometimes quite technical, but ... grounds words that belong to the core vocabulary of the language such as tool-. Lexikos 15 (AFRILEX-reeks/series 15: 2005): 338-339 ...

  18. Grammar of Kove: An Austronesian Language of the West New Britain Province, Papua New Guinea

    Science.gov (United States)

    Sato, Hiroko

    2013-01-01

    This dissertation is a descriptive grammar of Kove, an Austronesian language spoken in the West New Britain Province of Papua New Guinea. Kove is primarily spoken in 18 villages, including some on the small islands north of New Britain. There are about 9,000 people living in the area, but many are not fluent speakers of Kove. The dissertation…

  19. Languages in contact: preliminary clues of an emergence of an Israeli Arabic variety

    NARCIS (Netherlands)

    Dekel, N.; Brosh, H.

    2012-01-01

    This paper describes from a linguistic point of view the impact of the Hebrew spoken in Israel on the Arabic spoken natively by Israeli Arabs. Two main conditions enable mutual influences between Hebrew and Arabic in Israel: - The existence of two large groups of people speaking both languages

  20. The socially-weighted encoding of spoken words: A dual-route approach to speech perception

    Directory of Open Access Journals (Sweden)

    Meghan eSumner

    2014-01-01

    Spoken words are highly variable. A single word may never be uttered the same way twice. As listeners, we regularly encounter speakers of different ages, genders, and accents, increasing the amount of variation we face. How listeners understand spoken words as quickly and adeptly as they do despite this variation remains an issue central to linguistic theory. We propose that learned acoustic patterns are mapped simultaneously to linguistic representations and to social representations. In doing so, we illuminate a paradox in the literature that results, we argue, from the focus on representations and the peripheral treatment of word-level phonetic variation. We consider phonetic variation more fully and highlight a growing body of work that is problematic for current theory: words with different pronunciation variants are recognized equally well in immediate processing tasks, while an atypical, infrequent, but socially-idealized form is remembered better in the long term. We suggest that the perception of spoken words is socially-weighted, resulting in sparse, but high-resolution clusters of socially-idealized episodes that are robust in immediate processing and are more strongly encoded, predicting memory inequality. Our proposal includes a dual-route approach to speech perception in which listeners map acoustic patterns in speech to linguistic and social representations in tandem. This approach makes novel predictions about the extraction of information from the speech signal, and provides a framework with which we can ask new questions. We propose that language comprehension, broadly, results from the integration of both linguistic and social information.

  1. Alpha and theta brain oscillations index dissociable processes in spoken word recognition.

    Science.gov (United States)

    Strauß, Antje; Kotz, Sonja A; Scharinger, Mathias; Obleser, Jonas

    2014-08-15

    Slow neural oscillations (~1-15 Hz) are thought to orchestrate the neural processes of spoken language comprehension. However, functional subdivisions within this broad range of frequencies are disputed, with most studies hypothesizing only about single frequency bands. The present study utilizes an established paradigm of spoken word recognition (lexical decision) to test the hypothesis that within the slow neural oscillatory frequency range, distinct functional signatures and cortical networks can be identified at least for theta- (~3-7 Hz) and alpha-frequencies (~8-12 Hz). Listeners performed an auditory lexical decision task on a set of items that formed a word-pseudoword continuum: ranging from (1) real words over (2) ambiguous pseudowords (deviating from real words only in one vowel; comparable to natural mispronunciations in speech) to (3) pseudowords (clearly deviating from real words by randomized syllables). By means of time-frequency analysis and spatial filtering, we observed a dissociation into distinct but simultaneous patterns of alpha power suppression and theta power enhancement. Alpha exhibited a parametric suppression as items increasingly matched real words, in line with lowered functional inhibition in a left-dominant lexical processing network for more word-like input. Simultaneously, theta power in a bilateral fronto-temporal network was selectively enhanced for ambiguous pseudowords only. Thus, enhanced alpha power can neurally 'gate' lexical integration, while enhanced theta power might index functionally more specific ambiguity-resolution processes. To this end, a joint analysis of both frequency bands provides neural evidence for parallel processes in achieving spoken word recognition. Copyright © 2014 Elsevier Inc. All rights reserved.
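
    The band-power measures at issue (theta ~3-7 Hz, alpha ~8-12 Hz) can be estimated from a single channel with a Welch periodogram. A minimal scipy sketch on a synthetic trace; the study's actual time-frequency and spatial-filtering pipeline is far richer.

      import numpy as np
      from scipy.signal import welch

      def band_power(x, fs, lo, hi):
          """Integrated Welch PSD of x over the [lo, hi] Hz band."""
          f, pxx = welch(x, fs=fs, nperseg=int(2 * fs))
          mask = (f >= lo) & (f <= hi)
          return np.trapz(pxx[mask], f[mask])

      fs = 250
      t = np.arange(0, 10, 1 / fs)
      rng = np.random.default_rng(0)
      eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)  # fake channel

      print("theta (3-7 Hz):", band_power(eeg, fs, 3, 7))
      print("alpha (8-12 Hz):", band_power(eeg, fs, 8, 12))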

  2. Does space structure spatial language? A comparison of spatial expression across sign languages

    NARCIS (Netherlands)

    Perniss, P.M.; Zwitserlood, I.E.P.; Özyürek, A.

    2015-01-01

    The spatial affordances of the visual modality give rise to a high degree of similarity between sign languages in the spatial domain. This stands in contrast to the vast structural and semantic diversity in linguistic encoding of space found in spoken languages. However, the possibility and nature

  3. Textese and use of texting by children with typical language development and Specific Language Impairment

    NARCIS (Netherlands)

    Blom, W.B.T.; van Dijk, Chantal; Vasic, Nada; van Witteloostuijn, Merel; Avrutin, S.

    2017-01-01

    The purpose of this study was to investigate texting and textese, which is the special register used for sending brief text messages, across children with typical development (TD) and children with Specific Language Impairment (SLI). Using elicitation techniques, texting and spoken language messages

  4. South African Sign Language and language-in-education policy in ...

    African Journals Online (AJOL)

    bilingualism in the natural sign language and the dominant spoken language of the society. Students would study not only the common curriculum shared with their hearing peers, but would also study the history of the Deaf culture and Deaf communities in other parts of the world. Thus, the goal of such a programme would ...

  5. On the Conventionalization of Mouth Actions in Australian Sign Language.

    Science.gov (United States)

    Johnston, Trevor; van Roekel, Jane; Schembri, Adam

    2016-03-01

    This study investigates the conventionalization of mouth actions in Australian Sign Language. Signed languages were once thought of as simply manual languages, because the hands produce the signs which, individually and in groups, are the symbolic units most easily equated with the words, phrases and clauses of spoken languages. However, it has long been acknowledged that non-manual activity, such as movements of the body, head and face, plays a very important role. In this context, mouth actions that occur while communicating in signed languages have posed a number of questions for linguists: are the silent mouthings of spoken language words simply borrowings from the respective majority community spoken language(s)? Are those mouth actions that are not silent mouthings of spoken words conventionalized linguistic units proper to each signed language, culturally linked semi-conventional gestural units shared by signers with members of the majority speaking community, or even gestures and expressions common to all humans? We use a corpus-based approach to gather evidence of the extent of the use of mouth actions in naturalistic Australian Sign Language, making comparisons with other signed languages where data are available, and of the form/meaning pairings that these mouth actions instantiate.

  6. Advances in natural language processing.

    Science.gov (United States)

    Hirschberg, Julia; Manning, Christopher D

    2015-07-17

    Natural language processing employs computational techniques for the purpose of learning, understanding, and producing human language content. Early computational approaches to language research focused on automating the analysis of the linguistic structure of language and developing basic technologies such as machine translation, speech recognition, and speech synthesis. Today's researchers refine and make use of such tools in real-world applications, creating spoken dialogue systems and speech-to-speech translation engines, mining social media for information about health or finance, and identifying sentiment and emotion toward products and services. We describe successes and challenges in this rapidly advancing area. Copyright © 2015, American Association for the Advancement of Science.

  7. Alsatian versus Standard German: Regional Language Bilingual Primary Education in Alsace

    Science.gov (United States)

    Harrison, Michelle Anne

    2016-01-01

    This article examines the current situation of regional language bilingual primary education in Alsace and contends that the regional language presents a special case in the context of France. The language comprises two varieties: Alsatian, which traditionally has been widely spoken, and Standard German, used as the language of reference and…

  8. Language Shift or Increased Bilingualism in South Africa: Evidence from Census Data

    Science.gov (United States)

    Posel, Dorrit; Zeller, Jochen

    2016-01-01

    In the post-apartheid era, South Africa has adopted a language policy that gives official status to 11 languages (English, Afrikaans, and nine Bantu languages). However, English has remained the dominant language of business, public office, and education, and some research suggests that English is increasingly being spoken in domestic settings.…

  9. Mapudungun According to Its Speakers: Mapuche Intellectuals and the Influence of Standard Language Ideology

    Science.gov (United States)

    Lagos, Cristián; Espinoza, Marco; Rojas, Darío

    2013-01-01

    In this paper, we analyse the cultural models (or folk theory of language) that the Mapuche intellectual elite have about Mapudungun, the native language of the Mapuche people still spoken today in Chile as the major minority language. Our theoretical frame is folk linguistics and studies of language ideology, but we have also taken an applied…

  10. A Rationale To Integrate Dialog Journal Writing in the Foreign Language Conversation Class.

    Science.gov (United States)

    de Godev, Concepcion B.

    The need to underline the relationship between spoken and written language in second language instruction is discussed, and the use of student dialogue journals to accomplish this is encouraged. The first section offers an overview of the whole language approach, which emphasizes integration of language skills. The second section examines briefly…

  11. Key Data on Teaching Languages at School in Europe. 2017 Edition. Eurydice Report

    Science.gov (United States)

    Baïdak, Nathalie; Balcon, Marie-Pascale; Motiejunaite, Akvile

    2017-01-01

    Linguistic diversity is part of Europe's DNA. It embraces not only the official languages of Member States, but also the regional and/or minority languages spoken for centuries on European territory, as well as the languages brought by the various waves of migrants. The coexistence of this variety of languages constitutes an asset, but it is also…

  12. Brain-to-text: Decoding spoken phrases from phone representations in the brain

    Directory of Open Access Journals (Sweden)

    Christian eHerff

    2015-06-01

    It has long been speculated whether communication between humans and machines based on natural speech-related cortical activity is possible. Over the past decade, studies have suggested that it is feasible to recognize isolated aspects of speech from neural signals, such as auditory features, phones or one of a few isolated words. However, until now it remained an unsolved challenge to decode continuously spoken speech from the neural substrate associated with speech and language processing. Here, we show for the first time that continuously spoken speech can be decoded into the expressed words from intracranial electrocorticographic (ECoG) recordings. Specifically, we implemented a system, which we call Brain-To-Text, that models single phones, employs techniques from automatic speech recognition (ASR), and thereby transforms brain activity while speaking into the corresponding textual representation. Our results demonstrate that our system achieved word error rates as low as 25% and phone error rates below 50%. Additionally, our approach contributes to the current understanding of the neural basis of continuous speech production by identifying those cortical regions that hold substantial information about individual phones. In conclusion, the Brain-To-Text system described in this paper represents an important step towards human-machine communication based on imagined speech.
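
    The word error rate used to evaluate the system is the standard edit-distance metric from automatic speech recognition. A self-contained sketch of that metric; the example strings are invented.

      def word_error_rate(reference, hypothesis):
          """WER = (substitutions + insertions + deletions) / reference length."""
          r, h = reference.split(), hypothesis.split()
          d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
          for i in range(len(r) + 1):
              d[i][0] = i
          for j in range(len(h) + 1):
              d[0][j] = j
          for i in range(1, len(r) + 1):
              for j in range(1, len(h) + 1):
                  cost = 0 if r[i - 1] == h[j - 1] else 1
                  d[i][j] = min(d[i - 1][j] + 1,         # deletion
                                d[i][j - 1] + 1,         # insertion
                                d[i - 1][j - 1] + cost)  # substitution
          return d[len(r)][len(h)] / len(r)

      print(word_error_rate("the quick brown fox", "the quick brown box jumps"))  # 0.5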

  13. METONYMY BASED ON CULTURAL BACKGROUND KNOWLEDGE AND PRAGMATIC INFERENCING: EVIDENCE FROM SPOKEN DISCOURSE

    Directory of Open Access Journals (Sweden)

    Arijana Krišković

    2009-01-01

    Full Text Available The characterization of metonymy as a conceptual tool for guiding inferencing in language has opened a new field of study in cognitive linguistics and pragmatics. To appreciate the value of metonymy for pragmatic inferencing, metonymy should not be viewed as performing only its prototypical referential function. Metonymic mappings are operative in speech acts at the level of reference, predication, proposition and illocution. The aim of this paper is to study the role of metonymy in pragmatic inferencing in spoken discourse in television interviews. Case analyses of authentic utterances classified as illocutionary metonymies, following the pragmatic typology of metonymic functions, are presented. The inferencing processes are facilitated by metonymic connections existing between domains or subdomains in the same functional domain. It has been widely accepted by cognitive linguists that universal human knowledge and embodiment are essential for the interpretation of metonymy. This analysis points to the role of cultural background knowledge in understanding target meanings. All these aspects of metonymic connections are exploited in complex inferential processes in spoken discourse. In most cases, metaphoric mappings are also a part of utterance interpretation.

  14. Word order in the Germanic languages

    DEFF Research Database (Denmark)

    Holmberg, Anders; Rijkhoff, Jan

    1998-01-01

    The Germanic branch of Indo-European consists of three main groups (Ruhlen 1987: 327): East Germanic: Gothic, Vandalic, Burgundian (all extinct); North Germanic (or Scandinavian): Runic (extinct), Danish, Swedish, Norwegian, Icelandic, Faroese; West Germanic: German, Yiddish, Luxembourgeois…, Dutch, Afrikaans, Frisian, English. Here we will only consider the languages that are currently spoken in geographical Europe. Thus Afrikaans, which is spoken in South Africa, and the extinct languages Gothic, Vandalic, Burgundian, and Runic will not be taken into account (but see e.g. König & van der…

  15. The determinants of spoken and written picture naming latencies.

    Science.gov (United States)

    Bonin, Patrick; Chalard, Marylène; Méot, Alain; Fayol, Michel

    2002-02-01

    The influence of nine variables on the latencies to write down or to speak aloud the names of pictures taken from Snodgrass and Vanderwart (1980) was investigated in French adults. The major determinants of both written and spoken picture naming latencies were image variability, image agreement and age of acquisition. To a lesser extent, name agreement was also found to have an impact in both production modes. The implications of the findings for theoretical views of both spoken and written picture naming are discussed.

  16. Syntactic Priming in American Sign Language

    OpenAIRE

    Hall, Matthew L.; Ferreira, Victor S.; Mayberry, Rachel I.

    2015-01-01

    Psycholinguistic studies of sign language processing provide valuable opportunities to assess whether language phenomena, which are primarily studied in spoken language, are fundamentally shaped by peripheral biology. For example, we know that when given a choice between two syntactically permissible ways to express the same proposition, speakers tend to choose structures that were recently used, a phenomenon known as syntactic priming. Here, we report two experiments testing syntactic primin...

  17. Early Sign Language Exposure and Cochlear Implantation Benefits.

    Science.gov (United States)

    Geers, Ann E; Mitchell, Christine M; Warner-Czyz, Andrea; Wang, Nae-Yuh; Eisenberg, Laurie S

    2017-07-01

    Most children with hearing loss who receive cochlear implants (CI) learn spoken language, and parents must choose early on whether to use sign language to accompany speech at home. We address whether parents' use of sign language before and after CI positively influences auditory-only speech recognition, speech intelligibility, spoken language, and reading outcomes. Three groups of children with CIs from a nationwide database who differed in the duration of early sign language exposure provided in their homes were compared in their progress through elementary grades. The groups did not differ in demographic, auditory, or linguistic characteristics before implantation. Children without early sign language exposure achieved better speech recognition skills over the first 3 years postimplant and exhibited a statistically significant advantage in spoken language and reading near the end of elementary grades over children exposed to sign language. Over 70% of children without sign language exposure achieved age-appropriate spoken language compared with only 39% of those exposed for 3 or more years. Early speech perception predicted speech intelligibility in middle elementary grades. Children without sign language exposure produced speech that was more intelligible (mean = 70%) than those exposed to sign language (mean = 51%). This study provides the most compelling support yet available in CI literature for the benefits of spoken language input for promoting verbal development in children implanted by 3 years of age. Contrary to earlier published assertions, there was no advantage to parents' use of sign language either before or after CI. Copyright © 2017 by the American Academy of Pediatrics.

  18. Directionality effects in simultaneous language interpreting: the case of sign language interpreters in The Netherlands.

    Science.gov (United States)

    Van Dijk, Rick; Boers, Eveline; Christoffels, Ingrid; Hermans, Daan

    2011-01-01

    The quality of interpretations produced by sign language interpreters was investigated. Twenty-five experienced interpreters were instructed to interpret narratives from (a) spoken Dutch to Sign Language of The Netherlands (SLN), (b) spoken Dutch to Sign Supported Dutch (SSD), and (c) SLN to spoken Dutch. The quality of the interpreted narratives was assessed by 5 certified sign language interpreters who did not participate in the study. Two measures were used to assess interpreting quality: the propositional accuracy of the interpreters' interpretations and a subjective quality measure. The results showed that the interpreted narratives in the SLN-to-Dutch interpreting direction were of lower quality (on both measures) than the interpreted narratives in the Dutch-to-SLN and Dutch-to-SSD directions. Furthermore, interpreters who had begun acquiring SLN when they entered the interpreter training program performed as well in all 3 interpreting directions as interpreters who had acquired SLN from birth.

  19. "They never realized that, you know": linguistic collocation and interactional functions of you know in contemporary academin spoken english

    Directory of Open Access Journals (Sweden)

    Rodrigo Borba

    2012-12-01

    Full Text Available Discourse markers are a collection of one-word or multiword terms that help language users organize their utterances on the grammatical, semantic, pragmatic and interactional levels. Researchers have characterized some of their roles in written and spoken discourse (Halliday & Hasan, 1976; Schiffrin, 1988, 2001). Following this trend, this paper advances a discussion of discourse markers in contemporary academic spoken English. Through quantitative and qualitative analyses of the use of the discourse marker ‘you know’ in the Michigan Corpus of Academic Spoken English (MICASE), we describe its frequency in this corpus, its collocation on the sentence level and its interactional functions. Grammatically, a concordance analysis shows that ‘you know’ (like other discourse markers) is linguistically flexible, as it can be placed in virtually any grammatical slot of an utterance. Interactionally, a qualitative analysis indicates that its use in contemporary English goes beyond the uses described in the literature. We argue that besides serving as a hedging strategy (Lakoff, 1975), ‘you know’ also serves as a powerful face-saving (Goffman, 1955) technique which constructs students’ identities vis-à-vis their professors’ and vice versa.
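
    As a rough illustration of the collocation counts this abstract reports, the sketch below tallies the immediate left and right neighbours of 'you know' in a tokenized transcript. The transcript is a made-up placeholder; an analysis like the one above would iterate over the MICASE files instead.

```python
# Toy concordance/collocation count for the bigram "you know".
from collections import Counter

transcript = ("so you know the results were you know kind of mixed "
              "and we thought you know maybe the sample was too small").split()

left, right = Counter(), Counter()
hits = 0
for i in range(len(transcript) - 1):
    if transcript[i] == "you" and transcript[i + 1] == "know":
        hits += 1
        if i > 0:
            left[transcript[i - 1]] += 1       # word before "you know"
        if i + 2 < len(transcript):
            right[transcript[i + 2]] += 1      # word after "you know"

print(f"'you know' occurs {hits} times")
print("left collocates:", left.most_common(3))
print("right collocates:", right.most_common(3))
```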

  20. Rhythm in language acquisition.

    Science.gov (United States)

    Langus, Alan; Mehler, Jacques; Nespor, Marina

    2017-10-01

    Spoken language is governed by rhythm. Linguistic rhythm is hierarchical and the rhythmic hierarchy partially mimics the prosodic as well as the morpho-syntactic hierarchy of spoken language. It can thus provide learners with cues about the structure of the language they are acquiring. We identify three universal levels of linguistic rhythm - the segmental level, the level of the metrical feet and the phonological phrase level - and discuss why primary lexical stress is not rhythmic. We survey experimental evidence on rhythm perception in young infants and native speakers of various languages to determine the properties of linguistic rhythm that are present at birth, those that mature during the first year of life and those that are shaped by the linguistic environment of language learners. We conclude with a discussion of the major gaps in current knowledge on linguistic rhythm and highlight areas of interest for future research that are most likely to yield significant insights into the nature, the perception, and the usefulness of linguistic rhythm. Copyright © 2016 Elsevier Ltd. All rights reserved.

  1. Long-term temporal tracking of speech rate affects spoken-word recognition.

    Science.gov (United States)

    Baese-Berk, Melissa M; Heffner, Christopher C; Dilley, Laura C; Pitt, Mark A; Morrill, Tuuli H; McAuley, J Devin

    2014-08-01

    Humans unconsciously track a wide array of distributional characteristics in their sensory environment. Recent research in spoken-language processing has demonstrated that the speech rate surrounding a target region within an utterance influences which words, and how many words, listeners hear later in that utterance. On the basis of hypotheses that listeners track timing information in speech over long timescales, we investigated the possibility that the perception of words is sensitive to speech rate over such a timescale (e.g., an extended conversation). Results demonstrated that listeners tracked variation in the overall pace of speech over an extended duration (analogous to that of a conversation that listeners might have outside the lab) and that this global speech rate influenced which words listeners reported hearing. The effects of speech rate became stronger over time. Our findings are consistent with the hypothesis that neural entrainment by speech occurs on multiple timescales, some lasting more than an hour. © The Author(s) 2014.

  2. Individual Differences in Online Spoken Word Recognition: Implications for SLI

    Science.gov (United States)

    McMurray, Bob; Samelson, Vicki M.; Lee, Sung Hee; Tomblin, J. Bruce

    2010-01-01

    Thirty years of research has uncovered the broad principles that characterize spoken word processing across listeners. However, there have been few systematic investigations of individual differences. Such an investigation could help refine models of word recognition by indicating which processing parameters are likely to vary, and could also have…

  3. Spoken Word Recognition of Chinese Words in Continuous Speech

    Science.gov (United States)

    Yip, Michael C. W.

    2015-01-01

    The present study examined the role that the positional probability of syllables plays in the recognition of spoken words in continuous Cantonese speech. Because some sounds occur more frequently at the beginning or ending position of Cantonese syllables than others, this kind of probabilistic information about syllables may cue the locations…

  4. Animated and Static Concept Maps Enhance Learning from Spoken Narration

    Science.gov (United States)

    Adesope, Olusola O.; Nesbit, John C.

    2013-01-01

    An animated concept map represents verbal information in a node-link diagram that changes over time. The goals of the experiment were to evaluate the instructional effects of presenting an animated concept map concurrently with semantically equivalent spoken narration. The study used a 2 x 2 factorial design in which an animation factor (animated…

  5. Using the Corpus of Spoken Afrikaans to generate an Afrikaans ...

    African Journals Online (AJOL)

    This paper presents two chatbot systems, ALICE and Elizabeth, illustrating the dialogue knowledge representation and pattern matching techniques of each. We discuss the problems which arise when using the Corpus of Spoken Afrikaans (Korpus Gesproke Afrikaans) to retrain the ALICE chatbot system with human ...

  6. Autosegmental Representation of Epenthesis in the Spoken French ...

    African Journals Online (AJOL)

    Therefore, this paper examined vowel insertion in the spoken French of 50 Ijebu undergraduate French learners (IUFLs) in selected universities in South-West Nigeria. Data were collected through tape-recordings of participants' production of 30 sentences containing both French vowel and consonant ...

  7. Pedagogy for Liberation: Spoken Word Poetry in Urban Schools

    Science.gov (United States)

    Fiore, Mia

    2015-01-01

    The Black Arts Movement of the 1960s and 1970s, hip hop of the 1980s and early 1990s, and spoken word poetry have each attempted to initiate the dialogical process outlined by Paulo Freire as necessary in overturning oppression. Each art form has done this by critically engaging with the world and questioning dominant systems of power. However,…

  8. A memory-based shallow parser for spoken Dutch

    NARCIS (Netherlands)

    Canisius, S.V.M.; van den Bosch, A.; Decadt, B.; Hoste, V.; De Pauw, G.

    2004-01-01

    We describe the development of a Dutch memory-based shallow parser. The availability of large treebanks for Dutch, such as the one provided by the Spoken Dutch Corpus, allows memory-based learners to be trained on examples of shallow parsing taken from the treebank, and act as a shallow parser after
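
    Memory-based learning of the kind described here is essentially k-nearest-neighbour classification over local token contexts. The sketch below shows the idea at toy scale; the two training sentences and the window features are invented stand-ins for examples that would be extracted from a treebank such as the Spoken Dutch Corpus.

```python
# Sketch of a memory-based (k-NN) chunk tagger: classify each token's chunk
# tag from a small window of word features. Training data is a toy stand-in.
from sklearn.feature_extraction import DictVectorizer
from sklearn.neighbors import KNeighborsClassifier

train = [
    (["de", "hond", "slaapt"], ["B-NP", "I-NP", "B-VP"]),
    (["een", "kat", "eet"],    ["B-NP", "I-NP", "B-VP"]),
]

def window_features(words, i):
    # Focus word plus one word of left/right context.
    return {"w": words[i],
            "prev": words[i - 1] if i > 0 else "<s>",
            "next": words[i + 1] if i + 1 < len(words) else "</s>"}

X, y = [], []
for words, tags in train:
    for i, tag in enumerate(tags):
        X.append(window_features(words, i))
        y.append(tag)

vec = DictVectorizer()
clf = KNeighborsClassifier(n_neighbors=1).fit(vec.fit_transform(X), y)

sent = ["de", "kat", "slaapt"]
feats = vec.transform([window_features(sent, i) for i in range(len(sent))])
print(list(zip(sent, clf.predict(feats))))   # token/chunk-tag pairs
```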

  9. Oral and Literate Strategies in Spoken and Written Narratives.

    Science.gov (United States)

    Tannen, Deborah

    1982-01-01

    Discusses comparative analysis of spoken and written versions of a narrative to demonstrate that features which have been identified as characterizing oral discourse are also found in written discourse and that the written short story combines syntactic complexity expected in writing with features which create involvement expected in speaking.…

  10. Evaluation of Noisy Transcripts for Spoken Document Retrieval

    NARCIS (Netherlands)

    van der Werff, Laurens Bastiaan

    2012-01-01

    This thesis introduces a novel framework for the evaluation of Automatic Speech Recognition (ASR) transcripts in a Spoken Document Retrieval (SDR) context. The basic premise is that ASR transcripts must be evaluated by measuring the impact of noise in the transcripts on the search results of a

  11. Producing complex spoken numerals for time and space

    NARCIS (Netherlands)

    Meeuwissen, M.H.W.

    2004-01-01

    This thesis addressed the spoken production of complex numerals for time and space. The production of complex numerical expressions like those involved in telling time (e.g., 'quarter to four') or producing house numbers (e.g., 'two hundred forty-five') has been almost completely ignored. Yet, adult

  12. Spoken Idiom Recognition: Meaning Retrieval and Word Expectancy

    Science.gov (United States)

    Tabossi, Patrizia; Fanari, Rachele; Wolf, Kinou

    2005-01-01

    This study investigates recognition of spoken idioms occurring in neutral contexts. Experiment 1 showed that both predictable and non-predictable idiom meanings are available at string offset. Yet, only predictable idiom meanings are active halfway through a string and remain active after the string's literal conclusion. Experiment 2 showed that…

  13. "Context and Spoken Word Recognition in a Novel Lexicon": Correction

    Science.gov (United States)

    Revill, Kathleen Pirog; Tanenhaus, Michael K.; Aslin, Richard N.

    2009-01-01

    Reports an error in "Context and spoken word recognition in a novel lexicon" by Kathleen Pirog Revill, Michael K. Tanenhaus and Richard N. Aslin ("Journal of Experimental Psychology: Learning, Memory, and Cognition," 2008[Sep], Vol 34[5], 1207-1223). Figure 9 was inadvertently duplicated as Figure 10. Figure 9 in the original article was correct.…

  14. Lexical competition in non-native spoken-word recognition

    NARCIS (Netherlands)

    Weber, A.C.; Cutler, A.

    2004-01-01

    Six eye-tracking experiments examined lexical competition in non-native spoken-word recognition. Dutch listeners hearing English fixated longer on distractor pictures with names containing vowels that Dutch listeners are likely to confuse with vowels in a target picture name (pencil, given target

  15. In a Manner of Speaking: Assessing Frequent Spoken Figurative Idioms to Assist ESL/EFL Teachers

    Science.gov (United States)

    Grant, Lynn E.

    2007-01-01

    This article outlines criteria to define a figurative idiom, and then compares the frequent figurative idioms identified in two sources of spoken American English (academic and contemporary) to their frequency in spoken British English. This is done by searching the spoken part of the British National Corpus (BNC), to see whether they are frequent…

  16. Understanding Non-Restrictive "Which"-Clauses in Spoken English, Which Is Not an Easy Thing.

    Science.gov (United States)

    Tao, Hongyin; McCarthy, Michael J.

    2001-01-01

    Reexamines the notion of non-restrictive relative clauses (NRRCs) in light of spoken corpus evidence, based on analysis of 692 occurrences of non-restrictive "which"-clauses in British and American spoken English data. Reviews traditional conceptions of NRRCs and recent work on the broader notion of subordination in spoken grammar.…

  17. Talk About Mouth Speculums: Collocational Competence and Spoken Fluency in Non-Native English-Speaking University Lecturers

    DEFF Research Database (Denmark)

    Westbrook, Pete

    Despite the large body of research into formulaic language and fluency, there seems to be a lack of empirical evidence for how collocations, often considered a subset of formulaic language, might impact on fluency. To address this problem, this dissertation examined to what extent correlations might exist between overall language proficiency, collocational competence and spoken fluency in non-native English-speaking university lecturers. The data came from 15 20-minute mini-lectures recorded between 2009 and 2011 for an English oral proficiency test for lecturers employed at the University… fluency measures calculated for each lecturer. Initial findings across all lecturers showed no correlation between collocational competence and either overall proficiency or fluency. However, further analysis of lecturers by department revealed that possible correlations were hidden by variations…

  18. [Assessment of pragmatics from verbal spoken data].

    Science.gov (United States)

    Gallardo-Paúls, B

    2009-02-27

    Pragmatic assessment is usually complex, long and sophisticated, especially for professionals who lack specific linguistic education and interact with impaired speakers. Our aim was to design a quick method of assessment that provides a general evaluation of the pragmatic effectiveness of neurologically affected speakers; this first filter allows us to decide whether a detailed analysis of the altered categories should follow. Our starting point was the PerLA (perception, language and aphasia) profile of pragmatic assessment, designed for the comprehensive analysis of conversational data in clinical linguistics, which was then converted into a quick questionnaire. A quick protocol of pragmatic assessment is proposed and the results found in a group of children with attention deficit hyperactivity disorder are discussed.

  19. A Transcription Scheme for Languages Employing the Arabic Script Motivated by Speech Processing Application

    National Research Council Canada - National Science Library

    Ganjavi, Shadi; Georgiou, Panayiotis G; Narayanan, Shrikanth

    2004-01-01

    ... (The DARPA Babylon Program; Narayanan, 2003). In this paper, we discuss transcription systems needed for automated spoken language processing applications in Persian, which uses the Arabic script for writing...

  20. The Role of Experience in Children's Discrimination of Unfamiliar Languages

    Directory of Open Access Journals (Sweden)

    Christine E Potter

    2015-10-01

    Full Text Available Five- and six-year-old children (n=160) participated in three studies designed to explore language discrimination. After an initial exposure period (during which children heard either an unfamiliar language, a familiar language, or music), children performed an ABX discrimination task involving two unfamiliar languages that were either similar (Spanish vs. Italian) or different (Spanish vs. Mandarin). On each trial, participants heard two sentences spoken by two individuals, each spoken in an unfamiliar language. The pair was followed by a third sentence spoken in one of the two languages. Participants were asked to judge whether the third sentence was spoken by the first speaker or the second speaker. Across studies, both the difficulty of the discrimination contrast and the relation between exposure and test materials affected children’s performance. In particular, language discrimination performance was facilitated by an initial exposure to a different unfamiliar language, suggesting that experience can help tune children’s attention to the relevant features of novel languages.

  1. Word Formation below and above Little x: Evidence from Sign Language of the Netherlands

    Directory of Open Access Journals (Sweden)

    Inge Zwitserlood

    2004-01-01

    Full Text Available Although in many respects sign languages have a similar structure to that of spoken languages, the different modalities in which both types of languages are expressed cause differences in structure as well. One of the most striking differences between spoken and sign languages is the influence of the interface between grammar and PF on the surface form of utterances. Spoken language words and phrases are in general characterized by sequential strings of sounds, morphemes and words, while in sign languages we find that many phonemes, morphemes, and even words are expressed simultaneously. A linguistic model should be able to account for the structures that occur in both spoken and sign languages. In this paper, I will discuss the morphological/morphosyntactic structure of signs in Nederlandse Gebarentaal (Sign Language of the Netherlands, henceforth NGT), with special focus on the components ‘place of articulation’ and ‘handshape’. I will focus on their multiple functions in the grammar of NGT and argue that the framework of Distributed Morphology (DM), which accounts for word formation in spoken languages, is also suited to account for the formation of structures in sign languages. First I will introduce the phonological and morphological structure of NGT signs. Then, I will briefly outline the major characteristics of the DM framework. Finally, I will account for signs that have the same surface form but have a different morphological structure by means of that framework.

  2. The relationship between spoken English proficiency and participation in higher education, employment and income from two Australian censuses.

    Science.gov (United States)

    Blake, Helen L; Mcleod, Sharynne; Verdon, Sarah; Fuller, Gail

    2018-04-01

    Proficiency in the language of the country of residence has implications for an individual's level of education, employability, income and social integration. This paper explores the relationship between the spoken English proficiency of residents of Australia on census day and their educational level, employment and income, to provide insight into multilingual speakers' ability to participate in Australia as an English-dominant society. Data presented are derived from two Australian censuses (2006 and 2011), each covering over 19 million people. The proportion of Australians who reported speaking a language other than English at home was 21.5% in the 2006 census and 23.2% in the 2011 census. Multilingual speakers who also spoke English very well were more likely to have post-graduate qualifications, full-time employment and high income than monolingual English-speaking Australians. However, multilingual speakers who reported speaking English not well were much less likely to have post-graduate qualifications or full-time employment than monolingual English-speaking Australians. These findings provide insight into the socioeconomic and educational profiles of multilingual speakers, which will inform the understanding of people such as speech-language pathologists who provide them with support. The results indicate spoken English proficiency may impact participation in Australian society. These findings challenge the "monolingual mindset" by demonstrating that outcomes for multilingual speakers in education, employment and income are higher than for monolingual speakers.

  3. Conceptualization of Man's Behavioral and Physical Characteristics as Animal Metaphors in the Spoken Discourse of Khezel People

    Directory of Open Access Journals (Sweden)

    Aliakbari, Mohammad

    2013-01-01

    Full Text Available The cognitive theory of metaphor has changed our understanding of metaphor from a mere figurative device to a matter of thought. It holds that metaphors are cognitively as well as culturally motivated. Although some languages share similar images, the culture-specific aspect of animal metaphors inspired the researchers to explore this area of the metaphoric system in a local Kurdish variety and to investigate how animal metaphors are reflected in spoken discourse. To achieve this objective, the authors collected and analyzed animal expressions adopted for praise and degradation of physical and behavioral characteristics in the Khezeli dialect in Ilam, Iran. To create a representative corpus, the authors scrutinized the spoken language and oral poetry of the dialect. The collected data indicate that more wild than domestic, and more degrading than praising, animal expressions are used for man's physical and behavioral characteristics. It is also confirmed that aspects of appearance, size and physical characteristics, as well as body parts of animals, are transferred to humans. Further, users' attitudes toward animals are reflected in their metaphors. Users were also found to attach three categories of connotations to animal names: positive, positive/negative, and negative. Despite the existence of similarities in the underlying patterns of metaphoric use across languages, the research concludes that the types of animals used, their connotations and their interpretations may be worlds apart, and taking the meaning of one for another may lead to misunderstanding.

  4. The Plausibility of Tonal Evolution in the Malay Dialect Spoken in Thailand: Evidence from an Acoustic Study

    Directory of Open Access Journals (Sweden)

    Phanintra Teeranon

    2007-12-01

    Full Text Available The F0 values of vowels following voiceless consonants are higher than those of vowels following voiced consonants; high vowels have a higher F0 than low vowels. It has also been found that when high vowels follow voiced consonants, the F0 values decrease. In contrast, low vowels following voiceless consonants show increasing F0 values. In other words, the voicing of initial consonants has been found to counterbalance the intrinsic F0 values of high and low vowels (House and Fairbanks 1953, Lehiste and Peterson 1961, Lehiste 1970, Laver 1994, Teeranon 2006). To test whether these three findings are applicable to a disyllabic language, the F0 values of high and low vowels following voiceless and voiced consonants were studied in a Malay dialect of the Austronesian language family spoken in Pathumthani Province, Thailand. The data were collected from three male informants, aged 30-35. The Praat program was used for acoustic analysis. The findings revealed the influence of the voicing of initial consonants on the F0 of vowels to be greater than the influence of vowel height. Evidence from this acoustic study shows the plausibility for the Malay dialect spoken in Pathumthani to become a tonal language through the influence of initial consonants rather than through the influence of the high-low vowel dimension.
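
    The study above measured F0 with Praat. As a minimal illustration of what such a measurement computes, the sketch below estimates F0 from the autocorrelation peak of a synthetic vowel-like signal; Praat's actual algorithm is considerably more robust, and the sample rate and F0 values here are arbitrary.

```python
# Minimal autocorrelation F0 estimate on a synthetic vowel-like signal.
import numpy as np

fs = 16000                        # sample rate (Hz), arbitrary choice
t = np.arange(int(0.05 * fs)) / fs
true_f0 = 120.0                   # e.g. a vowel after a voiced consonant
signal = sum(np.sin(2 * np.pi * k * true_f0 * t) / k for k in range(1, 4))

def estimate_f0(x, fs, fmin=70.0, fmax=400.0):
    """Pick the autocorrelation peak inside the plausible pitch-lag range."""
    x = x - x.mean()
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]   # lags 0..N-1
    lo, hi = int(fs / fmax), int(fs / fmin)
    lag = lo + int(np.argmax(ac[lo:hi]))
    return fs / lag

print(f"estimated F0: {estimate_f0(signal, fs):.1f} Hz")   # ~120 Hz
```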

  5. Neural dynamics of morphological processing in spoken word comprehension: Laterality and automaticity

    Directory of Open Access Journals (Sweden)

    Caroline M. Whiting

    2013-11-01

    Full Text Available Rapid and automatic processing of grammatical complexity is argued to take place during speech comprehension, engaging a left-lateralised fronto-temporal language network. Here we address how neural activity in these regions is modulated by the grammatical properties of spoken words. We used combined magneto- and electroencephalography (MEG/EEG) to delineate the spatiotemporal patterns of activity that support the recognition of morphologically complex words in English with inflectional (-s) and derivational (-er) affixes (e.g. bakes, baker). The mismatch negativity (MMN), an index of linguistic memory traces elicited in a passive listening paradigm, was used to examine the neural dynamics elicited by morphologically complex words. Results revealed an initial peak 130-180 ms after the deviation point with a major source in left superior temporal cortex. The localisation of this early activation showed a sensitivity to two grammatical properties of the stimuli: (1) the presence of morphological complexity, with affixed words showing increased left-laterality compared to non-affixed words; and (2) the grammatical category, with affixed verbs showing greater left-lateralisation in inferior frontal gyrus compared to affixed nouns (bakes vs. beaks). This automatic brain response was additionally sensitive to semantic coherence (the meaning of the stem vs. the meaning of the whole form) in fronto-temporal regions. These results demonstrate that the spatiotemporal pattern of neural activity in spoken word processing is modulated by the presence of morphological structure, predominantly engaging the left hemisphere’s fronto-temporal language network, and does not require focused attention on the linguistic input.

  6. Morphosyntactic constructs in the development of spoken and written Hebrew text production.

    Science.gov (United States)

    Ravid, Dorit; Zilberbuch, Shoshana

    2003-05-01

    This study examined the distribution of two Hebrew nominal structures, N-N compounds and denominal adjectives, in spoken and written texts of two genres produced by 90 native-speaking participants in three age groups: eleven/twelve-year-olds (6th graders), sixteen/seventeen-year-olds (11th graders), and adults. The two constructions are later linguistic acquisitions, part of the profound lexical and syntactic changes that occur in language development during the school years. They are investigated in the context of learning how modality (speech vs. writing) and genre (biographical vs. expository texts) affect the production of continuous discourse. Participants were asked to speak and write about two topics, one biographical, describing the life of a public figure or of a friend; and another, expository, discussing one of ten topics such as the cinema, cats, or higher academic studies. N-N compounding was found to be the main device of complex subcategorization in Hebrew discourse, unrelated to genre. Denominal adjectives are a secondary subcategorizing device emerging only during the late teen years, a linguistic resource untapped until very late, more restricted to specific text types than N-N compounding, and characteristic of expository writing. Written texts were found to be denser than spoken texts lexically and syntactically as measured by number of novel N-N compounds and denominal adjectives per clause, and in older age groups this difference was found to be more pronounced. The paper contributes to our understanding of how the syntax/lexicon interface changes with age, modality and genre in the context of later language acquisition.

  7. Gray matter structure and morphosyntax within a spoken narrative in typically developing children and children with high functioning autism.

    Science.gov (United States)

    Mills, Brian D; Lai, Janie; Brown, Timothy T; Erhart, Matthew; Halgren, Eric; Reilly, Judy; Appelbaum, Mark; Moses, Pamela

    2013-01-01

    This study examined the relationship between magnetic resonance imaging (MRI)-based measures of gray matter structure and morphosyntax production in a spoken narrative in 17 typical children (TD) and 11 children with high functioning autism (HFA) between 6 and 13 years of age. In the TD group, cortical structure was related to narrative performance in the left inferior frontal gyrus (Broca's area), the right middle frontal sulcus, and the right inferior temporal sulcus. No associations were found in children with HFA. These findings suggest a systematic coupling between brain structure and spontaneous language in TD children and a disruption of these relationships in children with HFA.

  8. Open-Source Multi-Language Audio Database for Spoken Language Processing Applications

    Science.gov (United States)

    2012-12-01

    Widespread use of internet acronyms such as "brb" and "lol" occurred occasionally in casual speech, implying the assimilation of today's digital jargon in verbal communication.

  9. Detailed Phonetic Labeling of Multi-language Database for Spoken Language Processing Applications

    Science.gov (United States)

    2015-03-01

    …frontend for representing speech information. This feature set presents a detailed look at one general flavor of time-frequency features, focusing on… The next step was to segment the signal into overlapping frames, using a Kaiser window with β of 6 (similar to a Hamming window). A 512-point FFT of…
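
    The surviving fragment describes a standard framing step: overlapping frames, a Kaiser window with β = 6, and a 512-point FFT. A minimal sketch of that step follows; the frame length, hop size and sample rate are assumed values, since the report's exact settings are not preserved in the excerpt.

```python
# Framing + Kaiser windowing (beta = 6) + 512-point FFT, as described above.
import numpy as np

fs = 8000
signal = np.random.default_rng(0).standard_normal(fs)   # 1 s stand-in signal

frame_len, hop, n_fft = 400, 160, 512                    # 50 ms / 20 ms frames
window = np.kaiser(frame_len, 6.0)

# Slice the signal into overlapping frames and apply the window to each.
frames = np.stack([signal[i:i + frame_len] * window
                   for i in range(0, len(signal) - frame_len + 1, hop)])
spectra = np.abs(np.fft.rfft(frames, n=n_fft, axis=1))   # magnitude spectra

print(frames.shape, spectra.shape)    # (num_frames, 400), (num_frames, 257)
```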

  10. A word by any other intonation: fMRI evidence for implicit memory traces for pitch contours of spoken words in adult brains.

    Directory of Open Access Journals (Sweden)

    Michael Inspector

    Full Text Available OBJECTIVES: Intonation may serve as a cue for facilitated recognition and processing of spoken words, and it has been suggested that the pitch contour of spoken words is implicitly remembered. Thus, using the repetition suppression (RS) effect of BOLD-fMRI signals, we tested whether the same spoken words are differentially processed in language and auditory brain areas depending on whether or not they retain an arbitrary intonation pattern. EXPERIMENTAL DESIGN: Words were presented repeatedly in three blocks for passive and active listening tasks. There were three prosodic conditions, in each of which a different set of words was used and specific task-irrelevant intonation changes were applied: (i) all words were presented in a set, flat, monotonous pitch contour; (ii) each word had an arbitrary pitch contour that was kept constant throughout the three repetitions; (iii) each word had a different arbitrary pitch contour in each of its repetitions. PRINCIPAL FINDINGS: The repeated presentation of words with a set pitch contour resulted in robust behavioral priming effects as well as in significant RS of the BOLD signals in primary auditory cortex (BA 41), temporal areas (BA 21/22) bilaterally, and in Broca's area. However, changing the intonation of the same words on each successive repetition resulted in reduced behavioral priming and the abolition of RS effects. CONCLUSIONS: Intonation patterns are retained in memory even when the intonation is task-irrelevant. Implicit memory traces for the pitch contour of spoken words were reflected in facilitated neuronal processing in auditory and language-associated areas. Thus, the results lend support to the notion that prosody, and specifically pitch contour, is strongly associated with the memory representation of spoken words.

  11. "We call it Springbok-German!": language contact in the German communities in South Africa.

    OpenAIRE

    Franke, Katharina

    2017-01-01

    Varieties of German are spoken all over the world, some of which have been maintained for prolonged periods of time. As a result, these transplanted varieties often show traces of ongoing language contact specific to their particular context. This thesis explores one such transplanted German language variety – Springbok-German – as spoken by a small subset of German Lutherans in South Africa. Specifically, this study takes as its focus eight rural German communities acr...

  12. CROSSROADS BETWEEN EDUCATION POLICIES AND INDIGENOUS LANGUAGES MAINTENANCE IN ARGENTINA

    Directory of Open Access Journals (Sweden)

    Ana Carolina Hecht

    2010-06-01

    Full Text Available Processes of language shift have been explained by many researchers from linguistic and anthropological perspectives. This area focuses on the correlations between social processes and changes in the systems of use of a language. This article aims to address these issues. In particular, we analyze the links between educational-linguistic policy and the maintenance of the languages spoken in Argentina. In doing so, we explore this field taking into account the linguistic and educational policies implemented for indigenous languages in Argentina.

  13. Idea Sharing: Introducing English as an International Language (EIL) to Pre-Service Teachers in a "World Englishes" Course

    Science.gov (United States)

    Floris, Flora Debora

    2014-01-01

    Today, English is truly regarded as an international language. It is the most widely-learned and spoken second or foreign language in many countries. In recent years, the number of second and foreign language speakers has far exceeded the number of first language speakers of English. This dramatic change, many have argued, should be taken into…

  14. Assessment of Kivunjo as Second Language Learners' Competence ...

    African Journals Online (AJOL)

    The current study sought to assess the second language learners' competence at lexical, syntactic, morphological, comprehension and pragmatic levels. The language under study was Kivunjo dialect of Chagga, spoken in Kilimanjaro. The study involved 68 subjects who included 28 subjects who were dubbed 'the ...

  15. The role of foreign and indigenous languages in primary schools ...

    African Journals Online (AJOL)

    This article investigates the use of English and other African languages in Kenyan primary schools. English is a … For a long time, the issue of the medium of instruction, especially in primary schools, has persisted in spite of … mother tongue, they use this language for spoken classroom interaction in order to bring about…

  16. A grammar of Tadaksahak a northern Songhay language of Mali

    NARCIS (Netherlands)

    Christiansen-Bolli, Regula

    2010-01-01

    This dissertation is a descriptive grammar of the language Tadaksahak, spoken by about 30,000 people living in the easternmost part of Mali. The four chapters of the book give (1) information about the background of the group; (2) the phonological features of the language with the inventory of the

  17. A grammar of Lumun : a Kordofanian language of Sudan

    NARCIS (Netherlands)

    Smits, H.J.

    2017-01-01

    This dissertation investigates the grammar of Lumun, a Kordofanian language of the Talodi group, spoken in the Nuba Mountains of Sudan. The language has an estimated 15,000 speakers. Volume 1 offers a description of the segmental phonology and tone system. It also presents the nominal system of the

  18. A grammar of Sandawe : a Khoisan language of Tanzania

    NARCIS (Netherlands)

    Steeman, Sander

    2012-01-01

    This book presents a description of Sandawe, a Khoisan language spoken by approximately 60 000 speakers in Dodoma Region, Tanzania. The study presents an analysis of the phonology, morphology, and syntax of the language, as well as a sample of four texts. The data for this dissertation were gathered

  19. Theory of Mind and Language in Children with Cochlear Implants

    Science.gov (United States)

    Remmel, Ethan; Peters, Kimberly

    2009-01-01

    Thirty children with cochlear implants (CI children), age range 3-12 years, and 30 children with normal hearing (NH children), age range 4-6 years, were tested on theory of mind and language measures. The CI children showed little to no delay on either theory of mind, relative to the NH children, or spoken language, relative to hearing norms. The…

  20. Digital Divide: Low German and Other Minority Languages

    Science.gov (United States)

    Wiggers, Heiko

    2017-01-01

    This paper investigates the online presence of Low German, a minority language spoken in northern Germany, as well as several other European regional and minority languages. In particular, this article presents the results of two experiments, one involving "Wikipedia" and one involving "Twitter," that assess whether and to…

  1. Displacement of indigenous languages in families: A case study of ...

    African Journals Online (AJOL)

    This study examines the phenomenon of language displacement in the family domain. It looks at the languages that are spoken in the families of some educated Nigerians living in Gaborone, the capital city of Botswana. It has been observed that Nigerian families, especially those in diaspora, do not speak their mother ...

  2. Relative clause formation in the Bantu languages of South Africa ...

    African Journals Online (AJOL)

    This article discusses (verbal) relative clauses in the Bantu languages spoken in South Africa. The first part of the article offers a comparison of the relative clause formation strategies in Sotho, Tsonga, Nguni and Venda. An interesting difference between these language groups concerns the syntactic position and the ...

  3. Coaching Parents to Use Naturalistic Language and Communication Strategies

    Science.gov (United States)

    Akamoglu, Yusuf; Dinnebeil, Laurie

    2017-01-01

    Naturalistic language and communication strategies (i.e., naturalistic teaching strategies) refer to practices that are used to promote the child's language and communication skills either through verbal (e.g., spoken words) or nonverbal (e.g., gestures, signs) interactions between an adult (e.g., parent, teacher) and a child. Use of naturalistic…

  4. Japanese Non Resident Language Refresher Course; 210 Hour Course.

    Science.gov (United States)

    Defense Language Inst., Washington, DC.

    This military intelligence unit refresher course in Japanese is designed for 210 hours of audiolingual instruction. The materials, prepared by the Defense Language Institute, are intended for students with considerable intensive training in spoken and written Japanese who are preparing for a military language assignment. [Not available in hard…

  5. The Role of Writing in Classroom Second Language Acquisition.

    Science.gov (United States)

    Harklau, Linda

    2002-01-01

    Argues that writing should play a more prominent role in classroom-based studies of second language acquisition. Contends that an implicit emphasis on spoken language is the result of the historical development of the field of applied linguistics and parent disciplines of structuralist linguistics, linguistic anthropology, and child language…

  6. Introducing Nkami: A Forgotten Guang Language and People of ...

    African Journals Online (AJOL)

    This paper introduces a group of people and an endangered language called Nkami. I discuss issues concerning the historical, geo-political, religious, socio-economic and linguistic backgrounds of the people. Among others, it is shown that Nkami is a South-Guang language spoken by approximately 400 people ...

  7. A study of syllable codas in South African Sign Language

    African Journals Online (AJOL)

    Kate H

    A South African Sign Language Dictionary for Families with Young Deaf Children (SLED 2006) was used with permission … [Figure 1: Syllable structure of a CVC syllable in the word "bed".] In spoken languages … more often than not, there is a societal emphasis on 'fixing' a child's deafness and attempting to teach deaf children to …

  8. Three languages from America in contact with Spanish

    NARCIS (Netherlands)

    Bakker, D.; Sakel, J.; Stolz, T.

    2012-01-01

    Long before Europeans reached the American shores for the first time, and forced their cultures upon the indigenous population, including their languages, a great many other languages were spoken on that continent. These dated back to the original discoverers of America, who probably came from the

  9. Globalization of an African Language: Truth or Fiction? | Dzahene ...

    African Journals Online (AJOL)

    In the quest to do away with every influence of colonialism including language imperialism, following the independence of several African countries, particularly in Sub-Saharan Africa, debates arose about the possibility of the adoption of Swahili as a common language for Africa since it was the most widely-spoken African ...

  10. Mutual intelligibility between closely related languages in Europe

    NARCIS (Netherlands)

    Gooskens, C.; Heuven, van V.J.J.P.; Golubović, J.; Schüppert, A.; Swarte, F.; Voigt, S.

    2017-01-01

    By means of a large-scale web-based investigation, we established the degree of mutual intelligibility of 16 closely related spoken languages within the Germanic, Slavic and Romance language families in Europe. We first present the results of a selection of 1833 listeners representing the mutual

  11. Mutual intelligibility between closely related languages in Europe.

    NARCIS (Netherlands)

    Gooskens, Charlotte; van Heuven, Vincent; Golubovic, Jelena; Schüppert, Anja; Swarte, Femke; Voigt, Stefanie

    2018-01-01

    By means of a large-scale web-based investigation, we established the degree of mutual intelligibility of 16 closely related spoken languages within the Germanic, Slavic and Romance language families in Europe. We first present the results of a selection of 1833 listeners representing the mutual

  12. The Influence of English on British Sign Language.

    Science.gov (United States)

    Sutton-Spence, Rachel

    1999-01-01

    Details the influence of English on British Sign Language (BSL) at the syntactic, morphological, lexical, idiomatic, and phonological levels. Shows how BSL uses loan translations, fingerspellings, and the use of mouth patterns derived from English language spoken words to include elements from English. (Author/VWL)

  13. A Shared Platform for Studying Second Language Acquisition

    Science.gov (United States)

    MacWhinney, Brian

    2017-01-01

    The study of second language acquisition (SLA) can benefit from the same process of datasharing that has proven effective in areas such as first language acquisition and aphasiology. Researchers can work together to construct a shared platform that combines data from spoken and written corpora, online tutors, and Web-based experimentation. Many of…

  14. Verification of the coupled fluid/solid transfer in a CASL grid-to-rod-fretting simulation: a technical brief on the analysis of convergence behavior and demonstration of software tools for verification.

    Energy Technology Data Exchange (ETDEWEB)

    Copps, Kevin D.

    2011-12-01

    For a CASL grid-to-rod fretting problem, Sandia's Percept software was used in conjunction with the Sierra Mechanics suite to analyze the convergence behavior of the data transfer from a fluid simulation to a solid mechanics simulation. An analytic function, with properties relatively close to numerically computed fluid approximations, was chosen to represent the pressure solution in the fluid domain. The analytic pressure was interpolated on a sequence of grids on the fluid domain, and transferred onto a separate sequence of grids in the solid domain. The error in the resulting pressure in the solid domain was measured with respect to the analytic pressure. The error in pressure approached zero as both the fluid and solids meshes were refined. The convergence of the transfer algorithm was limited by whether the source grid resolution was the same or finer than the target grid resolution. In addition, using a feature coverage analysis, we found gaps in the solid mechanics code verification test suite directly relevant to the prototype CASL GTRF simulations.
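
    The verification approach described above can be illustrated in one dimension: sample an analytic "pressure" on a source (fluid-side) grid, transfer it onto a target (solid-side) grid, and measure the error against the analytic function as the grids refine. The sketch below uses linear interpolation as the transfer operator and an arbitrary smooth field; both are simplified stand-ins for the actual Percept/Sierra machinery.

```python
# 1D illustration of transfer-error convergence against an analytic field.
import numpy as np

pressure = lambda x: np.sin(2 * np.pi * x) + 0.3 * x   # smooth analytic field

for n in (8, 16, 32, 64, 128):
    x_src = np.linspace(0.0, 1.0, n + 1)               # fluid-side grid
    x_tgt = np.linspace(0.0, 1.0, (3 * n) // 4 + 1)    # coarser solid-side grid
    p_src = pressure(x_src)                            # "fluid solution"
    p_tgt = np.interp(x_tgt, x_src, p_src)             # transfer step
    err = np.max(np.abs(p_tgt - pressure(x_tgt)))      # error vs. analytic field
    print(f"n={n:4d}  max transfer error = {err:.2e}")
# Expect roughly a 4x error reduction per refinement (second-order
# interpolation) while both grids are refined together.
```

    The brief's central observation fits this picture: the measured convergence rate is limited by the relative resolution of source and target grids, so refining only one side eventually stops improving the transferred field.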

  15. The Knowledge and Perceptions of Prospective Teachers and Speech Language Therapists in Collaborative Language and Literacy Instruction

    Science.gov (United States)

    Wilson, Leanne; McNeill, Brigid; Gillon, Gail T.

    2015-01-01

    Successful collaboration among speech and language therapists (SLTs) and teachers fosters the creation of communication friendly classrooms that maximize children's spoken and written language learning. However, these groups of professionals may have insufficient opportunity in their professional study to develop the shared knowledge, perceptions…

  16. Analysis in Outline of Mam, a Mayan Language. Working Paper of the Language Behavior Research Laboratory, No. 25.

    Science.gov (United States)

    Canger, Una R.

    The primary goal of the present study is an exposition of the structure of Mam, a Mayan language of the Mamean group. Mam is the most widely spoken of the four Mamean languages, and has been roughly estimated to have a quarter million speakers located in the departments of Huehuetenango and San Marcos in Guatemala and in the state of Chiapas in…

  17. Learning a Minoritized Language in a Majority Language Context: Student Agency and the Creation of Micro-Immersion Contexts

    Science.gov (United States)

    DePalma, Renée

    2015-01-01

    This study investigates the self-reported experiences of students participating in a Galician language and culture course. Galician, a language historically spoken in northwestern Spain, has been losing ground with respect to Spanish, particularly in urban areas and among the younger generations. The research specifically focuses on informal…

  18. Translingualism and Second Language Acquisition: Language Ideologies of Gaelic Medium Education Teachers in a Linguistically Liminal Setting

    Science.gov (United States)

    Knipe, John

    2017-01-01

    Scottish Gaelic, among the nearly 7,000 languages spoken in the world today, is endangered. In the 1980s the Gaelic Medium Education (GME) movement emerged with an emphasis on teaching students all subjects via this ancient tongue with the hope of revitalizing the language. Concomitantly, many linguists have called for problematizing traditional…

  19. Best estimate plus uncertainty analysis of departure from nucleate boiling limiting case with CASL core simulator VERA-CS in response to PWR main steam line break event

    International Nuclear Information System (INIS)

    Brown, C.S.; Zhang, H.; Kucukboyaci, V.; Sung, Y.

    2016-01-01

    Highlights: • Best estimate plus uncertainty (BEPU) analyses of PWR core responses under main steam line break (MSLB) accident. • CASL’s coupled neutron transport/subchannel code VERA-CS. • Wilks’ nonparametric statistical method. • MDNBR 95/95 tolerance limit. - Abstract: VERA-CS (Virtual Environment for Reactor Applications, Core Simulator) is a coupled neutron transport and thermal-hydraulics subchannel code under development by the Consortium for Advanced Simulation of Light Water Reactors (CASL). VERA-CS was applied to simulate core behavior of a typical Westinghouse-designed 4-loop pressurized water reactor (PWR) with 17 × 17 fuel assemblies in response to two main steam line break (MSLB) accident scenarios initiated at hot zero power (HZP) at the end of the first fuel cycle with the most reactive rod cluster control assembly stuck out of the core. The reactor core boundary conditions at the most DNB limiting time step were determined by a system analysis code. The core inlet flow and temperature distributions were obtained from computational fluid dynamics (CFD) simulations. The two MSLB scenarios consisted of the high and low flow situations, where reactor coolant pumps either continue to operate with offsite power or do not continue to operate since offsite power is unavailable. The best estimate plus uncertainty (BEPU) analysis method was applied using Wilks’ nonparametric statistical approach. In this demonstration of BEPU application, 59 full core simulations were performed for each accident scenario to provide the minimum departure from nucleate boiling ratio (MDNBR) at the 95/95 (95% probability with 95% confidence level) tolerance limit. A parametric goodness-of-fit approach was also applied to the results to obtain the MDNBR value at the 95/95 tolerance limit. Initial sensitivity analysis was performed with the 59 cases per accident scenario by use of Pearson correlation coefficients. The results show that this typical PWR core

  20. Best estimate plus uncertainty analysis of departure from nucleate boiling limiting case with CASL core simulator VERA-CS in response to PWR main steam line break event

    Energy Technology Data Exchange (ETDEWEB)

    Brown, C.S., E-mail: csbrown3@ncsu.edu [Department of Nuclear Engineering, North Carolina State University, 2500 Stinson Drive, Raleigh, NC 27695-7909 (United States); Zhang, H., E-mail: Hongbin.Zhang@inl.gov [Idaho National Laboratory, P.O. Box 1625, Idaho Falls, ID 83415-3870 (United States); Kucukboyaci, V., E-mail: kucukbvn@westinghouse.com [Westinghouse Electric Company, 1000 Westinghouse Drive, Cranberry Township, PA 16066 (United States); Sung, Y., E-mail: sungy@westinghouse.com [Westinghouse Electric Company, 1000 Westinghouse Drive, Cranberry Township, PA 16066 (United States)

    2016-12-01

    Highlights: • Best estimate plus uncertainty (BEPU) analyses of PWR core responses under main steam line break (MSLB) accident. • CASL’s coupled neutron transport/subchannel code VERA-CS. • Wilks’ nonparametric statistical method. • MDNBR 95/95 tolerance limit. - Abstract: VERA-CS (Virtual Environment for Reactor Applications, Core Simulator) is a coupled neutron transport and thermal-hydraulics subchannel code under development by the Consortium for Advanced Simulation of Light Water Reactors (CASL). VERA-CS was applied to simulate core behavior of a typical Westinghouse-designed 4-loop pressurized water reactor (PWR) with 17 × 17 fuel assemblies in response to two main steam line break (MSLB) accident scenarios initiated at hot zero power (HZP) at the end of the first fuel cycle with the most reactive rod cluster control assembly stuck out of the core. The reactor core boundary conditions at the most DNB limiting time step were determined by a system analysis code. The core inlet flow and temperature distributions were obtained from computational fluid dynamics (CFD) simulations. The two MSLB scenarios consisted of the high and low flow situations, where reactor coolant pumps either continue to operate with offsite power or do not continue to operate since offsite power is unavailable. The best estimate plus uncertainty (BEPU) analysis method was applied using Wilks’ nonparametric statistical approach. In this demonstration of BEPU application, 59 full core simulations were performed for each accident scenario to provide the minimum departure from nucleate boiling ratio (MDNBR) at the 95/95 (95% probability with 95% confidence level) tolerance limit. A parametric goodness-of-fit approach was also applied to the results to obtain the MDNBR value at the 95/95 tolerance limit. Initial sensitivity analysis was performed with the 59 cases per accident scenario by use of Pearson correlation coefficients. The results show that this typical PWR core
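
    The 59-run sample size quoted in these records follows from Wilks' first-order, one-sided tolerance formula: the smallest n with 1 - γ^n ≥ β, for coverage γ = 0.95 and confidence β = 0.95. A two-line check reproduces it.

```python
# Smallest number of runs n such that the sample extreme bounds the 95th
# percentile with 95% confidence (Wilks, first order): 1 - 0.95**n >= 0.95.
n = 1
while 1 - 0.95 ** n < 0.95:
    n += 1
print(n, 1 - 0.95 ** n)   # 59, ~0.9515
```

    Since MDNBR is a minimum, the bound applies symmetrically: with 59 runs, the smallest observed MDNBR serves as a 95/95 lower tolerance limit.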

  1. TEACHING TURKISH AS SPOKEN IN TURKEY TO TURKIC SPEAKERS - TÜRK DİLLİLERE TÜRKİYE TÜRKÇESİ ÖĞRETİMİ NASIL OLMALIDIR?

    Directory of Open Access Journals (Sweden)

    Ali TAŞTEKİN

    2015-12-01

    Full Text Available Attributing different titles to the activity of teaching Turkish to non-native speakers is related to the perspective of those who conduct this activity. If Turkish language teaching centres are sub-units of the Schools of Foreign Languages and Departments of Foreign Languages of our universities, or if teachers have a foreign language background, then the title “Teaching Turkish as a Foreign Language” is adopted and claimed to be universal. In determining success at teaching and learning, the psychological perception of the educational activity and the associational power of the words used are far more important factors than the teacher, the students, the educational environment and the educational tools. For this reason, avoiding the negative connotations of the adjective “foreign” in the activity of teaching foreigners Turkish as spoken in Turkey would be beneficial. In order for the activity of Teaching Turkish as Spoken in Turkey to Turkic Speakers to be successful, it is crucial to dwell on the formal and contextual quality of the books written for this purpose. Almost none of the course books and supplementary books in the field of teaching Turkish to non-native speakers has taken Teaching Turkish as Spoken in Turkey to Turkic Speakers into consideration. The books written for the purpose of teaching Turkish to non-native speakers should be examined thoroughly in terms of content and method and should be organized in accordance with the purpose and level of readiness of the target audience. Activities of Teaching Turkish as Spoken in Turkey to Turkic Speakers are still conducted at public and private primary and secondary schools and colleges, as well as in private courses, by self-educated teachers who are trained within a master-apprentice relationship. Turkic populations who had long been parted by necessity have found the opportunity to reunite and turn towards common objectives after the dissolution of the Union of Soviet Socialist Republics. This recent

  2. Assessment of Dyslexia in the Urdu Language

    NARCIS (Netherlands)

    Haidry, Sana

    2017-01-01

    Urdu is spoken by more than 500 million people around the world but still is an under-researched language. The studies presented in this thesis focus on typical and poor literacy development in Urdu-speaking children during early reading acquisition. In the first study, we developed and validated a

  3. Word order in Russian Sign Language

    NARCIS (Netherlands)

    Kimmelman, V.

    2012-01-01

    The article discusses word order, the syntactic arrangement of words in a sentence, clause, or phrase, as one of the most crucial aspects of the grammar of any spoken language. It aims to investigate the order of the primary constituents, which can be subject, object, or verb, of a simple

  4. The Linguist and the English Language.

    Science.gov (United States)

    Quirk, Randolph

    This collection of essays focuses on linguistic investigations of English, both spoken and written. The 12 chapters deal with Charles Dickens' linguistic criticism; eighteenth century prescriptivism; the relevance of language study to the study of Shakespeare; obstacles to the study of Old and Middle English; the contributions of R. G. Latham to…

  5. Hausa Language in Information and Communication Technology ...

    African Journals Online (AJOL)

    ... especially the human use of spoken or written words as a communication system. It is against this background that this study examined the use of the Hausa language in information and communication technology, specifically as a medium for the dissemination of information and/or communication. The Information Manager Vol.

  6. The English Language of the Nigeria Police

    Science.gov (United States)

    Chinwe, Udo Victoria

    2015-01-01

    In present-day Nigeria, the quality of the English language spoken by Nigerians is perceived to have been deteriorating and needs urgent attention. The proliferation of books and articles in recent years can be seen as a natural outgrowth of the attention and recognition the subject has received as a matter of discourse. Evidently, every profession,…

  7. Corpus Linguistics: Discovering How We Use Language.

    Science.gov (United States)

    Rosenthal, John

    2003-01-01

    Highlights the use of corpus linguistics--or the study of language through the use of a large collection of naturally occurring written and spoken texts. Discusses corpora with computers, applications of corpus linguistics, and the University of Pennsylvania's Linguistic Data Consortium, which is conducting a speech study to support linguistic…

  8. Listening in first and second language

    NARCIS (Netherlands)

    Farrell, J.; Cutler, A.; Liontas, J.I.

    2018-01-01

    Listeners' recognition of spoken language involves complex decoding processes: The continuous speech stream must be segmented into its component words, and words must be recognized despite great variability in their pronunciation (due to talker differences, or to influence of phonetic context, or to

  9. Second Language Learners' Attitudes towards English Varieties

    Science.gov (United States)

    Zhang, Weimin; Hu, Guiling

    2008-01-01

    This pilot project investigates second language (L2) learners' attitudes towards three varieties of English: American (AmE), British (BrE) and Australian (AuE). A 69-word passage spoken by a female speaker of each variety was used. Participants were 30 Chinese students pursuing Masters or Doctoral degrees in the United States, who listened to each…

  10. Conceptual Representation of Actions in Sign Language

    Science.gov (United States)

    Dobel, Christian; Enriquez-Geppert, Stefanie; Hummert, Marja; Zwitserlood, Pienie; Bolte, Jens

    2011-01-01

    The idea that knowledge of events entails a universal spatial component, that is, conceiving of agents to the left of patients, was put to the test by investigating native users of German Sign Language and native users of spoken German. Participants heard or saw event descriptions and had to illustrate the meaning of these events by means of drawing or arranging…

  11. Spoken Document Retrieval Based on Confusion Network with Syllable Fragments

    Directory of Open Access Journals (Sweden)

    Zhang Lei

    2012-11-01

    Full Text Available This paper addresses the problem of spoken document retrieval under noisy conditions through a sound choice of the basic unit and of the output form of the speech recognition system. Syllable fragments are combined with confusion networks in a spoken document retrieval task. After selecting an appropriate syllable fragment, a lattice is converted into a confusion network, which minimizes the word error rate instead of maximizing the whole-sentence recognition rate. A vector space model is adopted in the retrieval task, where tf-idf weights are derived from the posterior probabilities. The confusion network with syllable fragments improves the mean average precision (MAP) score by 0.342 and 0.066 over the one-best scheme and the lattice, respectively.
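
    The tf-idf weighting described above replaces integer term counts with expected counts summed from confusion-network posterior probabilities. A minimal Python sketch of that idea (the data layout and names are assumptions, not the authors' code):

        import math

        # Each spoken document is a list of confusion-network slots; each
        # slot maps candidate syllable fragments to posterior probabilities.
        docs = {
            "doc1": [{"ba": 0.7, "pa": 0.3}, {"shi": 0.9, "si": 0.1}],
            "doc2": [{"ba": 0.2, "ma": 0.8}],
        }

        def soft_tf(slots):
            # "Soft" term frequency: the sum of a unit's posteriors over
            # all slots is its expected count in the document.
            tf = {}
            for slot in slots:
                for unit, post in slot.items():
                    tf[unit] = tf.get(unit, 0.0) + post
            return tf

        def tf_idf(docs):
            tfs = {d: soft_tf(s) for d, s in docs.items()}
            n = len(docs)
            df = {}
            for tf in tfs.values():
                for unit in tf:
                    df[unit] = df.get(unit, 0) + 1
            return {d: {u: f * math.log(n / df[u]) for u, f in tf.items()}
                    for d, tf in tfs.items()}

        weights = tf_idf(docs)
        # Retrieval then scores a query against these weight vectors, e.g.
        # with cosine similarity in a standard vector space model.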

  12. Play to Learn: Self-Directed Home Language Literacy Acquisition through Online Games

    Science.gov (United States)

    Eisenchlas, Susana A.; Schalley, Andrea C.; Moyes, Gordon

    2016-01-01

    Home language literacy education in Australia has been pursued predominantly through Community Language Schools. At present, some 1,000 of these, attended by over 100,000 school-age children, cater for 69 of the over 300 languages spoken in Australia. Despite good intentions, these schools face a number of challenges. For instance, children may…

  13. Morphosyntactic correctness of written language production in adults with moderate to severe congenital hearing loss

    NARCIS (Netherlands)

    Huysmans, E.; de Jong, J.; Festen, J.M.; Coene, M.M.R.; Goverts, S.T.

    Objective To examine whether moderate to severe congenital hearing loss (MSCHL) leads to persistent morphosyntactic problems in the written language production of adults, as it does in their spoken language production. Design Samples of written language in Dutch were analysed for morphosyntactic

  14. Morphosyntactic correctness of written language production in adults with moderate to severe congenital hearing loss

    NARCIS (Netherlands)

    Huysmans, Elke; de Jong, Jan; Festen, Joost M.; Coene, Martine M.R.; Goverts, S. Theo

    2017-01-01

    Objective To examine whether moderate to severe congenital hearing loss (MSCHL) leads to persistent morphosyntactic problems in the written language production of adults, as it does in their spoken language production. Design Samples of written language in Dutch were analysed for morphosyntactic

  15. The Link between Form and Meaning in American Sign Language: Lexical Processing Effects

    Science.gov (United States)

    Thompson, Robin L.; Vinson, David P.; Vigliocco, Gabriella

    2009-01-01

    Signed languages exploit iconicity (the transparent relationship between meaning and form) to a greater extent than spoken languages, where it is largely limited to onomatopoeia. In a picture-sign matching experiment measuring reaction times, the authors examined the potential advantage of iconicity both for 1st- and 2nd-language learners of…

  16. Language and Literacy Development of Deaf and Hard-of-Hearing Children: Successes and Challenges

    Science.gov (United States)

    Lederberg, Amy R.; Schick, Brenda; Spencer, Patricia E.

    2013-01-01

    Childhood hearing loss presents challenges to language development, especially spoken language. In this article, we review existing literature on deaf and hard-of-hearing (DHH) children's patterns and trajectories of language as well as development of theory of mind and literacy. Individual trajectories vary significantly, reflecting access to…

  17. Language Education Policies and Inequality in Africa: Cross-National Empirical Evidence

    Science.gov (United States)

    Coyne, Gary

    2015-01-01

    This article examines the relationship between inequality and education through the lens of colonial language education policies in African primary and secondary school curricula. The languages of former colonizers almost always occupy important places in society, yet they are not widely spoken as first languages, meaning that most people depend…

  18. The Ecology of Language in Classrooms at a University in Eastern Ukraine

    Science.gov (United States)

    Tarnopolsky, Oleg B.; Goodman, Bridget A.

    2014-01-01

    Using an ecology of language framework, the purpose of this study was to examine the degree to which English as a medium of instruction (EMI) at a private university in eastern Ukraine allows for the use of Ukrainian, the state language, or Russian, the language predominantly spoken in large cities in eastern Ukraine. Uses of English and Russian…

  19. Documentation and Revitalization of the Zhuang Language and Culture of Southwestern China through Linguistic Fieldwork

    Science.gov (United States)

    Bodomo, Adams

    2010-01-01

    This article outlines innovative strategies, methods, and techniques for the documentation and revitalization of "Zhuang" language and culture through linguistic fieldwork. Zhuang, a Tai-Kadai language spoken mainly in the rural areas of the Guangxi Zhuang Autonomous Region of southwestern China, is the largest minority language in…

  20. Beyond languages, beyond modalities: transforming the study of semiotic repertoires : Introduction

    NARCIS (Netherlands)

    Spotti, Max

    2017-01-01

    This paper presents a critical examination of key concepts in the study of (signed and spoken) language and multimodality. It shows how shifts in conceptual understandings of language use, moving from bilingualism to multilingualism and (trans)languaging, have resulted in the revitalisation of the

  1. Mother Tongue versus Arabic: The Post-Independence Eritrean Language Policy Debate

    Science.gov (United States)

    Mohammad, Abdulkader Saleh

    2016-01-01

    This paper analyses the controversial discourses around the significance of the Arabic language in Eritrea. It challenges the arguments of the government and some scholars, who claim that the Arabic language is alien to Eritrean society. They argue that it was introduced as an official language under British rule and is only spoken by the Rashaida…

  2. A real time study of contact-induced language change in Frisian relative pronouns

    NARCIS (Netherlands)

    Dijkstra, J.E.; Heeringa, W.J.; Yilmaz, E.; van den Heuvel, H.; van Leeuwen, D.; Van de Velde, Hans; Babatsouli, E.

    2017-01-01

    Many minority languages are subject to linguistic interferences from a more prestigious language, for example the country’s majority language. This is also the case with (West-)Frisian, spoken in the province of Fryslân in the north of the Netherlands. The current real-time study investigates the

  3. The Development of Language and Reading Skills in Children with Down's Syndrome.

    Science.gov (United States)

    Buckley, Sue; And Others

    The book summarizes the current state of knowledge concerning language development in children with Down Syndrome (DS). The first chapter reviews language development in normal children, noting such stages as gestures, first sounds, development of understanding, first spoken words, and the two-word stage. The next chapter examines language skills…

  4. 25 CFR 39.131 - What is a Language Development Program?

    Science.gov (United States)

    2010-04-01

    ... (a) Are not proficient in spoken or written English; (b) Are not proficient in any language; (c) Are... (25 CFR § 39.131, “What is a Language Development Program?”, Indian School Equalization Program, Indian School Equalization Formula, Language Development Programs, 2010 edition.)

  5. The Influence of Teacher Power on English Language Learners' Self-Perceptions of Learner Empowerment

    Science.gov (United States)

    Diaz, Abel; Cochran, Kathryn; Karlin, Nancy

    2016-01-01

    English language learners (ELL) are students enrolled in U.S. educational settings whose primary spoken language is not English. As ELL students take on the challenges of learning English and U.S. culture, they must also learn academic content. The expectation to succeed academically in a foreign culture and language, while learning to speak…

  6. Music and early language acquisition.

    Science.gov (United States)

    Brandt, Anthony; Gebrian, Molly; Slevc, L Robert

    2012-01-01

    Language is typically viewed as fundamental to human intelligence. Music, while recognized as a human universal, is often treated as an ancillary ability - one dependent on or derivative of language. In contrast, we argue that it is more productive from a developmental perspective to describe spoken language as a special type of music. A review of existing studies presents a compelling case that musical hearing and ability is essential to language acquisition. In addition, we challenge the prevailing view that music cognition matures more slowly than language and is more difficult; instead, we argue that music learning matches the speed and effort of language acquisition. We conclude that music merits a central place in our understanding of human development.

  7. Music and Early Language Acquisition

    Science.gov (United States)

    Brandt, Anthony; Gebrian, Molly; Slevc, L. Robert

    2012-01-01

    Language is typically viewed as fundamental to human intelligence. Music, while recognized as a human universal, is often treated as an ancillary ability – one dependent on or derivative of language. In contrast, we argue that it is more productive from a developmental perspective to describe spoken language as a special type of music. A review of existing studies presents a compelling case that musical hearing and ability is essential to language acquisition. In addition, we challenge the prevailing view that music cognition matures more slowly than language and is more difficult; instead, we argue that music learning matches the speed and effort of language acquisition. We conclude that music merits a central place in our understanding of human development. PMID:22973254

  8. Talker and background noise specificity in spoken word recognition memory

    Directory of Open Access Journals (Sweden)

    Angela Cooper

    2017-11-01

    Full Text Available Prior research has demonstrated that listeners are sensitive to changes in the indexical (talker-specific) characteristics of speech input, suggesting that these signal-intrinsic features are integrally encoded in memory for spoken words. Given that listeners frequently must contend with concurrent environmental noise, to what extent do they also encode signal-extrinsic details? Native English listeners’ explicit memory for spoken English monosyllabic and disyllabic words was assessed as a function of consistency versus variation in the talker’s voice (talker condition) and background noise (noise condition), using a delayed recognition memory paradigm. The speech and noise signals were spectrally separated, such that changes in a simultaneously presented non-speech signal (background noise) from exposure to test would not be accompanied by concomitant changes in the target speech signal. The results revealed that listeners can encode both signal-intrinsic talker and signal-extrinsic noise information into integrated cognitive representations, critically even when the two auditory streams are spectrally non-overlapping. However, the extent to which extra-linguistic episodic information is encoded alongside linguistic information appears to be modulated by syllabic characteristics, with specificity effects found only for monosyllabic items. These findings suggest that encoding and retrieval of episodic information during spoken word processing may be modulated by lexical characteristics.

  9. INDIVIDUAL ACCOUNTABILITY IN COOPERATIVE LEARNING: MORE OPPORTUNITIES TO PRODUCE SPOKEN ENGLISH

    Directory of Open Access Journals (Sweden)

    Puji Astuti

    2017-05-01

    Full Text Available The contribution of cooperative learning (CL) in promoting second and foreign language learning has been widely acknowledged. Little scholarly attention, however, has been given to revealing how this teaching method works and promotes learners’ improved communicative competence. This qualitative case study explores the important role that individual accountability in CL plays in giving English as a Foreign Language (EFL) learners in Indonesia the opportunity to use the target language of English. While individual accountability is a principle of and one of the activities in CL, it is currently understudied, so little is known about how it enhances EFL learning. This study aims to address this gap by conducting a constructivist grounded theory analysis of participant observation, in-depth interview, and document analysis data drawn from two secondary school EFL teachers, 77 students in the observed classrooms, and four focal students. The analysis shows that through individual accountability in CL, the EFL learners had opportunities to use the target language, which may have contributed to the attainment of communicative competence, the goal of the EFL instruction. More specifically, compared to the use of conventional group work in the observed classrooms, through the activities of individual accountability in CL, i.e., performances and peer interaction, the EFL learners had more opportunities to use spoken English. The present study recommends that teachers, especially those new to CL, follow the preset procedures of selected CL instructional strategies or structures in order to recognize the activities within individual accountability in CL and understand how these activities benefit students.

  10. From Speech to Writing: Some Evidence on the Relationship between Oracy and Literacy from the Bristol Study "Language at Home and at School."

    Science.gov (United States)

    Wells, Gordon; And Others

    Prepared as part of a British project investigating children's language at home and at school, the study described in this paper centered on an examination of the spoken and written narrative texts produced by children to determine (1) the relationship between spoken and written texts; (2) the differences, if any, in the production processes used…

  11. School of Romance Languages

    Directory of Open Access Journals (Sweden)

    Nikolai V. Ivanov

    2014-01-01

    Full Text Available The Department of Romance Languages (Italian, Portuguese and Latin), named after Professor T.Z. Cherdantseva, was created on November 26, 2002. The main task of the department is the professionally oriented teaching of Italian and Portuguese (both as a first and as a second language) for all faculties of MGIMO-University, in all majors and minors, at both the undergraduate and graduate levels. Special attention is paid to teaching courses on socio-political, economic and legal translation. Teaching begins at zero level, and by the end of training a student reaches a high level of proficiency. In accordance with agreements with the ICA (Portugal), a lecturer from the Instituto Camões (Portugal), João Mendonça, conducts classes on spoken language, listening and abstracting. He also lectures on the history and culture of Portugal and co-authored (with G. Petrova) the textbook "Portuguese for Beginners".

  12. Intelligibility of American English vowels and consonants spoken by international students in the United States.

    Science.gov (United States)

    Jin, Su-Hyun; Liu, Chang

    2014-04-01

    PURPOSE The purpose of this study was to examine the intelligibility of English consonants and vowels produced by Chinese-native (CN) and Korean-native (KN) students enrolled in American universities. METHOD 16 English-native (EN), 32 CN, and 32 KN speakers participated in this study. The intelligibility of 16 American English consonants and 16 vowels spoken by native and nonnative speakers of English was evaluated by EN listeners. All nonnative speakers also completed a survey of their language backgrounds. RESULTS Although the intelligibility of consonants and diphthongs for nonnative speakers was comparable to that of native speakers, the intelligibility of monophthongs was significantly lower for CN and KN speakers than for EN speakers. Sociolinguistic factors such as the age of arrival in the United States and daily use of English, as well as a linguistic factor, difference in vowel space between native (L1) and nonnative (L2) language, partially contributed to vowel intelligibility for CN and KN groups. There was no significant correlation between the length of U.S. residency and phoneme intelligibility. CONCLUSION Results indicated that the major difficulty in phonemic production in English for Chinese and Korean speakers is with vowels rather than consonants. This might be useful for developing training methods to improve English intelligibility for foreign students in the United States.

  13. A re-examination of (the) same using data from spoken English

    Directory of Open Access Journals (Sweden)

    Jean Wong

    2008-04-01

    Full Text Available This paper reports on a qualitative discourse analysis of 290 tokens of (the) same occurring in spoken American English. Our study of these naturally occurring tokens extends and elaborates on the analysis of this expression that was proposed by Halliday and Hasan (1976). We also review other prior research on (the) same in our attempt to provide data-based answers to the following three questions: (1) under what conditions is the definite article the obligatory or optional with same? (2) what are the head nouns that typically follow same, and why is there sometimes no head noun? (3) what type(s) of cohesive relationships can (the) same signal in spoken English discourse? Finally, we explore some typical pedagogical treatments of (the) same in current ESL/EFL textbooks and reference grammars. Then we make our own suggestions regarding how teachers of English as a second or foreign language might go about presenting this useful expression to their learners.

  14. The Differences Between Men And Women Language Styles In Writing Twitter Updates

    OpenAIRE

    FATIN, MARSHELINA

    2014-01-01

    Fatin, Marshelina. 2013. The Differences between Men and Women Language Styles in Writing Twitter Updates. Study Program of English, Universitas Brawijaya. Supervisor: Isti Purwaningtyas; Co-supervisor: Muhammad Rozin. Keywords: Twitter, Twitter updates, Language style, Men language, Women language. The language used by people shows many differences, and these differences are associated with men and women, that is, with gender. If there are differences in spoken language, written lang...

  15. Phase Transition in a Sexual Age-Structured Model of Learning Foreign Languages

    Science.gov (United States)

    Schwämmle, V.

    The understanding of language competition helps us to predict the extinction and survival of languages spoken by minorities. A simple agent-based model of a sexual population, based on the Penna model, is built in order to find out under which circumstances one language dominates the others. This model considers that only young people learn foreign languages. The simulations show a first-order phase transition of the ratio between the numbers of speakers of different languages, with the mutation rate as the control parameter.
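
    To make the setup above concrete, here is a deliberately reduced Python sketch of two-language competition in an age-structured population. It omits the Penna bit-string genetics entirely and invents all parameter values, so it illustrates only the qualitative mechanism (young agents adopt the majority language; "mutation" randomizes a newborn's language), not the paper's model:

        import random

        def simulate(n=5000, steps=400, mutation=0.01, max_age=60, learn_age=15):
            # Each agent is (age, language), with language in {0, 1}.
            pop = [(random.randrange(max_age), random.randrange(2)) for _ in range(n)]
            for _ in range(steps):
                frac1 = sum(lang for _, lang in pop) / len(pop)
                nxt = []
                for age, lang in pop:
                    age += 1
                    if age >= max_age:
                        # Death; replace with a newborn. With probability
                        # 'mutation' the newborn gets a random language,
                        # otherwise it samples the current distribution.
                        age = 0
                        if random.random() < mutation:
                            lang = random.randrange(2)
                        else:
                            lang = 1 if random.random() < frac1 else 0
                    elif age <= learn_age:
                        # Only young agents learn: drift toward the majority
                        # language at a rate set by its current advantage.
                        if random.random() < abs(frac1 - 0.5):
                            lang = 1 if frac1 > 0.5 else 0
                    nxt.append((age, lang))
                pop = nxt
            return sum(lang for _, lang in pop) / len(pop)

        # Low mutation: one language takes over (ratio near 0 or 1).
        # High mutation: coexistence near 0.5 -- the transition the
        # abstract reports, rendered here only in caricature.
        print(simulate(mutation=0.001), simulate(mutation=0.3))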

  16. Language Planning for the 21st Century: Revisiting Bilingual Language Policy for Deaf Children

    NARCIS (Netherlands)

    Knoors, H.E.T.; Marschark, M.

    2012-01-01

    For over 25 years in some countries and more recently in others, bilingual education involving sign language and the written/spoken vernacular has been considered an essential educational intervention for deaf children. With the recent growth in universal newborn hearing screening and technological

  17. Schooling Transnational Speakers of the Societal Language: Language Variation Policy-Making in Madrid and Toronto

    Science.gov (United States)

    Schecter, Sandra R.; García Parejo, Isabel; Ambadiang, Théophile; James, Carl E.

    2014-01-01

    A cross-national comparative study in Toronto, Ontario, Canada and Madrid, Spain examines educational policies and practices that target immigrant students for whom the language variety normally spoken in the host country represents a second dialect. Policy contexts and schooling environments of the two urban centres were analyzed to gain deeper…

  18. The representation of language within language : A syntactico-pragmatic typology of direct speech

    NARCIS (Netherlands)

    de Vries, M.

    The recursive phenomenon of direct speech (quotation) comes in many different forms, and it is arguably an important and widely used ingredient of both spoken and written language. This article builds on (and provides indirect support for) the idea that quotations are to be defined pragmatically as

  19. DIFFERENCES BETWEEN AMERICAN SIGN LANGUAGE (ASL) AND BRITISH SIGN LANGUAGE (BSL)

    Directory of Open Access Journals (Sweden)

    Zora JACHOVA

    2008-06-01

    Full Text Available In the communication of deaf people among themselves and with hearing people there are three basic aspects of interaction: gesture, finger signs and writing. The gesture is a conventionally agreed manner of communication with the help of the hands, accompanied by facial and body mimicry. Gestures and movements pre-date speech; they first served to mark something and later to emphasize spoken expression. Stokoe was the first linguist to realise that signs are not unanalysable wholes. He analysed signs into meaningless component parts that he called “cheremes”, which many linguists today call phonemes. He created three main phoneme categories: hand position, location and movement. Sign languages, like spoken languages, have a background in the distant past. They developed in parallel with the development of spoken language and underwent many historical changes. Therefore, today they do not represent a replacement of spoken language but are languages themselves in the real sense of the word. Although the structure of the English language used in the USA and in Great Britain is the same, their sign languages, ASL and BSL, are different.

  20. Language Impairments in the Development of Sign: Do They Reside in a Specific Modality or Are They Modality-Independent Deficits?

    Science.gov (United States)

    Woll, Bencie; Morgan, Gary

    2012-01-01

    Various theories of developmental language impairments have sought to explain these impairments in modality-specific ways--for example, that the language deficits in SLI or Down syndrome arise from impairments in auditory processing. Studies of signers with language impairments, especially those who are bilingual in a spoken language as well as a…