WorldWideScience

Sample records for spoken language interaction

  1. Does textual feedback hinder spoken interaction in natural language?

    Science.gov (United States)

    Le Bigot, Ludovic; Terrier, Patrice; Jamet, Eric; Botherel, Valerie; Rouet, Jean-Francois

    2010-01-01

    The aim of the study was to determine the influence of textual feedback on the content and outcome of spoken interaction with a natural language dialogue system. More specifically, the assumption that textual feedback could disrupt spoken interaction was tested in a human-computer dialogue situation. In total, 48 adult participants, familiar with the system, had to find restaurants based on simple or difficult scenarios using a real natural language service system in a speech-only (phone), speech plus textual dialogue history (multimodal) or text-only (web) modality. The linguistic contents of the dialogues differed as a function of modality, but were similar whether the textual feedback was included in the spoken condition or not. These results add to burgeoning research efforts on multimodal feedback by suggesting that textual feedback may have little or no detrimental effect on information searching with a real system. STATEMENT OF RELEVANCE: The results suggest that adding textual feedback to interfaces for human-computer dialogue could enhance spoken interaction rather than create interference. The literature currently suggests that adding textual feedback to tasks that depend on the visual sense benefits human-computer interaction. The study investigated the addition of textual output when the spoken modality is heavily taxed by the task.

  2. Teaching the Spoken Language.

    Science.gov (United States)

    Brown, Gillian

    1981-01-01

    Issues involved in teaching and assessing communicative competence are identified and applied to adolescent native English speakers with low levels of academic achievement. A distinction is drawn between transactional versus interactional speech, short versus long speaking turns, and spoken language influenced or not influenced by written…

  3. Spoken language interaction with model uncertainty: an adaptive human-robot interaction system

    Science.gov (United States)

    Doshi, Finale; Roy, Nicholas

    2008-12-01

    Spoken language is one of the most intuitive forms of interaction between humans and agents. Unfortunately, agents that interact with people using natural language often experience communication errors and do not correctly understand the user's intentions. Recent systems have successfully used probabilistic models of speech, language and user behaviour to generate robust dialogue performance in the presence of noisy speech recognition and ambiguous language choices, but decisions made using these probabilistic models are still prone to errors owing to the complexity of acquiring and maintaining a complete model of human language and behaviour. In this paper, a decision-theoretic model for human-robot interaction using natural language is described. The algorithm is based on the Partially Observable Markov Decision Process (POMDP), which allows agents to choose actions that are robust not only to uncertainty from noisy or ambiguous speech recognition but also unknown user models. Like most dialogue systems, a POMDP is defined by a large number of parameters that may be difficult to specify a priori from domain knowledge, and learning these parameters from the user may require an unacceptably long training period. An extension to the POMDP model is described that allows the agent to acquire a linguistic model of the user online, including new vocabulary and word choice preferences. The approach not only avoids a training period of constant questioning as the agent learns, but also allows the agent actively to query for additional information when its uncertainty suggests a high risk of mistakes. The approach is demonstrated both in simulation and on a natural language interaction system for a robotic wheelchair application.
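
    The belief-tracking idea at the core of a POMDP-style dialogue manager can be sketched in a few lines: maintain a probability distribution over user intents, update it with Bayes' rule after each noisy speech observation, and act only when uncertainty is low. The intents, observation likelihoods, and confirmation threshold below are illustrative assumptions, not the paper's actual model.

```python
# Minimal sketch of POMDP-style belief tracking for a spoken dialogue agent.
# All intents, observation probabilities, and thresholds are invented here
# for illustration; a real system learns or specifies these carefully.

def update_belief(belief, observation, obs_model):
    """Bayesian update: P(intent | obs) is proportional to P(obs | intent) * P(intent)."""
    posterior = {i: obs_model[i].get(observation, 0.01) * p for i, p in belief.items()}
    total = sum(posterior.values())
    return {i: p / total for i, p in posterior.items()}

def choose_action(belief, confirm_threshold=0.8):
    """Act on the most likely intent only when confident; otherwise ask to confirm."""
    intent, p = max(belief.items(), key=lambda kv: kv[1])
    return ("execute", intent) if p >= confirm_threshold else ("confirm", intent)

# Uniform prior over two hypothetical wheelchair commands.
belief = {"go_to_kitchen": 0.5, "go_to_bedroom": 0.5}
obs_model = {
    "go_to_kitchen": {"kitchen": 0.8, "bedroom": 0.1},
    "go_to_bedroom": {"kitchen": 0.1, "bedroom": 0.8},
}

belief = update_belief(belief, "kitchen", obs_model)  # noisy ASR heard "kitchen"
action = choose_action(belief)
print(action)
```

    The "query when uncertain" behavior described in the abstract falls out of the threshold: a low-confidence belief triggers a confirmation question instead of a risky action.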

  4. Spoken Language Understanding Software for Language Learning

    Directory of Open Access Journals (Sweden)

    Hassan Alam

    2008-04-01

    Full Text Available In this paper we describe a preliminary, work-in-progress Spoken Language Understanding Software (SLUS) with tailored feedback options, which uses an interactive spoken language interface to teach Iraqi Arabic and culture to second language learners. The SLUS analyzes input speech from the second language learner and grades it for correct pronunciation in terms of supra-segmental and rudimentary segmental errors, such as missing consonants. We evaluated this software on training data with the help of two native speakers, and found that it recorded an accuracy of around 70% in the law-and-order domain. For future work, we plan to develop similar systems for multiple languages.

  5. Gesture in Multiparty Interaction: A Study of Embodied Discourse in Spoken English and American Sign Language

    Science.gov (United States)

    Shaw, Emily P.

    2013-01-01

    This dissertation is an examination of gesture in two game nights: one in spoken English between four hearing friends and another in American Sign Language between four Deaf friends. Analyses of gesture have shown there exists a complex integration of manual gestures with speech. Analyses of sign language have implicated the body as a medium…

  6. Effects of Tasks on Spoken Interaction and Motivation in English Language Learners

    Science.gov (United States)

    Carrero Pérez, Nubia Patricia

    2016-01-01

    Task based learning (TBL) or Task based learning and teaching (TBLT) is a communicative approach widely applied in settings where English has been taught as a foreign language (EFL). It has been documented as greatly useful to improve learners' communication skills. This research intended to find the effect of tasks on students' spoken interaction…

  7. Native language, spoken language, translation and trade

    OpenAIRE

    Jacques Melitz; Farid Toubal

    2012-01-01

    We construct new series for common native language and common spoken language for 195 countries, which we use together with series for common official language and linguistic proximity in order to draw inferences about (1) the aggregate impact of all linguistic factors on bilateral trade, (2) whether the linguistic influences come from ethnicity and trust or ease of communication, and (3) insofar as they come from ease of communication, to what extent translation and interpreters play a role...

  8. Talk or Chat? Chatroom and Spoken Interaction in a Language Classroom

    Science.gov (United States)

    Hamano-Bunce, Douglas

    2011-01-01

    This paper describes a study comparing chatroom and face-to-face oral interaction for the purposes of language learning in a tertiary classroom in the United Arab Emirates. It uses transcripts analysed for Language Related Episodes, collaborative dialogues, thought to be externally observable examples of noticing in action. The analysis is…

  9. ELSIE: The Quick Reaction Spoken Language Translation (QRSLT)

    National Research Council Canada - National Science Library

    Montgomery, Christine

    2000-01-01

    The objective of this effort was to develop a prototype, hand-held or body-mounted spoken language translator to assist military and law enforcement personnel in interacting with non-English-speaking people...

  10. Spoken language corpora for the nine official African languages of ...

    African Journals Online (AJOL)

    Spoken language corpora for the nine official African languages of South Africa. Jens Allwood, AP Hendrikse. Abstract. In this paper we give an outline of a corpus planning project which aims to develop linguistic resources for the nine official African languages of South Africa in the form of corpora, more specifically spoken ...

  11. Professionals' Guidance about Spoken Language Multilingualism and Spoken Language Choice for Children with Hearing Loss

    Science.gov (United States)

    Crowe, Kathryn; McLeod, Sharynne

    2016-01-01

    The purpose of this research was to investigate factors that influence professionals' guidance of parents of children with hearing loss regarding spoken language multilingualism and spoken language choice. Sixteen professionals who provide services to children and young people with hearing loss completed an online survey, rating the importance of…

  12. Direction Asymmetries in Spoken and Signed Language Interpreting

    Science.gov (United States)

    Nicodemus, Brenda; Emmorey, Karen

    2013-01-01

    Spoken language (unimodal) interpreters often prefer to interpret from their non-dominant language (L2) into their native language (L1). Anecdotally, signed language (bimodal) interpreters express the opposite bias, preferring to interpret from L1 (spoken language) into L2 (signed language). We conducted a large survey study ("N" =…

  13. Spoken Grammar and Its Role in the English Language Classroom

    Science.gov (United States)

    Hilliard, Amanda

    2014-01-01

    This article addresses key issues and considerations for teachers wanting to incorporate spoken grammar activities into their own teaching and also focuses on six common features of spoken grammar, with practical activities and suggestions for teaching them in the language classroom. The hope is that this discussion of spoken grammar and its place…

  14. Deep bottleneck features for spoken language identification.

    Directory of Open Access Journals (Sweden)

    Bing Jiang

    Full Text Available A key problem in spoken language identification (LID) is to design effective representations which are specific to language information. For example, in recent years, representations based on both phonotactic and acoustic features have proven their effectiveness for LID. Although advances in machine learning have led to significant improvements, LID performance is still lacking, especially for short duration speech utterances. With the hypothesis that language information is weak and represented only latently in speech, and is largely dependent on the statistical properties of the speech content, existing representations may be insufficient. Furthermore, they may be susceptible to the variations caused by different speakers, specific content of the speech segments, and background noise. To address this, we propose using Deep Bottleneck Features (DBF) for spoken LID, motivated by the success of Deep Neural Networks (DNN) in speech recognition. We show that DBFs can form a low-dimensional compact representation of the original inputs with a powerful descriptive and discriminative capability. To evaluate the effectiveness of this, we design two acoustic models, termed DBF-TV and parallel DBF-TV (PDBF-TV), using a DBF based i-vector representation for each speech utterance. Results on NIST language recognition evaluation 2009 (LRE09) show significant improvements over state-of-the-art systems. By fusing the output of phonotactic and acoustic approaches, we achieve an EER of 1.08%, 1.89% and 7.01% for 30 s, 10 s and 3 s test utterances respectively. Furthermore, various DBF configurations have been extensively evaluated, and an optimal system is proposed.
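
    The bottleneck idea itself is simple to sketch: pass acoustic frames through a network containing one narrow hidden layer and keep that layer's activations as a compact per-frame representation. The layer sizes and random weights below are illustrative assumptions; a real DBF extractor is first trained on a phone-classification task before the bottleneck activations are reused.

```python
# Sketch of deep bottleneck feature (DBF) extraction: project acoustic frames
# through a narrow ("bottleneck") layer and keep its activations. Dimensions
# and the untrained random weights are illustrative only.
import numpy as np

rng = np.random.default_rng(0)

def mlp_layer(x, w, b):
    return np.tanh(x @ w + b)

n_input, n_hidden, n_bottleneck = 39, 128, 13  # e.g. MFCC frames in, 13-dim DBF out
w1, b1 = rng.standard_normal((n_input, n_hidden)) * 0.1, np.zeros(n_hidden)
w2, b2 = rng.standard_normal((n_hidden, n_bottleneck)) * 0.1, np.zeros(n_bottleneck)

def bottleneck_features(frames):
    """Map (n_frames, n_input) acoustic features to (n_frames, n_bottleneck) DBFs."""
    return mlp_layer(mlp_layer(frames, w1, b1), w2, b2)

frames = rng.standard_normal((100, n_input))  # 100 frames of a fake utterance
dbf = bottleneck_features(frames)
print(dbf.shape)  # (100, 13)
```

    In the paper's pipeline these per-frame DBFs are then pooled into a single i-vector per utterance before classification; that statistics-pooling step is omitted here.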

  15. Assessing spoken-language educational interpreting: Measuring up ...

    African Journals Online (AJOL)

    Assessing spoken-language educational interpreting: Measuring up and measuring right. Lenelle Foster, Adriaan Cupido. Abstract. This article, primarily, presents a critical evaluation of the development and refinement of the assessment instrument used to assess formally the spoken-language educational interpreters at ...

  16. Spoken Indian language identification: a review of features and ...

    Indian Academy of Sciences (India)

    BAKSHI AARTI

    2018-04-12

    ... sound of that language. These language-specific properties can be exploited to identify a spoken language reliably. Automatic language identification has emerged as a prominent research area in Indian language processing. People from different regions of India speak around 800 different languages.

  17. Automatic disambiguation of morphosyntax in spoken language corpora

    OpenAIRE

    Parisse , Christophe; Le Normand , Marie-Thérèse

    2000-01-01

    The use of computer tools has led to major advances in the study of spoken language corpora. One area that has shown particular progress is the study of child language development. Although it is now easy to lexically tag every word in a spoken language corpus, one still has to choose between numerous ambiguous forms, especially with languages such as French or English, where more than 70% of words are ambiguous. Computational linguistics can now provide a fully automa...

  18. Using Spoken Language to Facilitate Military Transportation Planning

    National Research Council Canada - National Science Library

    Bates, Madeleine; Ellard, Dan; Peterson, Pat; Shaked, Varda

    1991-01-01

    ... In an effort to demonstrate the relevance of SIS technology to real-world military applications, BBN has undertaken the task of providing a spoken language interface to DART, a system for military...

  19. "Visual" Cortex Responds to Spoken Language in Blind Children.

    Science.gov (United States)

    Bedny, Marina; Richardson, Hilary; Saxe, Rebecca

    2015-08-19

    Plasticity in the visual cortex of blind individuals provides a rare window into the mechanisms of cortical specialization. In the absence of visual input, occipital ("visual") brain regions respond to sound and spoken language. Here, we examined the time course and developmental mechanism of this plasticity in blind children. Nineteen blind and 40 sighted children and adolescents (4-17 years old) listened to stories and two auditory control conditions (unfamiliar foreign speech, and music). We find that "visual" cortices of young blind (but not sighted) children respond to sound. Responses to nonlanguage sounds increased between the ages of 4 and 17. By contrast, occipital responses to spoken language were maximal by age 4 and were not related to Braille learning. These findings suggest that occipital plasticity for spoken language is independent of plasticity for Braille and for sound. We conclude that in the absence of visual input, spoken language colonizes the visual system during brain development. Our findings suggest that early in life, human cortex has a remarkably broad computational capacity. The same cortical tissue can take on visual perception and language functions. Studies of plasticity provide key insights into how experience shapes the human brain. The "visual" cortex of adults who are blind from birth responds to touch, sound, and spoken language. To date, all existing studies have been conducted with adults, so little is known about the developmental trajectory of plasticity. We used fMRI to study the emergence of "visual" cortex responses to sound and spoken language in blind children and adolescents. We find that "visual" cortex responses to sound increase between 4 and 17 years of age. By contrast, responses to spoken language are present by 4 years of age and are not related to Braille-learning. These findings suggest that, early in development, human cortex can take on a strikingly wide range of functions. 

  20. Development of a spoken language identification system for South African languages

    CSIR Research Space (South Africa)

    Peché, M

    2009-12-01

    Full Text Available This article introduces the first Spoken Language Identification system developed to distinguish among all eleven of South Africa’s official languages. The PPR-LM (Parallel Phoneme Recognition followed by Language Modeling) architecture...
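
    The PPR-LM architecture named in this record can be sketched compactly: each language's phone-level n-gram model scores the phone string produced by a recognizer, and the best-scoring model wins. The toy phone bigrams and probabilities below are invented for illustration and are not from the CSIR system.

```python
# Sketch of the PPR-LM scoring step: per-language phone-bigram models score a
# decoded phone string; the language whose model scores highest is chosen.
# The bigram probabilities below are invented toy numbers.
import math

bigram_models = {
    "zul": {("b", "a"): 0.4, ("a", "n"): 0.3, ("n", "tu"): 0.3},
    "afr": {("b", "a"): 0.1, ("a", "n"): 0.1, ("n", "tu"): 0.05},
}

def score(phones, model, floor=0.01):
    """Sum of log bigram probabilities, with a floor for unseen pairs."""
    return sum(math.log(model.get(bg, floor))
               for bg in zip(phones, phones[1:]))

def identify(phones):
    return max(bigram_models, key=lambda lang: score(phones, bigram_models[lang]))

print(identify(["b", "a", "n", "tu"]))
```

    A full PPR-LM system runs several phone recognizers in parallel and fuses their language-model scores; the sketch shows only the scoring of one phone stream.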

  1. Assessing spoken-language educational interpreting: Measuring up ...

    African Journals Online (AJOL)

    Kate H

    assessment instrument used to formally assess the spoken-language educational interpreters at Stellenbosch University (SU). Research ... Is the interpreter suited to the module? Is the interpreter easier to follow? Technical: microphone technique, lag, completeness, language use, vocabulary, role, personal objectives ...

  2. IMPACT ON THE INDIGENOUS LANGUAGES SPOKEN IN NIGERIA ...

    African Journals Online (AJOL)

    This article examines the impact of the hegemony of English, as a common lingua franca, referred to as a global language, on the indigenous languages spoken in Nigeria. Since English, through the British political imperialism and because of the economic supremacy of English dominated countries, has assumed the ...

  3. Neural stages of spoken, written, and signed word processing in beginning second language learners.

    Science.gov (United States)

    Leonard, Matthew K; Ferjan Ramirez, Naja; Torres, Christina; Hatrak, Marla; Mayberry, Rachel I; Halgren, Eric

    2013-01-01

    We combined magnetoencephalography (MEG) and magnetic resonance imaging (MRI) to examine how sensory modality, language type, and language proficiency interact during two fundamental stages of word processing: (1) an early word encoding stage, and (2) a later supramodal lexico-semantic stage. Adult native English speakers who were learning American Sign Language (ASL) performed a semantic task for spoken and written English words, and ASL signs. During the early time window, written words evoked responses in left ventral occipitotemporal cortex, and spoken words in left superior temporal cortex. Signed words evoked activity in right intraparietal sulcus that was marginally greater than for written words. During the later time window, all three types of words showed significant activity in the classical left fronto-temporal language network, the first demonstration of such activity in individuals with so little second language (L2) instruction in sign. In addition, a dissociation between semantic congruity effects and overall MEG response magnitude for ASL responses suggested shallower and more effortful processing, presumably reflecting novice L2 learning. Consistent with previous research on non-dominant language processing in spoken languages, the L2 ASL learners also showed recruitment of right hemisphere and lateral occipital cortex. These results demonstrate that late lexico-semantic processing utilizes a common substrate, independent of modality, and that proficiency effects in sign language are comparable to those in spoken language.

  4. Porting a spoken language identification system to a new environment.

    CSIR Research Space (South Africa)

    Peche, M

    2008-11-01

    Full Text Available ... the carefully selected training data used to construct the system initially. The authors investigated the process of porting a Spoken Language Identification (S-LID) system to a new environment and describe methods to prepare it for more effective use...

  5. Automatic disambiguation of morphosyntax in spoken language corpora.

    Science.gov (United States)

    Parisse, C; Le Normand, M T

    2000-08-01

    The use of computer tools has led to major advances in the study of spoken language corpora. One area that has shown particular progress is the study of child language development. Although it is now easy to lexically tag every word in a spoken language corpus, one still has to choose between numerous ambiguous forms, especially with languages such as French or English, where more than 70% of words are ambiguous. Computational linguistics can now provide a fully automatic disambiguation of lexical tags. The tool presented here (POST) can tag and disambiguate a large text in a few seconds. This tool complements systems dealing with language transcription and suggests further theoretical developments in the assessment of the status of morphosyntax in spoken language corpora. The program currently works for French and English, but it can be easily adapted for use with other languages. The analysis and computation of a corpus produced by normal French children 2-4 years of age, as well as of a sample corpus produced by French SLI children, are given as examples.
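
    The general technique behind a tag disambiguator like POST can be sketched as a tiny Viterbi search: among the ambiguous tags for each word, pick the sequence that maximizes tag-transition and word-emission probabilities. The tagset, sentence, and all probabilities below are invented for illustration and are not POST's actual model.

```python
# Sketch of context-based morphosyntactic disambiguation: a miniature Viterbi
# decoder over tag bigrams. "can" is ambiguous (NOUN or VERB); context picks
# the reading. All probabilities are invented toy numbers.
import math

trans = {("DET", "NOUN"): 0.7, ("DET", "VERB"): 0.05,
         ("NOUN", "VERB"): 0.5, ("NOUN", "NOUN"): 0.2}
emit = {("the", "DET"): 1.0, ("can", "NOUN"): 0.4, ("can", "VERB"): 0.3,
        ("rusts", "VERB"): 0.6, ("rusts", "NOUN"): 0.1}

def viterbi(words, tags=("DET", "NOUN", "VERB"), floor=1e-4):
    """Return the highest-probability tag sequence under trans/emit."""
    paths = {t: (math.log(emit.get((words[0], t), floor)), [t]) for t in tags}
    for w in words[1:]:
        new = {}
        for t in tags:
            best_prev = max(paths, key=lambda p: paths[p][0] +
                            math.log(trans.get((p, t), floor)))
            lp = (paths[best_prev][0] +
                  math.log(trans.get((best_prev, t), floor)) +
                  math.log(emit.get((w, t), floor)))
            new[t] = (lp, paths[best_prev][1] + [t])
        paths = new
    return max(paths.values())[1]

print(viterbi(["the", "can", "rusts"]))
```

    Here the determiner context resolves "can" to NOUN, illustrating how bigram context disambiguates the roughly 70% of ambiguous word forms the abstract mentions.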

  6. The Child's Path to Spoken Language.

    Science.gov (United States)

    Locke, John L.

    A major synthesis of the latest research on early language acquisition, this book explores what gives infants the remarkable capacity to progress from babbling to meaningful sentences, and what inclines a child to speak. The book examines the neurological, perceptual, social, and linguistic aspects of language acquisition in young children, from…

  7. Prosodic Parallelism – comparing spoken and written language

    Directory of Open Access Journals (Sweden)

    Richard Wiese

    2016-10-01

    Full Text Available The Prosodic Parallelism hypothesis claims that adjacent prosodic categories prefer identical branching of internal adjacent constituents. According to Wiese and Speyer (2015), this preference implies that feet contained in the same phonological phrase display either binary or unary branching, but not different types of branching. The seemingly free schwa-zero alternations at the end of some words in German make it possible to test this hypothesis. The hypothesis was successfully tested in a corpus study using large-scale bodies of written German. As some open questions remain, and as it is unclear whether Prosodic Parallelism also holds in the spoken modality, the present study extends this inquiry to spoken German. As in the previous study, the results of a corpus analysis recruiting a variety of linguistic constructions are presented. The Prosodic Parallelism hypothesis can be demonstrated to be valid for spoken German as well as for written German. The paper thus contributes to the question of whether prosodic preferences are similar between the spoken and written modes of a language. Some consequences of the results for the production of language are discussed.

  8. Phonotactic spoken language identification with limited training data

    CSIR Research Space (South Africa)

    Peche, M

    2007-08-01

    Full Text Available ... rates when no Japanese acoustic models are constructed. An increasing amount of Japanese training data is used to train the language classifier of an English-only (E), an English-French (EF), and an English-French-Portuguese PPR system. ... Because of their role as world languages that are widely spoken in Africa, our initial LID system was designed to distinguish between English, French and Portuguese. We therefore trained phone recognizers and language...

  9. Using Language Sample Analysis to Assess Spoken Language Production in Adolescents

    Science.gov (United States)

    Miller, Jon F.; Andriacchi, Karen; Nockerts, Ann

    2016-01-01

    Purpose: This tutorial discusses the importance of language sample analysis and how Systematic Analysis of Language Transcripts (SALT) software can be used to simplify the process and effectively assess the spoken language production of adolescents. Method: Over the past 30 years, thousands of language samples have been collected from typical…

  10. Spoken English Language Development Among Native Signing Children With Cochlear Implants

    OpenAIRE

    Davidson, Kathryn; Lillo-Martin, Diane; Chen Pichler, Deborah

    2013-01-01

    Bilingualism is common throughout the world, and bilingual children regularly develop into fluently bilingual adults. In contrast, children with cochlear implants (CIs) are frequently encouraged to focus on a spoken language to the exclusion of sign language. Here, we investigate the spoken English language skills of 5 children with CIs who also have deaf signing parents, and so receive exposure to a full natural sign language (American Sign Language, ASL) from birth, in addition to spoken En...

  11. Spoken language interface for a network management system

    Science.gov (United States)

    Remington, Robert J.

    1999-11-01

    Leaders within the Information Technology (IT) industry are expressing a general concern that the products used to deliver and manage today's communications network capabilities require far too much effort to learn and to use, even by highly skilled and increasingly scarce support personnel. The usability of network management systems must be significantly improved if they are to deliver the performance and quality of service needed to meet the ever-increasing demand for new Internet-based information and services. Fortunately, recent advances in spoken language (SL) interface technologies show promise for significantly improving the usability of most interactive IT applications, including network management systems. The emerging SL interfaces will allow users to communicate with IT applications through words and phrases -- our most familiar form of everyday communication. Recent advancements in SL technologies have resulted in new commercial products that are being operationally deployed at an increasing rate. The present paper describes a project aimed at the application of new SL interface technology for improving the usability of an advanced network management system. It describes several SL interface features that are being incorporated within an existing system with a modern graphical user interface (GUI), including 3-D visualization of network topology and network performance data. The rationale for using these SL interface features to augment existing user interfaces is presented, along with selected task scenarios to provide insight into how an SL interface will simplify the operator's task and enhance overall system usability.

  12. Spoken Language Understanding Systems for Extracting Semantic Information from Speech

    CERN Document Server

    Tur, Gokhan

    2011-01-01

    Spoken language understanding (SLU) is an emerging field in between speech and language processing, investigating human/machine and human/human communication by leveraging technologies from signal processing, pattern recognition, machine learning and artificial intelligence. SLU systems are designed to extract the meaning from speech utterances and its applications are vast, from voice search in mobile devices to meeting summarization, attracting interest from both commercial and academic sectors. Both human/machine and human/human communications can benefit from the application of SLU, using...

  13. Spoken Spanish Language Development at the High School Level: A Mixed-Methods Study

    Science.gov (United States)

    Moeller, Aleidine J.; Theiler, Janine

    2014-01-01

    Communicative approaches to teaching language have emphasized the centrality of oral proficiency in the language acquisition process, but research investigating oral proficiency has been surprisingly limited, yielding an incomplete understanding of spoken language development. This study investigated the development of spoken language at the high…

  14. Error detection in spoken human-machine interaction

    NARCIS (Netherlands)

    Krahmer, E.; Swerts, M.; Theune, Mariet; Weegels, M.

    Given the state of the art of current language and speech technology, errors are unavoidable in present-day spoken dialogue systems. Therefore, one of the main concerns in dialogue design is how to decide whether or not the system has understood the user correctly. In human-human communication, ...

  15. Asian/Pacific Islander Languages Spoken by English Learners (ELs). Fast Facts

    Science.gov (United States)

    Office of English Language Acquisition, US Department of Education, 2015

    2015-01-01

    The Office of English Language Acquisition (OELA) has synthesized key data on English learners (ELs) into two-page PDF sheets, by topic, with graphics, plus key contacts. The topics for this report on Asian/Pacific Islander languages spoken by English Learners (ELs) include: (1) Top 10 Most Common Asian/Pacific Islander Languages Spoken Among ELs:…

  16. Spoken Language Production in Young Adults: Examining Syntactic Complexity.

    Science.gov (United States)

    Nippold, Marilyn A; Frantz-Kaspar, Megan W; Vigeland, Laura M

    2017-05-24

    In this study, we examined syntactic complexity in the spoken language samples of young adults. Its purpose was to contribute to the expanding knowledge base in later language development and to begin building a normative database of language samples that potentially could be used to evaluate young adults with known or suspected language impairment. Forty adults (mean age = 22 years, 10 months) with typical language development participated in an interview that consisted of 3 speaking tasks: a general conversation about common, everyday topics; a narrative retelling task that involved fables; and a question-and-answer, critical-thinking task about the fables. Each speaker's interview was audio-recorded, transcribed, broken into communication units, coded for main and subordinate clauses, entered into Systematic Analysis of Language Transcripts (Miller, Iglesias, & Nockerts, 2004), and analyzed for mean length of communication unit and clausal density. Both the narrative and critical-thinking tasks elicited significantly greater syntactic complexity than the conversational task. It was also found that syntactic complexity was significantly greater during the narrative task than the critical-thinking task. Syntactic complexity was best revealed by a narrative task that involved fables. The study offers benchmarks for language development during early adulthood.
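
    The two syntactic-complexity measures named in this abstract are straightforward to compute once a transcript has been segmented into communication units (C-units) and coded for clauses: mean length of C-unit is words per unit, and clausal density is clauses per unit. The sample units and counts below are invented for illustration.

```python
# Sketch of the two complexity measures from the abstract: mean length of
# communication unit (words per C-unit) and clausal density (clauses per
# C-unit). The coded sample data are invented.

# Each C-unit: (word count, clause count = main + subordinate clauses).
c_units = [
    (7, 1),   # a simple one-clause utterance
    (12, 2),  # one main clause + one subordinate clause
    (9, 2),
    (15, 3),
]

def mean_length_of_cunit(units):
    return sum(words for words, _ in units) / len(units)

def clausal_density(units):
    return sum(clauses for _, clauses in units) / len(units)

print(mean_length_of_cunit(c_units))  # 10.75
print(clausal_density(c_units))       # 2.0
```

    In the study these values are computed over full transcribed interviews (via SALT); the sketch shows only the arithmetic applied to the coded units.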

  17. Native Language Spoken as a Risk Marker for Tooth Decay.

    Science.gov (United States)

    Carson, J; Walker, L A; Sanders, B J; Jones, J E; Weddell, J A; Tomlin, A M

    2015-01-01

    The purpose of this study was to assess dmft, the number of decayed, missing (due to caries), and/or filled primary teeth, of English-speaking and non-English speaking patients of a hospital based pediatric dental clinic under the age of 72 months to determine if native language is a risk marker for tooth decay. Records from an outpatient dental clinic which met the inclusion criteria were reviewed. Patient demographics and dmft score were recorded, and the patients were separated into three groups by the native language spoken by their parents: English, Spanish and all other languages. A total of 419 charts were assessed: 253 English-speaking, 126 Spanish-speaking, and 40 other native languages. After accounting for patient characteristics, dmft was significantly higher for the other language group than for the English-speaking group (p < 0.05). Those patients under 72 months of age whose parents' native language is not English or Spanish, have the highest risk for increased dmft when compared to English and Spanish speaking patients. Providers should consider taking additional time to educate patients and their parents, in their native language, on the importance of routine dental care and oral hygiene.

  18. Pointing and Reference in Sign Language and Spoken Language: Anchoring vs. Identifying

    Science.gov (United States)

    Barberà, Gemma; Zwets, Martine

    2013-01-01

    In both signed and spoken languages, pointing serves to direct an addressee's attention to a particular entity. This entity may be either present or absent in the physical context of the conversation. In this article we focus on pointing directed to nonspeaker/nonaddressee referents in Sign Language of the Netherlands (Nederlandse Gebarentaal,…

  19. The Listening and Spoken Language Data Repository: Design and Project Overview

    Science.gov (United States)

    Bradham, Tamala S.; Fonnesbeck, Christopher; Toll, Alice; Hecht, Barbara F.

    2018-01-01

    Purpose: The purpose of the Listening and Spoken Language Data Repository (LSL-DR) was to address a critical need for a systemwide outcome data-monitoring program for the development of listening and spoken language skills in highly specialized educational programs for children with hearing loss highlighted in Goal 3b of the 2007 Joint Committee…

  20. Give and take: syntactic priming during spoken language comprehension.

    Science.gov (United States)

    Thothathiri, Malathi; Snedeker, Jesse

    2008-07-01

    Syntactic priming during language production is pervasive and well-studied. Hearing, reading, speaking or writing a sentence with a given structure increases the probability of subsequently producing the same structure, regardless of whether the prime and target share lexical content. In contrast, syntactic priming during comprehension has proven more elusive, fueling claims that comprehension is less dependent on general syntactic representations and more dependent on lexical knowledge. In three experiments we explored syntactic priming during spoken language comprehension. Participants acted out double-object (DO) or prepositional-object (PO) dative sentences while their eye movements were recorded. Prime sentences used different verbs and nouns than the target sentences. In target sentences, the onset of the direct-object noun was consistent with both an animate recipient and an inanimate theme, creating a temporary ambiguity in the argument structure of the verb (DO e.g., Show the horse the book; PO e.g., Show the horn to the dog). We measured the difference in looks to the potential recipient and the potential theme during the ambiguous interval. In all experiments, participants who heard DO primes showed a greater preference for the recipient over the theme than those who heard PO primes, demonstrating across-verb priming during online language comprehension. These results accord with priming found in production studies, indicating a role for abstract structural information during comprehension as well as production.
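    The preference measure described above (looks to the potential recipient minus looks to the potential theme during the ambiguous interval) can be sketched roughly as follows; the function and the sample data are illustrative, not taken from the study:

```python
def looks_preference(fixations, window):
    """Looks-preference score: proportion of gaze samples on the potential
    recipient minus proportion on the potential theme, within a time window.

    fixations: iterable of (time_ms, region) samples,
               region in {"recipient", "theme", "other"}.
    window:    (start_ms, end_ms) of the ambiguous interval.
    """
    start, end = window
    in_window = [region for t, region in fixations if start <= t < end]
    if not in_window:
        return 0.0
    n = len(in_window)
    return in_window.count("recipient") / n - in_window.count("theme") / n

# Toy gaze samples every 50 ms: mostly recipient looks early, theme looks late.
do_prime_trial = ([(t, "recipient") for t in range(300, 500, 50)]
                  + [(t, "theme") for t in range(500, 600, 50)])
score = looks_preference(do_prime_trial, (300, 600))  # 4/6 - 2/6 = 1/3
```

    A positive score on a trial indicates a recipient preference (as after DO primes); comparing mean scores across prime conditions gives the across-verb priming effect.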

  1. Foreign Language Tutoring in Oral Conversations Using Spoken Dialog Systems

    Science.gov (United States)

    Lee, Sungjin; Noh, Hyungjong; Lee, Jonghoon; Lee, Kyusong; Lee, Gary Geunbae

    Although there have been enormous investments in English education all around the world, English instruction styles have changed little. Considering the shortcomings of the current teaching-learning methodology, we have been investigating advanced computer-assisted language learning (CALL) systems. This paper aims at summarizing a set of POSTECH approaches including theories, technologies, systems, and field studies and providing relevant pointers. On top of state-of-the-art spoken dialog system technologies, a variety of adaptations have been applied to overcome problems caused by the numerous errors and variations naturally produced by non-native speakers. Furthermore, a number of methods have been developed for generating educational feedback that helps learners become proficient. Integrating these efforts resulted in intelligent educational robots, Mero and Engkey, and a virtual 3D language learning game, Pomy. To verify the effects of our approaches on students' communicative abilities, we conducted a field study at an elementary school in Korea. The results showed that our CALL approaches can be enjoyable and fruitful activities for students. Although the results of this study bring us a step closer to understanding computer-based education, more studies are needed to consolidate the findings.

  2. Auditory and verbal memory predictors of spoken language skills in children with cochlear implants

    NARCIS (Netherlands)

    Hoog, B.E. de; Langereis, M.C.; Weerdenburg, M. van; Keuning, J.; Knoors, H.; Verhoeven, L.

    2016-01-01

    BACKGROUND: Large variability in individual spoken language outcomes remains a persistent finding in the group of children with cochlear implants (CIs), particularly in their grammatical development. AIMS: In the present study, we examined the extent of delay in lexical and morphosyntactic spoken

  3. Auditory and verbal memory predictors of spoken language skills in children with cochlear implants

    NARCIS (Netherlands)

    Hoog, B.E. de; Langereis, M.C.; Weerdenburg, M.W.C. van; Keuning, J.; Knoors, H.E.T.; Verhoeven, L.T.W.

    2016-01-01

    Background: Large variability in individual spoken language outcomes remains a persistent finding in the group of children with cochlear implants (CIs), particularly in their grammatical development. Aims: In the present study, we examined the extent of delay in lexical and morphosyntactic spoken

  4. Sign Language and Spoken Language for Children With Hearing Loss: A Systematic Review.

    Science.gov (United States)

    Fitzpatrick, Elizabeth M; Hamel, Candyce; Stevens, Adrienne; Pratt, Misty; Moher, David; Doucet, Suzanne P; Neuss, Deirdre; Bernstein, Anita; Na, Eunjung

    2016-01-01

    Permanent hearing loss affects 1 to 3 per 1000 children and interferes with typical communication development. Early detection through newborn hearing screening and hearing technology provide most children with the option of spoken language acquisition. However, no consensus exists on optimal interventions for spoken language development. To conduct a systematic review of the effectiveness of early sign and oral language intervention compared with oral language intervention only for children with permanent hearing loss. An a priori protocol was developed. Electronic databases (eg, Medline, Embase, CINAHL) from 1995 to June 2013 and gray literature sources were searched. Studies in English and French were included. Two reviewers screened potentially relevant articles. Outcomes of interest were measures of auditory, vocabulary, language, and speech production skills. All data collection and risk of bias assessments were completed and then verified by a second person. Grades of Recommendation, Assessment, Development, and Evaluation (GRADE) was used to judge the strength of evidence. Eleven cohort studies met inclusion criteria, of which 8 included only children with severe to profound hearing loss with cochlear implants. Language development was the most frequently reported outcome. Other reported outcomes included speech and speech perception. Several measures and metrics were reported across studies, and descriptions of interventions were sometimes unclear. Very limited, and hence insufficient, high-quality evidence exists to determine whether sign language in combination with oral language is more effective than oral language therapy alone. More research is needed to supplement the evidence base. Copyright © 2016 by the American Academy of Pediatrics.

  5. Symbolic gestures and spoken language are processed by a common neural system.

    Science.gov (United States)

    Xu, Jiang; Gannon, Patrick J; Emmorey, Karen; Smith, Jason F; Braun, Allen R

    2009-12-08

    Symbolic gestures, such as pantomimes that signify actions (e.g., threading a needle) or emblems that facilitate social transactions (e.g., finger to lips indicating "be quiet"), play an important role in human communication. They are autonomous, can fully take the place of words, and function as complete utterances in their own right. The relationship between these gestures and spoken language remains unclear. We used functional MRI to investigate whether these two forms of communication are processed by the same system in the human brain. Responses to symbolic gestures, to their spoken glosses (expressing the gestures' meaning in English), and to visually and acoustically matched control stimuli were compared in a randomized block design. General Linear Models (GLM) contrasts identified shared and unique activations and functional connectivity analyses delineated regional interactions associated with each condition. Results support a model in which bilateral modality-specific areas in superior and inferior temporal cortices extract salient features from vocal-auditory and gestural-visual stimuli respectively. However, both classes of stimuli activate a common, left-lateralized network of inferior frontal and posterior temporal regions in which symbolic gestures and spoken words may be mapped onto common, corresponding conceptual representations. We suggest that these anterior and posterior perisylvian areas, identified since the mid-19th century as the core of the brain's language system, are not in fact committed to language processing, but may function as a modality-independent semiotic system that plays a broader role in human communication, linking meaning with symbols whether these are words, gestures, images, sounds, or objects.
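    A General Linear Model contrast of the kind mentioned above can be illustrated with a toy least-squares fit; the block design, regressor names, and simulated voxel below are invented for illustration and are not the study's actual analysis:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy block design: 120 volumes cycling through 10 gesture, 10 speech,
# and 10 rest volumes, plus an intercept column.
n = 120
phase = np.arange(n) % 30
gesture = (phase < 10).astype(float)
speech = ((phase >= 10) & (phase < 20)).astype(float)
X = np.column_stack([gesture, speech, np.ones(n)])

# Simulated voxel that responds strongly to gesture, weakly to speech.
y = 2.0 * gesture + 0.5 * speech + rng.normal(0.0, 0.1, n)

# Ordinary least-squares betas and a [gesture - speech] contrast.
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
effect = np.array([1.0, -1.0, 0.0]) @ beta  # ~1.5 for this voxel
```

    A shared-activation contrast (both conditions vs. rest) would simply use a different contrast vector, e.g. [0.5, 0.5, 0.0]; real fMRI analyses additionally convolve regressors with a hemodynamic response function.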

  6. Semantic Fluency in Deaf Children Who Use Spoken and Signed Language in Comparison with Hearing Peers

    Science.gov (United States)

    Marshall, C. R.; Jones, A.; Fastelli, A.; Atkinson, J.; Botting, N.; Morgan, G.

    2018-01-01

    Background: Deafness has an adverse impact on children's ability to acquire spoken languages. Signed languages offer a more accessible input for deaf children, but because the vast majority are born to hearing parents who do not sign, their early exposure to sign language is limited. Deaf children as a whole are therefore at high risk of language…

  7. Spoken Word Recognition in Adolescents with Autism Spectrum Disorders and Specific Language Impairment

    Science.gov (United States)

    Loucas, Tom; Riches, Nick; Baird, Gillian; Pickles, Andrew; Simonoff, Emily; Chandler, Susie; Charman, Tony

    2013-01-01

    Spoken word recognition, during gating, appears intact in specific language impairment (SLI). This study used gating to investigate the process in adolescents with autism spectrum disorders plus language impairment (ALI). Adolescents with ALI, SLI, and typical language development (TLD), matched on nonverbal IQ listened to gated words that varied…

  8. Improving Spoken Language Outcomes for Children With Hearing Loss: Data-driven Instruction.

    Science.gov (United States)

    Douglas, Michael

    2016-02-01

    To assess the effects of data-driven instruction (DDI) on spoken language outcomes of children with cochlear implants and hearing aids. Retrospective, matched-pairs comparison of post-treatment speech/language data of children who did and did not receive DDI. Private, spoken-language preschool for children with hearing loss. Eleven matched pairs of children with cochlear implants who attended the same spoken language preschool. Groups were matched for age of hearing device fitting, time in the program, degree of predevice fitting hearing loss, sex, and age at testing. Daily informal language samples were collected and analyzed over a 2-year period, per preschool protocol. Annual informal and formal spoken language assessments in articulation, vocabulary, and omnibus language were administered at the end of three time intervals: baseline, end of year one, and end of year two. The primary outcome measures were total raw score performance of spontaneous utterance sentence types and syntax element use as measured by the Teacher Assessment of Spoken Language (TASL). In addition, standardized assessments (the Clinical Evaluation of Language Fundamentals--Preschool Version 2 (CELF-P2), the Expressive One-Word Picture Vocabulary Test (EOWPVT), the Receptive One-Word Picture Vocabulary Test (ROWPVT), and the Goldman-Fristoe Test of Articulation 2 (GFTA2)) were also administered and compared with the control group. The DDI group demonstrated significantly higher raw scores on the TASL each year of the study. The DDI group also achieved statistically significantly higher scores for total language on the CELF-P2 and expressive vocabulary on the EOWPVT, but not for articulation or receptive vocabulary. Post-hoc assessment revealed that 78% of the students in the DDI group achieved scores in the average range compared with 59% in the control group. The preliminary results of this study support further investigation of whether DDI can consistently…

  9. Resting-state low-frequency fluctuations reflect individual differences in spoken language learning.

    Science.gov (United States)

    Deng, Zhizhou; Chandrasekaran, Bharath; Wang, Suiping; Wong, Patrick C M

    2016-03-01

    A major challenge in language learning studies is to identify objective, pre-training predictors of success. Variation in the low-frequency fluctuations (LFFs) of spontaneous brain activity measured by resting-state functional magnetic resonance imaging (RS-fMRI) has been found to reflect individual differences in cognitive measures. In the present study, we aimed to investigate the extent to which initial spontaneous brain activity is related to individual differences in spoken language learning. We acquired RS-fMRI data and subsequently trained participants on a sound-to-word learning paradigm in which they learned to use foreign pitch patterns (from Mandarin Chinese) to signal word meaning. We performed amplitude of spontaneous low-frequency fluctuation (ALFF) analysis, graph theory-based analysis, and independent component analysis (ICA) to identify functional components of the LFFs in the resting-state. First, we examined the ALFF as a regional measure and showed that regional ALFFs in the left superior temporal gyrus were positively correlated with learning performance, whereas ALFFs in the default mode network (DMN) regions were negatively correlated with learning performance. Furthermore, the graph theory-based analysis indicated that the degree and local efficiency of the left superior temporal gyrus were positively correlated with learning performance. Finally, the default mode network and several task-positive resting-state networks (RSNs) were identified via the ICA. The "competition" (i.e., negative correlation) between the DMN and the dorsal attention network was negatively correlated with learning performance. Our results demonstrate that a) spontaneous brain activity can predict future language learning outcome without prior hypotheses (e.g., selection of regions of interest--ROIs) and b) both regional dynamics and network-level interactions in the resting brain can account for individual differences in future spoken language learning success
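    The ALFF measure referred to above (average spectral amplitude of a voxel's time series in the 0.01-0.08 Hz band) can be sketched as follows; this is a simplified illustration, not the authors' pipeline, which would also involve preprocessing such as detrending and whole-brain standardization:

```python
import numpy as np

def alff(ts, tr, band=(0.01, 0.08)):
    """Mean FFT amplitude of a time series within a low-frequency band.

    ts: 1-D voxel time series; tr: repetition time in seconds.
    Simplified sketch: a real pipeline would detrend, filter, and
    standardize the resulting ALFF map across the brain.
    """
    ts = np.asarray(ts, dtype=float)
    ts = ts - ts.mean()
    amp = np.abs(np.fft.rfft(ts)) / len(ts)
    freqs = np.fft.rfftfreq(len(ts), d=tr)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    return amp[in_band].mean()

# With TR = 2 s, a 0.05 Hz oscillation falls inside the band and a 0.2 Hz
# oscillation falls outside it, so only the former contributes to ALFF.
tr = 2.0
t = np.arange(240) * tr
slow, fast = np.sin(2 * np.pi * 0.05 * t), np.sin(2 * np.pi * 0.2 * t)
```

    In the study design, a per-region value like this would then be correlated with each participant's later learning performance.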

  10. Resting-state low-frequency fluctuations reflect individual differences in spoken language learning

    Science.gov (United States)

    Deng, Zhizhou; Chandrasekaran, Bharath; Wang, Suiping; Wong, Patrick C.M.

    2016-01-01

    A major challenge in language learning studies is to identify objective, pre-training predictors of success. Variation in the low-frequency fluctuations (LFFs) of spontaneous brain activity measured by resting-state functional magnetic resonance imaging (RS-fMRI) has been found to reflect individual differences in cognitive measures. In the present study, we aimed to investigate the extent to which initial spontaneous brain activity is related to individual differences in spoken language learning. We acquired RS-fMRI data and subsequently trained participants on a sound-to-word learning paradigm in which they learned to use foreign pitch patterns (from Mandarin Chinese) to signal word meaning. We performed amplitude of spontaneous low-frequency fluctuation (ALFF) analysis, graph theory-based analysis, and independent component analysis (ICA) to identify functional components of the LFFs in the resting-state. First, we examined the ALFF as a regional measure and showed that regional ALFFs in the left superior temporal gyrus were positively correlated with learning performance, whereas ALFFs in the default mode network (DMN) regions were negatively correlated with learning performance. Furthermore, the graph theory-based analysis indicated that the degree and local efficiency of the left superior temporal gyrus were positively correlated with learning performance. Finally, the default mode network and several task-positive resting-state networks (RSNs) were identified via the ICA. The “competition” (i.e., negative correlation) between the DMN and the dorsal attention network was negatively correlated with learning performance. Our results demonstrate that a) spontaneous brain activity can predict future language learning outcome without prior hypotheses (e.g., selection of regions of interest – ROIs) and b) both regional dynamics and network-level interactions in the resting brain can account for individual differences in future spoken language learning success

  11. Spoken Dialogue Systems

    CERN Document Server

    Jokinen, Kristiina

    2009-01-01

    Considerable progress has been made in recent years in the development of dialogue systems that support robust and efficient human-machine interaction using spoken language. Spoken dialogue technology allows various interactive applications to be built and used for practical purposes, and research focuses on issues that aim to increase the system's communicative competence by including aspects of error correction, cooperation, multimodality, and adaptation in context. This book gives a comprehensive view of state-of-the-art techniques that are used to build spoken dialogue systems. It provides

  12. Auditory and verbal memory predictors of spoken language skills in children with cochlear implants.

    Science.gov (United States)

    de Hoog, Brigitte E; Langereis, Margreet C; van Weerdenburg, Marjolijn; Keuning, Jos; Knoors, Harry; Verhoeven, Ludo

    2016-10-01

    Large variability in individual spoken language outcomes remains a persistent finding in the group of children with cochlear implants (CIs), particularly in their grammatical development. In the present study, we examined the extent of delay in lexical and morphosyntactic spoken language levels of children with CIs as compared to those of a normative sample of age-matched children with normal hearing. Furthermore, the predictive value of auditory and verbal memory factors in the spoken language performance of implanted children was analyzed. Thirty-nine profoundly deaf children with CIs were assessed using a test battery including measures of lexical, grammatical, auditory and verbal memory tests. Furthermore, child-related demographic characteristics were taken into account. The majority of the children with CIs did not reach age-equivalent lexical and morphosyntactic language skills. Multiple linear regression analyses revealed that lexical spoken language performance in children with CIs was best predicted by age at testing, phoneme perception, and auditory word closure. The morphosyntactic language outcomes of the CI group were best predicted by lexicon, auditory word closure, and auditory memory for words. Qualitatively good speech perception skills appear to be crucial for lexical and grammatical development in children with CIs. Furthermore, strongly developed vocabulary skills and verbal memory abilities predict morphosyntactic language skills. Copyright © 2016 Elsevier Ltd. All rights reserved.

  13. Delayed Anticipatory Spoken Language Processing in Adults with Dyslexia—Evidence from Eye-tracking.

    Science.gov (United States)

    Huettig, Falk; Brouwer, Susanne

    2015-05-01

    It is now well established that anticipation of upcoming input is a key characteristic of spoken language comprehension. It has also frequently been observed that literacy influences spoken language processing. Here, we investigated whether anticipatory spoken language processing is related to individuals' word reading abilities. Dutch adults with dyslexia and a control group participated in two eye-tracking experiments. Experiment 1 was conducted to assess whether adults with dyslexia show the typical language-mediated eye gaze patterns. Eye movements of both adults with and without dyslexia closely replicated earlier research: spoken language is used to direct attention to relevant objects in the environment in a closely time-locked manner. In Experiment 2, participants received instructions (e.g., 'Kijk naar de(COM) afgebeelde piano(COM)', look at the displayed piano) while viewing four objects. Articles (Dutch 'het' or 'de') were gender marked such that the article agreed in gender only with the target, and thus, participants could use gender information from the article to predict the target object. The adults with dyslexia anticipated the target objects but much later than the controls. Moreover, participants' word reading scores correlated positively with their anticipatory eye movements. We conclude by discussing the mechanisms by which reading abilities may influence predictive language processing. Copyright © 2015 John Wiley & Sons, Ltd.

  14. Intervention Effects on Spoken-Language Outcomes for Children with Autism: A Systematic Review and Meta-Analysis

    Science.gov (United States)

    Hampton, L. H.; Kaiser, A. P.

    2016-01-01

    Background: Although spoken-language deficits are not core to an autism spectrum disorder (ASD) diagnosis, many children with ASD do present with delays in this area. Previous meta-analyses have assessed the effects of intervention on reducing autism symptomatology, but have not determined if intervention improves spoken language. This analysis…

  15. What Comes First, What Comes Next: Information Packaging in Written and Spoken Language

    Directory of Open Access Journals (Sweden)

    Vladislav Smolka

    2017-07-01

    The paper explores similarities and differences in the strategies of structuring information at sentence level in spoken and written language, respectively. In particular, it is concerned with the position of the rheme in the sentence in the two different modalities of language, and with the application and correlation of the end-focus and the end-weight principles. The assumption is that while there is a general tendency in both written and spoken language to place the focus in or close to the final position, owing to the limitations imposed by short-term memory capacity (and possibly by other factors), for the sake of easy processability it may occasionally be more felicitous in spoken language to place the rhematic element in the initial position or at least close to the beginning of the sentence. The paper aims to identify differences in the function of selected grammatical structures in written and spoken language, respectively, and to point out circumstances under which initial focus is a convenient alternative to the usual end-focus principle.

  16. THE IMPLEMENTATION OF COMMUNICATIVE LANGUAGE TEACHING (CLT) TO TEACH SPOKEN RECOUNTS IN SENIOR HIGH SCHOOL

    Directory of Open Access Journals (Sweden)

    Eri Rusnawati

    2016-10-01

    The purpose of this study was to describe the implementation of the Communicative Language Teaching (CLT) method for teaching spoken recounts. The study examined qualitative data, describing phenomena occurring in the classroom. The data were the students' behaviour and responses during spoken recount lessons taught with the CLT method. The subjects were the 34 students of class X at SMA Negeri 1 Kuaro. Observations and interviews were conducted to collect data on teaching spoken recounts through three activities (presentation, role-play, and carrying out procedures). Among the findings was that CLT improved the students' speaking ability in recount lessons. Based on the improvement charts, it was concluded that the students' grammar, vocabulary, pronunciation, fluency, and performance all improved, meaning that their spoken recount performance increased. Had the presentation been placed at the end of the sequence of activities, the students' spoken recount performance would have been even better. In conclusion, the implementation of the CLT method and its three practices contributed to improving the students' speaking ability in recount lessons, and moreover led them to build the confidence to construct meaningful communication. Keywords: Communicative Language Teaching (CLT), recount, speaking, student responses

  17. Expected Test Scores for Preschoolers with a Cochlear Implant Who Use Spoken Language

    Science.gov (United States)

    Nicholas, Johanna G.; Geers, Ann E.

    2008-01-01

    Purpose: The major purpose of this study was to provide information about expected spoken language skills of preschool-age children who are deaf and who use a cochlear implant. A goal was to provide "benchmarks" against which those skills could be compared, for a given age at implantation. We also examined whether parent-completed…

  18. Research on Spoken Language Processing. Progress Report No. 21 (1996-1997).

    Science.gov (United States)

    Pisoni, David B.

    This 21st annual progress report summarizes research activities on speech perception and spoken language processing carried out in the Speech Research Laboratory, Department of Psychology, Indiana University in Bloomington. As with previous reports, the goal is to summarize accomplishments during 1996 and 1997 and make them readily available. Some…

  19. A Spoken-Language Intervention for School-Aged Boys with Fragile X Syndrome

    Science.gov (United States)

    McDuffie, Andrea; Machalicek, Wendy; Bullard, Lauren; Nelson, Sarah; Mello, Melissa; Tempero-Feigles, Robyn; Castignetti, Nancy; Abbeduto, Leonard

    2016-01-01

    Using a single case design, a parent-mediated spoken-language intervention was delivered to three mothers and their school-aged sons with fragile X syndrome, the leading inherited cause of intellectual disability. The intervention was embedded in the context of shared storytelling using wordless picture books and targeted three empirically derived…

  20. Developing and Testing EVALOE: A Tool for Assessing Spoken Language Teaching and Learning in the Classroom

    Science.gov (United States)

    Gràcia, Marta; Vega, Fàtima; Galván-Bovaira, Maria José

    2015-01-01

    Broadly speaking, the teaching of spoken language in Spanish schools has not been approached in a systematic way. Changes in school practices are needed in order to allow all children to become competent speakers and to understand and construct oral texts that are appropriate in different contexts and for different audiences both inside and…

  1. Interaction in Spoken Word Recognition Models: Feedback Helps

    Science.gov (United States)

    Magnuson, James S.; Mirman, Daniel; Luthra, Sahil; Strauss, Ted; Harris, Harlan D.

    2018-01-01

    Human perception, cognition, and action require fast integration of bottom-up signals with top-down knowledge and context. A key theoretical perspective in cognitive science is the interactive activation hypothesis: forward and backward flow in bidirectionally connected neural networks allows humans and other biological systems to approximate optimal integration of bottom-up and top-down information under real-world constraints. An alternative view is that online feedback is neither necessary nor helpful; purely feedforward alternatives can be constructed for any feedback system, and online feedback could not improve processing and would preclude veridical perception. In the domain of spoken word recognition, the latter view was apparently supported by simulations using the interactive activation model, TRACE, with and without feedback: as many words were recognized more quickly without feedback as with it. However, these simulations used only a small set of words and did not address a primary motivation for interaction: making a model robust in noise. We conducted simulations using hundreds of words and found that the majority were recognized more quickly with feedback than without. More importantly, as we added noise to inputs, accuracy and recognition times were better with feedback than without. We follow these simulations with a critical review of recent arguments that online feedback in interactive activation models like TRACE is distinct from other potentially helpful forms of feedback. We conclude that in addition to providing the benefits demonstrated in our simulations, online feedback provides a plausible means of implementing putatively distinct forms of feedback, supporting the interactive activation hypothesis. PMID:29666593
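    The interactive activation idea under discussion can be illustrated with a deliberately tiny sketch (not TRACE itself: the lexicon, parameters, and position-free phoneme coding below are all invented for brevity):

```python
import numpy as np

# A phoneme layer feeds a word layer; optional word-to-phoneme feedback
# re-injects lexical knowledge on each update cycle.
PHONEMES = list("abdkot")
LEXICON = {"bat": "bat", "bad": "bad", "dot": "dot", "cat": "kat"}

def recognize(heard, feedback=True, steps=30, rate=0.2):
    p_idx = {p: i for i, p in enumerate(PHONEMES)}
    W = np.zeros((len(LEXICON), len(PHONEMES)))  # word <- phoneme weights
    for w, phones in enumerate(LEXICON.values()):
        for ph in phones:
            W[w, p_idx[ph]] = 1.0 / len(phones)
    phon = np.zeros(len(PHONEMES))
    words = np.zeros(len(LEXICON))
    bottom_up = np.zeros(len(PHONEMES))
    for ph, strength in heard:  # noisy input: (phoneme, strength in 0..1)
        bottom_up[p_idx[ph]] += strength
    for _ in range(steps):
        phon += rate * (bottom_up - phon)
        if feedback:            # top-down lexical support for phonemes
            phon += rate * 0.5 * (W.T @ words)
        phon = np.clip(phon, 0.0, 1.0)
        words += rate * (W @ phon - words)
        words = np.clip(words, 0.0, 1.0)
    return dict(zip(LEXICON, words))

# Degraded input for "bat": the vowel is weak, yet the intended word
# still dominates, and feedback raises its activation further.
acts = recognize([("b", 1.0), ("a", 0.3), ("t", 1.0)])
best = max(acts, key=acts.get)
```

    In this toy setting, turning `feedback` off never increases the target word's activation, which is the qualitative pattern the simulations above probe at scale and under noise.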

  2. Interaction in Spoken Word Recognition Models: Feedback Helps

    Directory of Open Access Journals (Sweden)

    James S. Magnuson

    2018-04-01

    Human perception, cognition, and action require fast integration of bottom-up signals with top-down knowledge and context. A key theoretical perspective in cognitive science is the interactive activation hypothesis: forward and backward flow in bidirectionally connected neural networks allows humans and other biological systems to approximate optimal integration of bottom-up and top-down information under real-world constraints. An alternative view is that online feedback is neither necessary nor helpful; purely feedforward alternatives can be constructed for any feedback system, and online feedback could not improve processing and would preclude veridical perception. In the domain of spoken word recognition, the latter view was apparently supported by simulations using the interactive activation model, TRACE, with and without feedback: as many words were recognized more quickly without feedback as with it. However, these simulations used only a small set of words and did not address a primary motivation for interaction: making a model robust in noise. We conducted simulations using hundreds of words and found that the majority were recognized more quickly with feedback than without. More importantly, as we added noise to inputs, accuracy and recognition times were better with feedback than without. We follow these simulations with a critical review of recent arguments that online feedback in interactive activation models like TRACE is distinct from other potentially helpful forms of feedback. We conclude that in addition to providing the benefits demonstrated in our simulations, online feedback provides a plausible means of implementing putatively distinct forms of feedback, supporting the interactive activation hypothesis.

  3. ORIGINAL ARTICLES How do doctors learn the spoken language of ...

    African Journals Online (AJOL)

    2009-07-01

    … correct language that has been acquired through listening. The Brewsters suggest an 'immersion experience' by living with speakers of the language. Ellis included several of their tools, such as loop tapes, as being useful in a consultation when learning a language. Others disagree with a purely…

  4. Comparing spoken language treatments for minimally verbal preschoolers with autism spectrum disorders.

    Science.gov (United States)

    Paul, Rhea; Campbell, Daniel; Gilbert, Kimberly; Tsiouri, Ioanna

    2013-02-01

    Preschoolers with severe autism and minimal speech were assigned either a discrete trial or a naturalistic language treatment, and parents of all participants also received parent responsiveness training. After 12 weeks, both groups showed comparable improvement in number of spoken words produced, on average. Approximately half the children in each group achieved benchmarks for the first stage of functional spoken language development, as defined by Tager-Flusberg et al. (J Speech Lang Hear Res, 52: 643-652, 2009). Analyses of moderators of treatment suggest that joint attention moderates response to both treatments, and children with better receptive language pre-treatment do better with the naturalistic method, while those with lower receptive language show better response to the discrete trial treatment. The implications of these findings are discussed.

  5. Processing Relationships Between Language-Being-Spoken and Other Speech Dimensions in Monolingual and Bilingual Listeners.

    Science.gov (United States)

    Vaughn, Charlotte R; Bradlow, Ann R

    2017-12-01

    While indexical information is implicated in many levels of language processing, little is known about the internal structure of the system of indexical dimensions, particularly in bilinguals. A series of three experiments using the speeded classification paradigm investigated the relationship between various indexical and non-linguistic dimensions of speech in processing. Namely, we compared the relationship between a lesser-studied indexical dimension relevant to bilinguals, which language is being spoken (in these experiments, either Mandarin Chinese or English), with: talker identity (Experiment 1), talker gender (Experiment 2), and amplitude of speech (Experiment 3). Results demonstrate that language-being-spoken is integrated in processing with each of the other dimensions tested, and that these processing dependencies seem to be independent of listeners' bilingual status or experience with the languages tested. Moreover, the data reveal processing interference asymmetries, suggesting a processing hierarchy for indexical, non-linguistic speech features.

  6. Factors Influencing Verbal Intelligence and Spoken Language in Children with Phenylketonuria.

    Science.gov (United States)

    Soleymani, Zahra; Keramati, Nasrin; Rohani, Farzaneh; Jalaei, Shohre

    2015-05-01

    To determine verbal intelligence and spoken language of children with phenylketonuria and to study the effect of age at diagnosis and phenylalanine plasma level on these abilities. Cross-sectional. Children with phenylketonuria were recruited from pediatric hospitals in 2012. Normal control subjects were recruited from kindergartens in Tehran. 30 phenylketonuria and 42 control subjects aged 4-6.5 years. Skills were compared between 3 phenylketonuria groups categorized by age at diagnosis/treatment, and between the phenylketonuria and control groups. Scores on Wechsler Preschool and Primary Scale of Intelligence for verbal and total intelligence, and Test of Language Development-Primary, third edition for spoken language, listening, speaking, semantics, syntax, and organization. The performance of control subjects was significantly better than that of early-treated subjects for all composite quotients from Test of Language Development and verbal intelligence (P ... phenylketonuria subjects.

  7. Word Detection in Sung and Spoken Sentences in Children With Typical Language Development or With Specific Language Impairment.

    Science.gov (United States)

    Planchou, Clément; Clément, Sylvain; Béland, Renée; Cason, Nia; Motte, Jacques; Samson, Séverine

    2015-01-01

    Previous studies have reported that children score better in language tasks using sung rather than spoken stimuli. We examined word detection ease in sung and spoken sentences that were equated for phoneme duration and pitch variations in children aged 7 to 12 years with typical language development (TLD) as well as in children with specific language impairment (SLI), and hypothesized that the facilitation effect would vary with language abilities. In Experiment 1, 69 children with TLD (7-10 years old) detected words in sentences that were spoken, sung on pitches extracted from speech, and sung on original scores. In Experiment 2, we added a natural speech rate condition and tested 68 children with TLD (7-12 years old). In Experiment 3, 16 children with SLI and 16 age-matched children with TLD were tested in all four conditions. In both TLD groups, older children scored better than the younger ones. The matched TLD group scored higher than the SLI group, who scored at the level of the younger children with TLD. None of the experiments showed a facilitation effect of sung over spoken stimuli. Word detection abilities improved with age in both TLD and SLI groups. Our findings are compatible with the hypothesis of delayed language abilities in children with SLI, and are discussed in light of the role of durational prosodic cues in word detection.

  9. The effects of sign language on spoken language acquisition in children with hearing loss: a systematic review protocol.

    Science.gov (United States)

    Fitzpatrick, Elizabeth M; Stevens, Adrienne; Garritty, Chantelle; Moher, David

    2013-12-06

    Permanent childhood hearing loss affects 1 to 3 per 1000 children and frequently disrupts typical spoken language acquisition. Early identification of hearing loss through universal newborn hearing screening and the use of new hearing technologies including cochlear implants make spoken language an option for most children. However, there is no consensus on what constitutes optimal interventions for children when spoken language is the desired outcome. Intervention and educational approaches ranging from oral language only to oral language combined with various forms of sign language have evolved. Parents are therefore faced with important decisions in the first months of their child's life. This article presents the protocol for a systematic review of the effects of using sign language in combination with oral language intervention on spoken language acquisition. Studies addressing early intervention will be selected in which therapy involving oral language intervention and any form of sign language or sign support is used. Comparison groups will include children in early oral language intervention programs without sign support. The primary outcomes of interest to be examined include all measures of auditory, vocabulary, language, speech production, and speech intelligibility skills. We will include randomized controlled trials, controlled clinical trials, and other quasi-experimental designs that include comparator groups as well as prospective and retrospective cohort studies. Case-control, cross-sectional, case series, and case studies will be excluded. Several electronic databases will be searched (for example, MEDLINE, EMBASE, CINAHL, PsycINFO) as well as grey literature and key websites. We anticipate that a narrative synthesis of the evidence will be required. We will carry out meta-analysis for outcomes if clinical similarity, quantity and quality permit quantitative pooling of data. We will conduct subgroup analyses if possible according to severity

  10. Children reading spoken words: interactions between vocabulary and orthographic expectancy.

    Science.gov (United States)

    Wegener, Signy; Wang, Hua-Chen; de Lissa, Peter; Robidoux, Serje; Nation, Kate; Castles, Anne

    2017-07-12

    There is an established association between children's oral vocabulary and their word reading but its basis is not well understood. Here, we present evidence from eye movements for a novel mechanism underlying this association. Two groups of 18 Grade 4 children received oral vocabulary training on one set of 16 novel words (e.g., 'nesh', 'coib'), but no training on another set. The words were assigned spellings that were either predictable from phonology (e.g., nesh) or unpredictable (e.g., koyb). These were subsequently shown in print, embedded in sentences. Reading times were shorter for orally familiar than unfamiliar items, and for words with predictable than unpredictable spellings but, importantly, there was an interaction between the two: children demonstrated a larger benefit of oral familiarity for predictable than for unpredictable items. These findings indicate that children form initial orthographic expectations about spoken words before first seeing them in print. A video abstract of this article can be viewed at: https://youtu.be/jvpJwpKMM3E. © 2017 John Wiley & Sons Ltd.

  11. Predictors of spoken language development following pediatric cochlear implantation.

    Science.gov (United States)

    Boons, Tinne; Brokx, Jan P L; Dhooge, Ingeborg; Frijns, Johan H M; Peeraer, Louis; Vermeulen, Anneke; Wouters, Jan; van Wieringen, Astrid

    2012-01-01

    Although deaf children with cochlear implants (CIs) are able to develop good language skills, the large variability in outcomes remains a significant concern. The first aim of this study was to evaluate language skills in children with CIs to establish benchmarks. The second aim was to make an estimation of the optimal age at implantation to provide maximal opportunities for the child to achieve good language skills afterward. The third aim was to gain more insight into the causes of variability to set recommendations for optimizing the rehabilitation process of prelingually deaf children with CIs. Receptive and expressive language development of 288 children who received CIs by age five was analyzed in a retrospective multicenter study. Outcome measures were language quotients (LQs) on the Reynell Developmental Language Scales and Schlichting Expressive Language Test at 1, 2, and 3 years after implantation. Independent predictive variables were nine child-related, environmental, and auditory factors. A series of multiple regression analyses determined the amount of variance in expressive and receptive language outcomes attributable to each predictor when controlling for the other variables. Simple linear regressions with age at first fitting and independent samples t tests demonstrated that children implanted before the age of two performed significantly better on all tests than children who were implanted at an older age. The mean LQ was 0.78 with an SD of 0.18. A child with an LQ lower than 0.60 (= 0.78-0.18) within 3 years after implantation was labeled as a weak performer compared with other deaf children implanted before the age of two. Contralateral stimulation with a second CI or a hearing aid and the absence of additional disabilities were related to better language outcomes. The effect of environmental factors, comprising multilingualism, parental involvement, and communication mode increased over time. 
Three years after implantation, the total multiple ...
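
The multiple-regression analysis this record describes can be sketched with ordinary least squares via the normal equations. Everything below is synthetic: the two predictors (age at implantation and an additional-disability flag), the generating coefficients, and the noise-free language quotients are invented so that the fit recovers the weights exactly; the actual study used nine predictors and real outcome data.

```python
# OLS sketch: regress language quotient (LQ) on synthetic predictors.

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

# design rows: [intercept, age at implantation (years), disability flag]
X = [[1, 1.0, 0], [1, 2.0, 0], [1, 3.0, 1],
     [1, 4.0, 1], [1, 1.5, 1], [1, 2.5, 0]]
# LQ generated from known weights: 1.0 - 0.05*age - 0.10*disability
y = [1.0 - 0.05 * r[1] - 0.10 * r[2] for r in X]

# normal equations: (X^T X) beta = X^T y
XtX = [[sum(r[i] * r[j] for r in X) for j in range(3)] for i in range(3)]
Xty = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(3)]
beta = solve(XtX, Xty)
print([round(b, 3) for b in beta])
```

With noise-free data the recovered coefficients equal the generating weights; with real data the fitted weights would instead estimate each predictor's contribution to the LQ variance, as in the study's regression series.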

  13. Brain basis of phonological awareness for spoken language in children and its disruption in dyslexia.

    Science.gov (United States)

    Kovelman, Ioulia; Norton, Elizabeth S; Christodoulou, Joanna A; Gaab, Nadine; Lieberman, Daniel A; Triantafyllou, Christina; Wolf, Maryanne; Whitfield-Gabrieli, Susan; Gabrieli, John D E

    2012-04-01

    Phonological awareness, knowledge that speech is composed of syllables and phonemes, is critical for learning to read. Phonological awareness precedes and predicts successful transition from language to literacy, and weakness in phonological awareness is a leading cause of dyslexia, but the brain basis of phonological awareness for spoken language in children is unknown. We used functional magnetic resonance imaging to identify the neural correlates of phonological awareness using an auditory word-rhyming task in children who were typical readers or who had dyslexia (ages 7-13) and a younger group of kindergarteners (ages 5-6). Typically developing children, but not children with dyslexia, recruited left dorsolateral prefrontal cortex (DLPFC) when making explicit phonological judgments. Kindergarteners, who were matched to the older children with dyslexia on standardized tests of phonological awareness, also recruited left DLPFC. Left DLPFC may play a critical role in the development of phonological awareness for spoken language critical for reading and in the etiology of dyslexia.

  14. Identification of four class emotion from Indonesian spoken language using acoustic and lexical features

    Science.gov (United States)

    Kasyidi, Fatan; Puji Lestari, Dessi

    2018-03-01

    One of the important aspects of human-to-human communication is understanding the emotion of each party. Interactions between humans and computers continue to develop, especially affective interaction, where emotion recognition is an important component. This paper presents our extended work on emotion recognition in Indonesian spoken language, identifying four main classes of emotion: Happy, Sad, Angry, and Contentment, using a combination of acoustic/prosodic features and lexical features. We constructed an emotion speech corpus from an Indonesian television talk show, where the situations are as close as possible to natural. After constructing the corpus, the acoustic/prosodic and lexical features were extracted to train the emotion model. We employed several machine learning algorithms, such as Support Vector Machine (SVM), Naive Bayes, and Random Forest, to obtain the best model. Experimental results on the test data show that the best model, an SVM with an RBF kernel, achieves an F-measure of 0.447 using only the acoustic/prosodic features and 0.488 using both acoustic/prosodic and lexical features to recognize the four emotion classes.
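
As a rough sketch of the evaluation pipeline this abstract describes, the snippet below classifies invented acoustic/prosodic-style feature vectors and scores the result with a macro-averaged F-measure over four classes. To stay dependency-free it substitutes a nearest-centroid classifier for the paper's SVM with RBF kernel; the feature values and labels are fabricated for illustration.

```python
from math import sqrt

CLASSES = ["happy", "sad", "angry", "contentment"]

# Invented 3-dimensional feature vectors (e.g. pitch mean, energy,
# lexical polarity) per emotion class -- purely illustrative.
TRAIN = {
    "happy":       [[0.8, 0.7, 0.9], [0.7, 0.8, 0.8]],
    "sad":         [[0.2, 0.2, 0.1], [0.3, 0.1, 0.2]],
    "angry":       [[0.9, 0.9, 0.1], [0.8, 1.0, 0.2]],
    "contentment": [[0.5, 0.4, 0.8], [0.4, 0.5, 0.7]],
}

CENTROIDS = {lab: [sum(col) / len(vecs) for col in zip(*vecs)]
             for lab, vecs in TRAIN.items()}

def classify(x):
    dist = lambda a, b: sqrt(sum((u - v) ** 2 for u, v in zip(a, b)))
    return min(CENTROIDS, key=lambda lab: dist(x, CENTROIDS[lab]))

def macro_f1(pairs):
    """pairs: (gold, predicted) label tuples; returns macro F-measure."""
    scores = []
    for c in CLASSES:
        tp = sum(g == c and p == c for g, p in pairs)
        fp = sum(g != c and p == c for g, p in pairs)
        fn = sum(g == c and p != c for g, p in pairs)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        scores.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(scores) / len(scores)

pairs = [(gold, classify(vec)) for gold, vec in [
    ("happy", [0.75, 0.75, 0.85]), ("sad", [0.25, 0.15, 0.15]),
    ("angry", [0.85, 0.95, 0.15]), ("contentment", [0.45, 0.45, 0.75]),
]]
print(macro_f1(pairs))  # → 1.0 on this toy data
```

Macro averaging weights the four classes equally, which matters when (as in real emotion corpora) class frequencies are skewed.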

  15. How sensory-motor systems impact the neural organization for language: direct contrasts between spoken and signed language

    Science.gov (United States)

    Emmorey, Karen; McCullough, Stephen; Mehta, Sonya; Grabowski, Thomas J.

    2014-01-01

    To investigate the impact of sensory-motor systems on the neural organization for language, we conducted an H215O-PET study of sign and spoken word production (picture-naming) and an fMRI study of sign and audio-visual spoken language comprehension (detection of a semantically anomalous sentence) with hearing bilinguals who are native users of American Sign Language (ASL) and English. Directly contrasting speech and sign production revealed greater activation in bilateral parietal cortex for signing, while speaking resulted in greater activation in bilateral superior temporal cortex (STC) and right frontal cortex, likely reflecting auditory feedback control. Surprisingly, the language production contrast revealed a relative increase in activation in bilateral occipital cortex for speaking. We speculate that greater activation in visual cortex for speaking may actually reflect cortical attenuation when signing, which functions to distinguish self-produced from externally generated visual input. Directly contrasting speech and sign comprehension revealed greater activation in bilateral STC for speech and greater activation in bilateral occipital-temporal cortex for sign. Sign comprehension, like sign production, engaged bilateral parietal cortex to a greater extent than spoken language. We hypothesize that posterior parietal activation in part reflects processing related to spatial classifier constructions in ASL and that anterior parietal activation may reflect covert imitation that functions as a predictive model during sign comprehension. The conjunction analysis for comprehension revealed that both speech and sign bilaterally engaged the inferior frontal gyrus (with more extensive activation on the left) and the superior temporal sulcus, suggesting an invariant bilateral perisylvian language system. We conclude that surface level differences between sign and spoken languages should not be dismissed and are critical for understanding the neurobiology of language.

  16. Distance delivery of a spoken language intervention for school-aged and adolescent boys with fragile X syndrome.

    Science.gov (United States)

    McDuffie, Andrea; Banasik, Amy; Bullard, Lauren; Nelson, Sarah; Feigles, Robyn Tempero; Hagerman, Randi; Abbeduto, Leonard

    2018-01-01

    A small randomized group design (N = 20) was used to examine a parent-implemented intervention designed to improve the spoken language skills of school-aged and adolescent boys with FXS, the leading cause of inherited intellectual disability. The intervention was implemented by speech-language pathologists who used distance video-teleconferencing to deliver the intervention. The intervention taught mothers to use a set of language facilitation strategies while interacting with their children in the context of shared story-telling. Treatment group mothers significantly improved their use of the targeted intervention strategies. Children in the treatment group increased the duration of engagement in the shared story-telling activity as well as use of utterances that maintained the topic of the story. Children also showed increases in lexical diversity, but not in grammatical complexity.

  17. Sentence Recognition in Quiet and Noise by Pediatric Cochlear Implant Users: Relationships to Spoken Language.

    Science.gov (United States)

    Eisenberg, Laurie S; Fisher, Laurel M; Johnson, Karen C; Ganguly, Dianne Hammes; Grace, Thelma; Niparko, John K

    2016-02-01

    We investigated associations between sentence recognition and spoken language for children with cochlear implants (CI) enrolled in the Childhood Development after Cochlear Implantation (CDaCI) study. In a prospective longitudinal study, sentence recognition percent-correct scores and language standard scores were correlated at 48-, 60-, and 72-months post-CI activation. Six tertiary CI centers in the United States. Children with CIs participating in the CDaCI study. Cochlear implantation. Sentence recognition was assessed using the Hearing In Noise Test for Children (HINT-C) in quiet and at +10, +5, and 0 dB signal-to-noise ratio (S/N). Spoken language was assessed using the Clinical Assessment of Spoken Language (CASL) core composite and the antonyms, paragraph comprehension (syntax comprehension), syntax construction (expression), and pragmatic judgment tests. Positive linear relationships were found between CASL scores and HINT-C sentence scores when the sentences were delivered in quiet and at +10 and +5 dB S/N, but not at 0 dB S/N. At 48 months post-CI, sentence scores at +10 and +5 dB S/N were most strongly associated with CASL antonyms. At 60 and 72 months, sentence recognition in noise was most strongly associated with paragraph comprehension and syntax construction. Children with CIs learn spoken language in a variety of acoustic environments. Despite the observed inconsistent performance in different listening situations and noise-challenged environments, many children with CIs are able to build lexicons and learn the rules of grammar that enable recognition of sentences.

  18. Enriching English Language Spoken Outputs of Kindergartners in Thailand

    Science.gov (United States)

    Wilang, Jeffrey Dawala; Sinwongsuwat, Kemtong

    2012-01-01

    This year is designated as Thailand's "English Speaking Year" with the aim of improving the communicative competence of Thais for the upcoming integration of the Association of Southeast Asian Nations (ASEAN) in 2015. The consistent low-level proficiency of the Thais in the English language has led to numerous curriculum revisions and…

  19. Loops of Spoken Language in Danish Broadcasting Corporation News

    DEFF Research Database (Denmark)

    le Fevre Jakobsen, Bjarne

    2012-01-01

    ... with well-edited material, in 1965, to an anchor who hands over to journalists in live feeds from all over the world via satellite, Skype, or mobile telephone, in 2011. The narrative rhythm is faster and sometimes more spontaneous. In this article we will discuss aspects of the use of language and the tempo...

  20. Three-dimensional grammar in the brain: Dissociating the neural correlates of natural sign language and manually coded spoken language.

    Science.gov (United States)

    Jednoróg, Katarzyna; Bola, Łukasz; Mostowski, Piotr; Szwed, Marcin; Boguszewski, Paweł M; Marchewka, Artur; Rutkowski, Paweł

    2015-05-01

    In several countries natural sign languages were considered inadequate for education. Instead, new sign-supported systems were created, based on the belief that spoken/written language is grammatically superior. One such system called SJM (system językowo-migowy) preserves the grammatical and lexical structure of spoken Polish and since the 1960s has been extensively employed in schools and on TV. Nevertheless, the Deaf community avoids using SJM for everyday communication, its preferred language being PJM (polski język migowy), a natural sign language, structurally and grammatically independent of spoken Polish and featuring classifier constructions (CCs). Here, for the first time, we use fMRI to compare the neural bases of natural vs. devised communication systems. Deaf signers were presented with three types of signed sentences (SJM and PJM with/without CCs). Consistent with previous findings, PJM with CCs compared to either SJM or PJM without CCs recruited the parietal lobes. The reverse comparison revealed activation in the anterior temporal lobes, suggesting increased semantic combinatory processes in lexical sign comprehension. Finally, PJM compared with SJM engaged left posterior superior temporal gyrus and anterior temporal lobe, areas crucial for sentence-level speech comprehension. We suggest that activity in these two areas reflects greater processing efficiency for naturally evolved sign language. Copyright © 2015 Elsevier Ltd. All rights reserved.

  1. Understanding the Relationship between Latino Students' Preferred Learning Styles and Their Language Spoken at Home

    Science.gov (United States)

    Maldonado Torres, Sonia Enid

    2016-01-01

    The purpose of this study was to explore the relationships between Latino students' learning styles and their language spoken at home. Results of the study indicated that students who spoke Spanish at home had higher means in the Active Experimentation modality of learning (M = 31.38, SD = 5.70) than students who spoke English (M = 28.08,…

  2. Satisfaction with telemedicine for teaching listening and spoken language to children with hearing loss.

    Science.gov (United States)

    Constantinescu, Gabriella

    2012-07-01

    Auditory-Verbal Therapy (AVT) is an effective early intervention for children with hearing loss. The Hear and Say Centre in Brisbane offers AVT sessions to families soon after diagnosis, and about 20% of the families in Queensland participate via PC-based videoconferencing (Skype). Parent and therapist satisfaction with the telemedicine sessions was examined by questionnaire. All families had been enrolled in the telemedicine AVT programme for at least six months. Their average distance from the Hear and Say Centre was 600 km. Questionnaires were completed by 13 of the 17 parents and all five therapists. Parents and therapists generally expressed high satisfaction in the majority of the sections of the questionnaire, e.g. most rated the audio and video quality as good or excellent. All parents felt comfortable or as comfortable as face-to-face when discussing matters with the therapist online, and were satisfied or as satisfied as face-to-face with their level and their child's level of interaction/rapport with the therapist. All therapists were satisfied or very satisfied with the telemedicine AVT programme. The results demonstrate the potential of telemedicine service delivery for teaching listening and spoken language to children with hearing loss in rural and remote areas of Australia.

  3. Social inclusion for children with hearing loss in listening and spoken Language early intervention: an exploratory study.

    Science.gov (United States)

    Constantinescu-Sharpe, Gabriella; Phillips, Rebecca L; Davis, Aleisha; Dornan, Dimity; Hogan, Anthony

    2017-03-14

    Social inclusion is a common focus of listening and spoken language (LSL) early intervention for children with hearing loss. This exploratory study compared the social inclusion of young children with hearing loss educated using a listening and spoken language approach with population data. A framework for understanding the scope of social inclusion is presented in the Background. This framework guided the use of a shortened, modified version of the Longitudinal Study of Australian Children (LSAC) to measure two of the five facets of social inclusion ('education' and 'interacting with society and fulfilling social roles'). The survey was completed by parents of children with hearing loss aged 4-5 years who were educated using a LSL approach (n = 78; 37% response rate). These responses were compared to those obtained for typical hearing children in the LSAC dataset (n = 3265). Analyses revealed that most children with hearing loss had comparable outcomes to those with typical hearing on the 'education' and 'interacting with society and fulfilling social roles' facets of social inclusion. These exploratory findings are positive and warrant further investigation across all five facets of the framework to identify which factors influence social inclusion.

  4. Medical practices display power law behaviors similar to spoken languages.

    Science.gov (United States)

    Paladino, Jonathan D; Crooke, Philip S; Brackney, Christopher R; Kaynar, A Murat; Hotchkiss, John R

    2013-09-04

    Medical care commonly involves the apprehension of complex patterns of patient derangements to which the practitioner responds with patterns of interventions, as opposed to single therapeutic maneuvers. This complexity renders the objective assessment of practice patterns using conventional statistical approaches difficult. Combinatorial approaches drawn from symbolic dynamics are used to encode the observed patterns of patient derangement and associated practitioner response patterns as sequences of symbols. Concatenating each patient derangement symbol with the contemporaneous practitioner response symbol creates "words" encoding the simultaneous patient derangement and provider response patterns and yields an observed vocabulary with quantifiable statistical characteristics. A fundamental observation in many natural languages is the existence of a power law relationship between the rank order of word usage and the absolute frequency with which particular words are uttered. We show that population level patterns of patient derangement: practitioner intervention word usage in two entirely unrelated domains of medical care display power law relationships similar to those of natural languages, and that, in one of these domains, power law behavior at the population level reflects power law behavior at the level of individual practitioners. Our results suggest that patterns of medical care can be approached using quantitative linguistic techniques, a finding that has implications for the assessment of expertise, machine learning identification of optimal practices, and construction of bedside decision support tools.
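
The rank-frequency analysis described here can be sketched directly: encode each simultaneous derangement/response pair as a "word", count usage, and estimate the power-law exponent as the slope of log frequency against log rank. The symbol stream below is synthetic and constructed to follow an exact Zipf (1/rank) profile, so the fitted slope comes out at -1.

```python
import math
from collections import Counter

# Synthetic derangement:response "words"; frequencies follow 60/rank
# exactly, so log(frequency) is linear in log(rank) with slope -1.
stream = (["D1:R1"] * 60 + ["D1:R2"] * 30 + ["D2:R1"] * 20 +
          ["D2:R2"] * 15 + ["D3:R1"] * 12 + ["D3:R3"] * 10)

counts = sorted(Counter(stream).values(), reverse=True)
xs = [math.log(rank) for rank in range(1, len(counts) + 1)]
ys = [math.log(freq) for freq in counts]

# ordinary least-squares slope of log-frequency on log-rank
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
print(round(slope, 2))  # → -1.0
```

Real usage data would only approximate this linearity; the fitted slope, together with a goodness-of-fit measure, would then quantify how closely practice patterns track natural-language Zipf behavior.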

  5. Spoken Language Activation Alters Subsequent Sign Language Activation in L2 Learners of American Sign Language

    Science.gov (United States)

    Williams, Joshua T.; Newman, Sharlene D.

    2017-01-01

    A large body of literature has characterized unimodal monolingual and bilingual lexicons and how neighborhood density affects lexical access; however there have been relatively fewer studies that generalize these findings to bimodal (M2) second language (L2) learners of sign languages. The goal of the current study was to investigate parallel…

  6. The Attitudes and Motivation of Children towards Learning Rarely Spoken Foreign Languages: A Case Study from Saudi Arabia

    Science.gov (United States)

    Al-Nofaie, Haifa

    2018-01-01

    This article discusses the attitudes and motivations of two Saudi children learning Japanese as a foreign language (hence JFL), a language which is rarely spoken in the country. Studies regarding children's motivation for learning foreign languages that are not widely spread in their contexts in informal settings are scarce. The aim of the study…

  7. "They never realized that, you know": linguistic collocation and interactional functions of you know in contemporary academic spoken English

    Directory of Open Access Journals (Sweden)

    Rodrigo Borba

    2012-12-01

    Discourse markers are a collection of one-word or multiword terms that help language users organize their utterances on the grammatical, semantic, pragmatic and interactional levels. Researchers have characterized some of their roles in written and spoken discourse (Halliday & Hasan, 1976; Schiffrin, 1988, 2001). Following this trend, this paper advances a discussion of discourse markers in contemporary academic spoken English. Through quantitative and qualitative analyses of the use of the discourse marker 'you know' in the Michigan Corpus of Academic Spoken English (MICASE), we describe its frequency in this corpus, its collocation at the sentence level and its interactional functions. Grammatically, a concordance analysis shows that 'you know' (like other discourse markers) is linguistically flexible, as it seems to be placed in any grammatical slot of an utterance. Interactionally, a qualitative analysis indicates that its use in contemporary English goes beyond the uses described in the literature. We argue that besides serving as a hedging strategy (Lakoff, 1975), 'you know' also serves as a powerful face-saving (Goffman, 1955) technique which constructs students' identities vis-à-vis their professors' and vice versa.
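
A concordance-and-collocation pass of the kind the authors run over MICASE can be sketched in a few lines. MICASE itself is not reproduced here; the one-line transcript, the context window size, and the KWIC formatting below are all invented for illustration.

```python
import re
from collections import Counter

transcript = ("they never realized that you know the results were off "
              "and you know we had to rerun it because you know it mattered")

tokens = re.findall(r"[a-z']+", transcript.lower())

def concordance(tokens, phrase=("you", "know"), window=2):
    """Return KWIC lines plus left/right collocate counts for a bigram."""
    hits, left, right = [], Counter(), Counter()
    for i in range(len(tokens) - 1):
        if (tokens[i], tokens[i + 1]) == phrase:
            lo = max(0, i - window)
            hits.append(" ".join(tokens[lo:i]) + " [you know] "
                        + " ".join(tokens[i + 2:i + 2 + window]))
            if i > 0:
                left[tokens[i - 1]] += 1    # word just before the marker
            if i + 2 < len(tokens):
                right[tokens[i + 2]] += 1   # word just after the marker
    return hits, left, right

hits, left, right = concordance(tokens)
print(len(hits))    # → 3
print(hits[0])      # → realized that [you know] the results
```

The left/right collocate counters are what a sentence-level collocation table summarizes; over a real corpus they would show which grammatical slots the marker favors.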

  8. Endowing Spoken Language Dialogue System with Emotional Intelligence

    DEFF Research Database (Denmark)

    André, Elisabeth; Rehm, Matthias; Minker, Wolfgang

    2004-01-01

    While most dialogue systems restrict themselves to the adjustment of the propositional contents, our work concentrates on the generation of stylistic variations in order to improve the user's perception of the interaction. To accomplish this goal, our approach integrates a social theory of politeness with a cognitive theory of emotions. We propose a hierarchical selection process for politeness behaviors in order to enable the refinement of decisions in case additional context information becomes available.

  9. Impact of cognitive function and dysarthria on spoken language and perceived speech severity in multiple sclerosis

    Science.gov (United States)

    Feenaughty, Lynda

    judged each speech sample using the perceptual construct of Speech Severity using a visual analog scale. Additional measures obtained to describe participants included the Sentence Intelligibility Test (SIT), the 10-item Communication Participation Item Bank (CPIB), and standard biopsychosocial measures of depression (Beck Depression Inventory-Fast Screen; BDI-FS), fatigue (Fatigue Severity Scale; FSS), and overall disease severity (Expanded Disability Status Scale; EDSS). Healthy controls completed all measures, with the exception of the CPIB and EDSS. All data were analyzed using standard descriptive and parametric statistics. For the MSCI group, the relationships between neuropsychological test scores and speech-language variables were explored for each speech task using Pearson correlations. The relationship between neuropsychological test scores and Speech Severity was also explored. Results and Discussion: Topic familiarity for descriptive discourse did not strongly influence speech production or perceptual variables; however, results indicated predicted task-related differences for some spoken language measures. With the exception of the MSCI group, all speaker groups produced the same or slower global speech timing (i.e., speech and articulatory rates), more silent and filled pauses, more grammatical pauses, and longer silent pause durations in spontaneous discourse compared to reading aloud. Results revealed no appreciable task differences for linguistic complexity measures. Results indicated group differences for speech rate. The MSCI group produced significantly faster speech rates compared to the MSDYS group. Both the MSDYS and the MSCI groups were judged to have significantly poorer perceived Speech Severity compared to typically aging adults. The Task x Group interaction was only significant for the number of silent pauses.
The MSDYS group produced fewer silent pauses in spontaneous speech and more silent pauses in the reading task compared to other groups. Finally

  10. The Beneficial Role of L1 Spoken Language Skills on Initial L2 Sign Language Learning: Cognitive and Linguistic Predictors of M2L2 Acquisition

    Science.gov (United States)

    Williams, Joshua T.; Darcy, Isabelle; Newman, Sharlene D.

    2017-01-01

    Understanding how language modality (i.e., signed vs. spoken) affects second language outcomes in hearing adults is important both theoretically and pedagogically, as it can determine the specificity of second language (L2) theory and inform how best to teach a language that uses a new modality. The present study investigated which…

  11. Machine Translation Projects for Portuguese at INESC-ID's Spoken Language Systems Laboratory

    Directory of Open Access Journals (Sweden)

    Anabela Barreiro

    2014-12-01

    Language technologies, in particular machine translation applications, have the potential to help break down linguistic and cultural barriers, making an important contribution to the globalization and internationalization of the Portuguese language by allowing content to be shared 'from' and 'to' this language. This article presents the research work developed at the Laboratory of Spoken Language Systems of INESC-ID in the field of machine translation, namely automated speech translation, the translation of microblogs and the creation of a hybrid machine translation system. We focus on the creation of the hybrid system, which aims at combining linguistic knowledge, in particular semantico-syntactic knowledge, with statistical knowledge to increase the level of translation quality.

  12. Spoken language development in oral preschool children with permanent childhood deafness.

    Science.gov (United States)

    Sarant, Julia Z; Holt, Colleen M; Dowell, Richard C; Rickards, Field W; Blamey, Peter J

    2009-01-01

    This article documented spoken language outcomes for preschool children with hearing loss and examined the relationships between language abilities and characteristics of children such as degree of hearing loss, cognitive abilities, age at entry to early intervention, and parent involvement in children's intervention programs. Participants were evaluated using a combination of the Child Development Inventory, the Peabody Picture Vocabulary Test, and the Preschool Clinical Evaluation of Language Fundamentals depending on their age at the time of assessment. Maternal education, cognitive ability, and family involvement were also measured. Over half of the children who participated in this study had poor language outcomes overall. No significant differences were found in language outcomes on any of the measures for children who were diagnosed early and those diagnosed later. Multiple regression analyses showed that family participation, degree of hearing loss, and cognitive ability significantly predicted language outcomes and together accounted for almost 60% of the variance in scores. This article highlights the importance of family participation in intervention programs to enable children to achieve optimal language outcomes. Further work may clarify the effects of early diagnosis on language outcomes for preschool children.

  13. Brain Basis of Phonological Awareness for Spoken Language in Children and Its Disruption in Dyslexia

    Science.gov (United States)

    Norton, Elizabeth S.; Christodoulou, Joanna A.; Gaab, Nadine; Lieberman, Daniel A.; Triantafyllou, Christina; Wolf, Maryanne; Whitfield-Gabrieli, Susan; Gabrieli, John D. E.

    2012-01-01

    Phonological awareness, knowledge that speech is composed of syllables and phonemes, is critical for learning to read. Phonological awareness precedes and predicts successful transition from language to literacy, and weakness in phonological awareness is a leading cause of dyslexia, but the brain basis of phonological awareness for spoken language in children is unknown. We used functional magnetic resonance imaging to identify the neural correlates of phonological awareness using an auditory word-rhyming task in children who were typical readers or who had dyslexia (ages 7–13) and a younger group of kindergarteners (ages 5–6). Typically developing children, but not children with dyslexia, recruited left dorsolateral prefrontal cortex (DLPFC) when making explicit phonological judgments. Kindergarteners, who were matched to the older children with dyslexia on standardized tests of phonological awareness, also recruited left DLPFC. Left DLPFC may play a critical role in the development of phonological awareness for spoken language critical for reading and in the etiology of dyslexia. PMID:21693783

  14. Positive Emotional Language in the Final Words Spoken Directly Before Execution.

    Science.gov (United States)

    Hirschmüller, Sarah; Egloff, Boris

    2015-01-01

    How do individuals emotionally cope with the imminent real-world salience of mortality? DeWall and Baumeister as well as Kashdan and colleagues previously provided support that an increased use of positive emotion words serves as a way to protect and defend against mortality salience of one's own contemplated death. Although these studies provide important insights into the psychological dynamics of mortality salience, it remains an open question how individuals cope with the immense threat of mortality prior to their imminent actual death. In the present research, we therefore analyzed positivity in the final words spoken immediately before execution by 407 death row inmates in Texas. By using computerized quantitative text analysis as an objective measure of emotional language use, our results showed that the final words contained a significantly higher proportion of positive than negative emotion words. This emotional positivity was significantly higher than (a) positive emotion word usage base rates in spoken and written materials and (b) positive emotional language use with regard to contemplated death and attempted or actual suicide. Additional analyses showed that emotional positivity in final statements was associated with a greater frequency of language use that was indicative of self-references, social orientation, and present-oriented time focus as well as with fewer instances of cognitive-processing, past-oriented, and death-related word use. Taken together, our findings offer new insights into how individuals cope with the imminent real-world salience of mortality.
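
    The computerized word-count approach reported here can be illustrated with a minimal sketch. The positive and negative lexicons below are tiny hypothetical stand-ins for the validated dictionaries (e.g., LIWC-style categories) that such studies actually rely on, and the example statement is invented.

```python
# Tiny illustrative lexicons; real analyses use validated dictionaries.
POSITIVE = {"love", "peace", "thank", "hope", "joy", "free"}
NEGATIVE = {"fear", "pain", "hate", "guilt", "sorry"}

def emotion_proportions(statement):
    """Return the proportions of positive and negative emotion words
    among all words in a statement."""
    words = [w.strip(".,!?") for w in statement.lower().split()]
    total = len(words)
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    return pos / total, neg / total

pos, neg = emotion_proportions("I love you all and I hope for peace, not fear.")
print(round(pos, 2), round(neg, 2))  # → 0.27 0.09
```

Comparing these per-statement proportions against base rates in reference corpora is what licenses the paper's claim that final statements are disproportionately positive.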

  15. Positive Emotional Language in the Final Words Spoken Directly Before Execution

    Directory of Open Access Journals (Sweden)

    Sarah Hirschmüller

    2016-01-01

    How do individuals emotionally cope with the imminent real-world salience of mortality? DeWall and Baumeister as well as Kashdan and colleagues previously provided support that an increased use of positive emotion words serves as a way to protect and defend against mortality salience of one’s own contemplated death. Although these studies provide important insights into the psychological dynamics of mortality salience, it remains an open question how individuals cope with the immense threat of mortality prior to their imminent actual death. In the present research, we therefore analyzed positivity in the final words spoken immediately before execution by 407 death row inmates in Texas. By using computerized quantitative text analysis as an objective measure of emotional language use, our results showed that the final words contained a significantly higher proportion of positive than negative emotion words. This emotional positivity was significantly higher than (a) positive emotion word usage base rates in spoken and written materials and (b) positive emotional language use with regard to contemplated death and attempted or actual suicide. Additional analyses showed that emotional positivity in final statements was associated with a greater frequency of language use that was indicative of self-references, social orientation, and present-oriented time focus as well as with fewer instances of cognitive-processing, past-oriented, and death-related word use. Taken together, our findings offer new insights into how individuals cope with the imminent real-world salience of mortality.

  16. Activating gender stereotypes during online spoken language processing: evidence from Visual World Eye Tracking.

    Science.gov (United States)

    Pyykkönen, Pirita; Hyönä, Jukka; van Gompel, Roger P G

    2010-01-01

    This study used the visual world eye-tracking method to investigate activation of general world knowledge related to gender-stereotypical role names in online spoken language comprehension in Finnish. The results showed that listeners activated gender stereotypes elaboratively in story contexts where this information was not needed to build coherence. Furthermore, listeners made additional inferences based on gender stereotypes to revise an already established coherence relation. Both results are consistent with mental models theory (e.g., Garnham, 2001). They are harder to explain by the minimalist account (McKoon & Ratcliff, 1992) which suggests that people limit inferences to those needed to establish coherence in discourse.

  17. Students who are deaf and hard of hearing and use sign language: considerations and strategies for developing spoken language and literacy skills.

    Science.gov (United States)

    Nussbaum, Debra; Waddy-Smith, Bettie; Doyle, Jane

    2012-11-01

    There is a core body of knowledge, experience, and skills integral to facilitating auditory, speech, and spoken language development when working with the general population of students who are deaf and hard of hearing. There are additional issues, strategies, and challenges inherent in speech habilitation/rehabilitation practices essential to the population of deaf and hard of hearing students who also use sign language. This article will highlight philosophical and practical considerations related to practices used to facilitate spoken language development and associated literacy skills for children and adolescents who sign. It will discuss considerations for planning and implementing practices that acknowledge and utilize a student's abilities in sign language, and address how to link these skills to developing and using spoken language. Included will be considerations for children from early childhood through high school with a broad range of auditory access, language, and communication characteristics.

  18. Auditory-verbal therapy for promoting spoken language development in children with permanent hearing impairments.

    Science.gov (United States)

    Brennan-Jones, Christopher G; White, Jo; Rush, Robert W; Law, James

    2014-03-12

    Congenital or early-acquired hearing impairment poses a major barrier to the development of spoken language and communication. Early detection and effective (re)habilitative interventions are essential for parents and families who wish their children to achieve age-appropriate spoken language. Auditory-verbal therapy (AVT) is a (re)habilitative approach aimed at children with hearing impairments. AVT comprises intensive early intervention therapy sessions with a focus on audition, technological management and involvement of the child's caregivers in therapy sessions; it is typically the only therapy approach used to specifically promote avoidance or exclusion of non-auditory facial communication. The primary goal of AVT is to achieve age-appropriate spoken language and for this to be used as the primary or sole method of communication. AVT programmes are expanding throughout the world; however, little evidence can be found on the effectiveness of the intervention. To assess the effectiveness of auditory-verbal therapy (AVT) in developing receptive and expressive spoken language in children who are hearing impaired. CENTRAL, MEDLINE, EMBASE, PsycINFO, CINAHL, speechBITE and eight other databases were searched in March 2013. We also searched two trials registers and three theses repositories, checked reference lists and contacted study authors to identify additional studies. The review considered prospective randomised controlled trials (RCTs) and quasi-randomised studies of children (birth to 18 years) with a significant (≥ 40 dBHL) permanent (congenital or early-acquired) hearing impairment, undergoing a programme of auditory-verbal therapy, administered by a certified auditory-verbal therapist for a period of at least six months. Comparison groups considered for inclusion were waiting list and treatment as usual controls. 
Two review authors independently assessed titles and abstracts identified from the searches and obtained full-text versions of all potentially

  19. A Spoken Language Intervention for School-Aged Boys with fragile X Syndrome

    Science.gov (United States)

    McDuffie, Andrea; Machalicek, Wendy; Bullard, Lauren; Nelson, Sarah; Mello, Melissa; Tempero-Feigles, Robyn; Castignetti, Nancy; Abbeduto, Leonard

    2015-01-01

    Using a single case design, a parent-mediated spoken language intervention was delivered to three mothers and their school-aged sons with fragile X syndrome, the leading inherited cause of intellectual disability. The intervention was embedded in the context of shared story-telling using wordless picture books and targeted three empirically derived language support strategies. All sessions were implemented via distance video-teleconferencing. Parent education sessions were followed by 12 weekly clinician coaching and feedback sessions. Data were collected weekly during independent homework and clinician observation sessions. Relative to baseline, mothers increased their use of targeted strategies and dyads increased the frequency and duration of story-related talking. Generalized effects of the intervention on lexical diversity and grammatical complexity were observed. Implications for practice are discussed. PMID:27119214

  20. Foreign language interactive didactics

    Directory of Open Access Journals (Sweden)

    Arnaldo Moisés Gómez

    2016-06-01

    Foreign Language Interactive Didactics is intended for foreign language teachers and would-be teachers, since it offers an interpretation of the foreign language teaching-learning process conceived as reflexive social interaction. This interpretation grounds learning in interactive tasks that provide learners with opportunities to interact meaningfully among themselves, as a way to develop interactional competence both as an objective in itself and as a means to attain communicative competence. Foreign language interactive didactics calls for the unity of reflection and action while learning the language system and using it to communicate, by means of solving problems presented in interactive tasks. It proposes a kind of teaching that is interactive, developmental, collaborative, holistic, cognitive, problematizing, reflexive, student-centered and humanist, with a strong affective component that strengthens the psychological factors influencing learning. This conception appears in the book DIDÁCTICA INTERACTIVA DE LENGUAS (2007 and 2010). The book is used as a textbook for the subject of Didactics, part of the curriculum for language teacher education at all the universities of pedagogical sciences; in the training of teachers of Spanish for non-Spanish-speaking students at Havana University; and as a reference book for postgraduate courses and master's and doctoral degree programmes.

  1. Scenario-Based Spoken Interaction with Virtual Agents

    Science.gov (United States)

    Morton, Hazel; Jack, Mervyn A.

    2005-01-01

    This paper describes a CALL approach which integrates software for speaker independent continuous speech recognition with embodied virtual agents and virtual worlds to create an immersive environment in which learners can converse in the target language in contextualised scenarios. The result is a self-access learning package: SPELL (Spoken…

  2. Spoken Lebanese.

    Science.gov (United States)

    Feghali, Maksoud N.

    This book teaches the Arabic Lebanese dialect through topics such as food, clothing, transportation, and leisure activities. It also provides background material on the Arab World in general and the region where Lebanese Arabic is spoken or understood--Lebanon, Syria, Jordan, Palestine--in particular. This language guide is based on the phonetic…

  3. The relation of the number of languages spoken to performance in different cognitive abilities in old age.

    Science.gov (United States)

    Ihle, Andreas; Oris, Michel; Fagot, Delphine; Kliegel, Matthias

    2016-12-01

    Findings on the association of speaking different languages with cognitive functioning in old age are inconsistent and inconclusive so far. Therefore, the present study set out to investigate the relation of the number of languages spoken to cognitive performance and its interplay with several other markers of cognitive reserve in a large sample of older adults. Two thousand eight hundred and twelve older adults served as the sample for the present study. Psychometric tests on verbal abilities, basic processing speed, and cognitive flexibility were administered. In addition, individuals were interviewed on the different languages they spoke on a regular basis, educational attainment, occupation, and engagement in different activities throughout adulthood. A higher number of languages regularly spoken was significantly associated with better performance in verbal abilities and processing speed, but unrelated to cognitive flexibility. Regression analyses showed that the number of languages spoken predicted cognitive performance over and above leisure activities/physical demand of job/gainful activity as respective additional predictor, but not over and above educational attainment/cognitive level of job as respective additional predictor. There was no significant moderation of the association of the number of languages spoken with cognitive performance in any model. Present data suggest that speaking different languages on a regular basis may additionally contribute to the build-up of cognitive reserve in old age. Yet, this may not be universal, but linked to verbal abilities and basic cognitive processing speed. Moreover, it may be dependent on other types of cognitive stimulation that individuals also engaged in during their life course.
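
    The "over and above" logic in regression analyses like these amounts to comparing R² between nested models: fit the covariate alone, then add the predictor of interest and measure the gain. The sketch below illustrates that comparison on synthetic data; the variable names, effect sizes, and sample size are invented for illustration and do not reproduce the study.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Synthetic stand-ins for the study's variables.
education = rng.normal(size=n)
n_languages = rng.poisson(1.5, size=n).astype(float)
verbal_ability = 0.5 * education + 0.3 * n_languages + rng.normal(size=n)

def r_squared(predictors, y):
    """R^2 of an ordinary least-squares fit with an intercept."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - resid.var() / y.var()

r2_base = r_squared([education], verbal_ability)
r2_full = r_squared([education, n_languages], verbal_ability)
print(f"R^2 gain from number of languages: {r2_full - r2_base:.3f}")
```

A positive R² gain in the full model is what the abstract means by the number of languages predicting performance "over and above" the additional predictor; when the gain vanishes (as with educational attainment in the study), the predictor adds no unique variance.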

  4. Influence of Spoken Language on the Initial Acquisition of Reading/Writing: Critical Analysis of Verbal Deficit Theory

    Science.gov (United States)

    Ramos-Sanchez, Jose Luis; Cuadrado-Gordillo, Isabel

    2004-01-01

    This article presents the results of a quasi-experimental study of whether there exists a causal relationship between spoken language and the initial learning of reading/writing. The subjects were two matched samples each of 24 preschool pupils (boys and girls), controlling for certain relevant external variables. It was found that there was no…

  5. Semantic Richness and Word Learning in Children with Hearing Loss Who Are Developing Spoken Language: A Single Case Design Study

    Science.gov (United States)

    Lund, Emily; Douglas, W. Michael; Schuele, C. Melanie

    2015-01-01

    Children with hearing loss who are developing spoken language tend to lag behind children with normal hearing in vocabulary knowledge. Thus, researchers must validate instructional practices that lead to improved vocabulary outcomes for children with hearing loss. The purpose of this study was to investigate how semantic richness of instruction…

  6. The Language, Tone and Prosody of Emotions: Neural Substrates and Dynamics of Spoken-Word Emotion Perception.

    Science.gov (United States)

    Liebenthal, Einat; Silbersweig, David A; Stern, Emily

    2016-01-01

    Rapid assessment of emotions is important for detecting and prioritizing salient input. Emotions are conveyed in spoken words via verbal and non-verbal channels that are mutually informative and unfold in parallel over time, but the neural dynamics and interactions of these processes are not well understood. In this paper, we review the literature on emotion perception in faces, written words, and voices, as a basis for understanding the functional organization of emotion perception in spoken words. The characteristics of visual and auditory routes to the amygdala, a subcortical center for emotion perception, are compared across these stimulus classes in terms of neural dynamics, hemispheric lateralization, and functionality. Converging results from neuroimaging, electrophysiological, and lesion studies suggest the existence of an afferent route to the amygdala and primary visual cortex for fast and subliminal processing of coarse emotional face cues. We suggest that a fast route to the amygdala may also function for brief non-verbal vocalizations (e.g., laughs, cries), in which emotional category is conveyed effectively by voice tone and intensity. However, emotional prosody, which evolves on longer time scales and is conveyed by fine-grained spectral cues, appears to be processed via a slower, indirect cortical route. For verbal emotional content, the bulk of current evidence, indicating predominant left lateralization of the amygdala response and timing of emotional effects attributable to speeded lexical access, is more consistent with an indirect cortical route to the amygdala. Top-down linguistic modulation may play an important role in prioritized perception of emotions in words. Understanding the neural dynamics and interactions of emotion and language perception is important for selecting potent stimuli and devising effective training and/or treatment approaches for the alleviation of emotional dysfunction across a range of neuropsychiatric states.

  7. Semantic Relations Cause Interference in Spoken Language Comprehension When Using Repeated Definite References, Not Pronouns.

    Science.gov (United States)

    Peters, Sara A; Boiteau, Timothy W; Almor, Amit

    2016-01-01

    The choice and processing of referential expressions depend on the referents' status within the discourse, such that pronouns are generally preferred over full repetitive references when the referent is salient. Here we report two visual-world experiments showing that: (1) in spoken language comprehension, this preference is reflected in delayed fixations to referents mentioned after repeated definite references compared with after pronouns; (2) repeated references are processed differently than new references; (3) long-term semantic memory representations affect the processing of pronouns and repeated names differently. Overall, these results support the role of semantic discourse representation in referential processing and reveal important details about how pronouns and full repeated references are processed in the context of these representations. The results suggest the need for modifications to current theoretical accounts of reference processing such as Discourse Prominence Theory and the Informational Load Hypothesis.

  8. About Development and Innovation of the Slovak Spoken Language Dialogue System

    Directory of Open Access Journals (Sweden)

    Jozef Juhár

    2009-05-01

    The research and development of the Slovak spoken language dialogue system (SLDS) is described in the paper. The dialogue system is based on the DARPA Communicator architecture and was developed in the period from July 2003 to June 2006. It consists of the Galaxy hub and telephony, automatic speech recognition, text-to-speech, backend, transport and VoiceXML dialogue management and automatic evaluation modules. The dialogue system is demonstrated and tested via two pilot applications, "Weather Forecast" and "Public Transport Timetables". The required information is retrieved from Internet resources in multi-user mode through PSTN, ISDN, GSM and/or VoIP networks. Some innovations have been made since 2006; these are also described in the paper.

  9. Grammatical awareness in the spoken and written language of language-disabled children.

    Science.gov (United States)

    Rubin, H; Kantor, M; Macnab, J

    1990-12-01

    Experiments examined grammatical judgement and error-identification deficits in relation to expressive language skills and to morphemic errors in writing. Language-disabled subjects did not differ from language-matched controls on judgement, revision, or error identification. Age-matched controls represented more morphemes in elicited writing than either of the other groups, which were equivalent. However, in spontaneous writing, language-disabled subjects made more frequent morphemic errors than age-matched controls, but language-matched subjects did not differ from either group. Proficiency relative to academic experience and oral language status is discussed, along with remedial implications.

  10. Emergent Literacy Skills in Preschool Children With Hearing Loss Who Use Spoken Language: Initial Findings From the Early Language and Literacy Acquisition (ELLA) Study.

    Science.gov (United States)

    Werfel, Krystal L

    2017-10-05

    The purpose of this study was to compare change in emergent literacy skills of preschool children with and without hearing loss over a 6-month period. Participants included 19 children with hearing loss and 14 children with normal hearing. Children with hearing loss used amplification and spoken language. Participants completed measures of oral language, phonological processing, and print knowledge twice at a 6-month interval. A series of repeated-measures analyses of variance were used to compare change across groups. Main effects of time were observed for all variables except phonological recoding. Main effects of group were observed for vocabulary, morphosyntax, phonological memory, and concepts of print. Interaction effects were observed for phonological awareness and concepts of print. Children with hearing loss performed more poorly than children with normal hearing on measures of oral language, phonological memory, and conceptual print knowledge. Two interaction effects were present. For phonological awareness and concepts of print, children with hearing loss demonstrated less positive change than children with normal hearing. Although children with hearing loss generally demonstrated positive growth in emergent literacy skills, their initial performance was lower than that of children with normal hearing, and rates of change were not sufficient to catch up to their peers over time.

  11. Foreign body aspiration and language spoken at home: 10-year review.

    Science.gov (United States)

    Choroomi, S; Curotta, J

    2011-07-01

    To review foreign body aspiration cases encountered over a 10-year period in a tertiary paediatric hospital, and to assess correlation between foreign body type and language spoken at home. Retrospective chart review of all children undergoing direct laryngobronchoscopy for foreign body aspiration over a 10-year period. Age, sex, foreign body type, complications, hospital stay and home language were analysed. At direct laryngobronchoscopy, 132 children had foreign body aspiration (male:female ratio 1.31:1; mean age 32 months (2.67 years)). Mean hospital stay was 2.0 days. Foreign bodies most commonly comprised food matter (53/132; 40.1 per cent), followed by non-food matter (44/132; 33.33 per cent), a negative endoscopy (11/132; 8.33 per cent) and unknown composition (24/132; 18.2 per cent). Most parents spoke English (92/132, 69.7 per cent; vs non-English-speaking 40/132, 30.3 per cent), but non-English-speaking patients had disproportionately more food foreign bodies, and significantly more nut aspirations (p = 0.0065). Results constitute level 2b evidence. Patients from non-English speaking backgrounds had a significantly higher incidence of food (particularly nut) aspiration. Awareness-raising and public education is needed in relevant communities to prevent certain foods, particularly nuts, being given to children too young to chew and swallow them adequately.

  12. A common neural system is activated in hearing non-signers to process French sign language and spoken French.

    Science.gov (United States)

    Courtin, Cyril; Jobard, Gael; Vigneau, Mathieu; Beaucousin, Virginie; Razafimandimby, Annick; Hervé, Pierre-Yves; Mellet, Emmanuel; Zago, Laure; Petit, Laurent; Mazoyer, Bernard; Tzourio-Mazoyer, Nathalie

    2011-01-15

    We used functional magnetic resonance imaging to investigate the areas activated by signed narratives in non-signing subjects naïve to sign language (SL) and compared them to the activation obtained when the subjects heard speech in their mother tongue. A subset of left hemisphere (LH) language areas activated when participants watched an audio-visual narrative in their mother tongue was also activated when they observed a signed narrative. The inferior frontal (IFG) and precentral (Prec) gyri, the posterior parts of the planum temporale (pPT) and of the superior temporal sulcus (pSTS), and the occipito-temporal junction (OTJ) were activated by both languages. The activity of these regions was not related to the presence of communicative intent, because no such changes were observed when the non-signers watched a muted video of a spoken narrative. Recruitment was also not triggered by the linguistic structure of SL, because the areas, except pPT, were not activated when subjects listened to an unknown spoken language. The comparison of brain reactivity for spoken and sign languages shows that SL has a special status in the brain compared to speech; in contrast to an unknown oral language, the neural correlates of SL overlap LH speech comprehension areas in non-signers. These results support the idea that strong relationships exist between areas involved in human action observation and language, suggesting that the observation of hand gestures has shaped the lexico-semantic language areas, as proposed by the motor theory of speech. As a whole, the present results support the theory of a gestural origin of language.

  13. Verbal short-term memory development and spoken language outcomes in deaf children with cochlear implants.

    Science.gov (United States)

    Harris, Michael S; Kronenberger, William G; Gao, Sujuan; Hoen, Helena M; Miyamoto, Richard T; Pisoni, David B

    2013-01-01

    Cochlear implants (CIs) help many deaf children achieve near-normal speech and language (S/L) milestones. Nevertheless, high levels of unexplained variability in S/L outcomes are limiting factors in improving the effectiveness of CIs in deaf children. The objective of this study was to longitudinally assess the role of verbal short-term memory (STM) and working memory (WM) capacity as a progress-limiting source of variability in S/L outcomes after CI in children. Longitudinal study of 66 children with CIs for prelingual severe-to-profound hearing loss. Outcome measures included performance on digit span forward (DSF), digit span backward (DSB), and four conventional S/L measures that examined spoken-word recognition (Phonetically Balanced Kindergarten word test), receptive vocabulary (Peabody Picture Vocabulary Test), sentence-recognition skills (Hearing in Noise Test), and receptive and expressive language functioning (Clinical Evaluation of Language Fundamentals Fourth Edition Core Language Score; CELF). Growth curves for DSF and DSB in the CI sample over time were comparable in slope, but consistently lagged in magnitude relative to norms for normal-hearing peers of the same age. For DSF and DSB, 50.5% and 44.0%, respectively, of the CI sample scored more than 1 SD below the normative mean for raw scores across all ages. The first (baseline) DSF score significantly predicted all endpoint scores for the four S/L measures, and DSF slope (growth) over time predicted CELF scores. DSF baseline and slope accounted for an additional 13 to 31% of variance in S/L scores after controlling for conventional predictor variables such as chronological age at time of testing, age at time of implantation, communication mode (auditory-oral communication versus total communication), and maternal education. Only DSB baseline scores predicted endpoint language scores on the Peabody Picture Vocabulary Test and the CELF; DSB slopes were not significantly related to any endpoint S/L measures.

  14. Spoken language identification based on the enhanced self-adjusting extreme learning machine approach.

    Science.gov (United States)

    Albadr, Musatafa Abbas Abbood; Tiun, Sabrina; Al-Dhief, Fahad Taha; Sammour, Mahmoud A M

    2018-01-01

    Spoken Language Identification (LID) is the process of determining and classifying the natural language in a given utterance or dataset. Typically, the data must be processed to extract useful features for LID. According to the literature, feature extraction for LID is a mature process: standard features have been developed, from Mel-Frequency Cepstral Coefficients (MFCC) and Shifted Delta Cepstral (SDC) coefficients, through the Gaussian Mixture Model (GMM), to the i-vector based framework. However, the learning process that operates on the extracted features can still be improved (i.e. optimised) to capture all the knowledge embedded in those features. The Extreme Learning Machine (ELM) is an effective learning model for classification and regression analysis, and is particularly useful for training a single-hidden-layer neural network. Nevertheless, its learning process is not entirely effective (i.e. optimised) because the weights between the input and hidden layer are selected at random. In this study, the ELM is selected as the learning model for LID based on standard feature extraction. One optimisation of the ELM, the Self-Adjusting Extreme Learning Machine (SA-ELM), is taken as the benchmark and improved by altering the selection phase of the optimisation process: selection is performed with a combination of the Split-Ratio and K-Tournament methods. The improved SA-ELM is named the Enhanced Self-Adjusting Extreme Learning Machine (ESA-ELM). Results were generated on datasets created from eight different languages and show a clear advantage for ESA-ELM LID over SA-ELM LID, with accuracies of 96.25% and 95.00%, respectively.
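
    The basic ELM training step described above (random, fixed input-to-hidden weights; only the output weights solved analytically) can be sketched in a few lines. This is a generic illustration on synthetic two-class data, not the authors' ESA-ELM; the feature dimensionality, hidden-layer size and toy "languages" are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_train(X, Y, n_hidden=50, rng=rng):
    """Train a basic Extreme Learning Machine: input weights are random
    and fixed; only the output weights are solved for analytically."""
    n_features = X.shape[1]
    W = rng.normal(size=(n_features, n_hidden))   # random input weights (never trained)
    b = rng.normal(size=n_hidden)                 # random hidden biases
    H = np.tanh(X @ W + b)                        # hidden-layer activations
    beta = np.linalg.pinv(H) @ Y                  # least-squares output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy stand-in for language-ID feature vectors (e.g. averaged MFCC/SDC):
# two synthetic "languages" drawn from shifted Gaussians.
X0 = rng.normal(loc=-1.0, size=(100, 12))
X1 = rng.normal(loc=+1.0, size=(100, 12))
X = np.vstack([X0, X1])
Y = np.vstack([np.tile([1.0, 0.0], (100, 1)),
               np.tile([0.0, 1.0], (100, 1))])   # one-hot language labels

W, b, beta = elm_train(X, Y)
pred = elm_predict(X, W, b, beta).argmax(axis=1)
accuracy = (pred == Y.argmax(axis=1)).mean()
```

    The optimisation work in SA-ELM and ESA-ELM targets exactly the weakness visible here: W and b are never trained, so classification quality depends on how good the random draw happens to be.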

  15. Emergent Literacy Skills in Preschool Children with Hearing Loss Who Use Spoken Language: Initial Findings from the Early Language and Literacy Acquisition (ELLA) Study

    Science.gov (United States)

    Werfel, Krystal L.

    2017-01-01

    Purpose: The purpose of this study was to compare change in emergent literacy skills of preschool children with and without hearing loss over a 6-month period. Method: Participants included 19 children with hearing loss and 14 children with normal hearing. Children with hearing loss used amplification and spoken language. Participants completed…

  16. Children’s recall of words spoken in their first and second language: Effects of signal-to-noise ratio and reverberation time

    Directory of Open Access Journals (Sweden)

    Anders Hurtig

    2016-01-01

    Full Text Available Speech perception runs smoothly and automatically when there is silence in the background, but when the speech signal is degraded by background noise or by reverberation, effortful cognitive processing is needed to compensate for the signal distortion. Previous research has typically investigated the effects of signal-to-noise ratio (SNR) and reverberation time in isolation, whilst few studies have looked at their interaction. In this study, we probed how reverberation time and SNR influence recall of words presented in participants’ first (L1) and second language (L2). A total of 72 children (10 years old) participated in this study. The to-be-recalled wordlists were played back with two different reverberation times (0.3 and 1.2 sec) crossed with two different SNRs (+3 dBA and +12 dBA). Children recalled fewer words when the spoken words were presented in L2 than in L1. Words presented with a high SNR (+12 dBA) were recalled better than words presented with a low SNR (+3 dBA). Reverberation time interacted with SNR: at +12 dBA the shorter reverberation time improved recall, but at +3 dBA it impaired recall. The effects of the physical sound variables (SNR and reverberation time) did not interact with language.
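
    The two SNR conditions above are simple level relationships between speech and noise. As a rough sketch (using plain, unweighted sample power; the study's dBA figures are A-weighted, which this example ignores), a noise track can be rescaled to hit a target SNR before mixing:

```python
import numpy as np

rng = np.random.default_rng(1)

def snr_db(signal, noise):
    """Signal-to-noise ratio in dB from mean sample power."""
    return 10 * np.log10(np.mean(signal**2) / np.mean(noise**2))

def scale_noise_to_snr(signal, noise, target_db):
    """Rescale `noise` so that mixing it with `signal` yields `target_db` SNR."""
    current = snr_db(signal, noise)
    # 20 dB of required SNR change corresponds to a 10x noise-amplitude factor.
    factor = 10 ** ((current - target_db) / 20)
    return noise * factor

speech = rng.normal(size=16000)          # 1 s of surrogate "speech" at 16 kHz
babble = rng.normal(size=16000) * 0.2    # surrogate background noise

noise_3db = scale_noise_to_snr(speech, babble, 3.0)
noise_12db = scale_noise_to_snr(speech, babble, 12.0)

print(round(snr_db(speech, noise_3db), 1))   # → 3.0
print(round(snr_db(speech, noise_12db), 1))  # → 12.0
```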

  17. Birds, primates, and spoken language origins: behavioral phenotypes and neurobiological substrates.

    Science.gov (United States)

    Petkov, Christopher I; Jarvis, Erich D

    2012-01-01

    Vocal learners such as humans and songbirds can learn to produce elaborate patterns of structurally organized vocalizations, whereas many other vertebrates such as non-human primates and most other bird groups either cannot or do so to a very limited degree. To explain the similarities among humans and vocal-learning birds and the differences with other species, various theories have been proposed. One set of theories are motor theories, which underscore the role of the motor system as an evolutionary substrate for vocal production learning. For instance, the motor theory of speech and song perception proposes enhanced auditory perceptual learning of speech in humans and song in birds, which suggests a considerable level of neurobiological specialization. Another, a motor theory of vocal learning origin, proposes that the brain pathways that control the learning and production of song and speech were derived from adjacent motor brain pathways. Another set of theories are cognitive theories, which address the interface between cognition and the auditory-vocal domains to support language learning in humans. Here we critically review the behavioral and neurobiological evidence for parallels and differences between the so-called vocal learners and vocal non-learners in the context of motor and cognitive theories. In doing so, we note that, behaviorally, vocal-production learning abilities are more distributed than categorical, as are the auditory-learning abilities of animals. We propose testable hypotheses on the extent of the specializations and cross-species correspondences suggested by motor and cognitive theories. We believe that determining how spoken language evolved is likely to become clearer with concerted efforts in testing comparative data from many non-human animal species.

  19. General language performance measures in spoken and written narrative and expository discourse of school-age children with language learning disabilities.

    Science.gov (United States)

    Scott, C M; Windsor, J

    2000-04-01

    Language performance in naturalistic contexts can be characterized by general measures of productivity, fluency, lexical diversity, and grammatical complexity and accuracy. The use of such measures as indices of language impairment in older children is open to questions of method and interpretation. This study evaluated the extent to which 10 general language performance measures (GLPM) differentiated school-age children with language learning disabilities (LLD) from chronological-age (CA) and language-age (LA) peers. Children produced both spoken and written summaries of two educational videotapes that provided models of either narrative or expository (informational) discourse. Productivity measures, including total T-units, total words, and words per minute, were significantly lower for children with LLD than for CA children. Fluency (percent T-units with mazes) and lexical diversity (number of different words) measures were similar for all children. Grammatical complexity as measured by words per T-unit was significantly lower for LLD children. However, there was no difference among groups for clauses per T-unit. The only measure that distinguished children with LLD from both CA and LA peers was the extent of grammatical error. Effects of discourse genre and modality were consistent across groups. Compared to narratives, expository summaries were shorter, less fluent (spoken versions), more complex (words per T-unit), and more error prone. Written summaries were shorter and had more errors than spoken versions. For many LLD and LA children, expository writing was exceedingly difficult. Implications for accounts of language impairment in older children are discussed.
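
    Several of the general language performance measures named above are straightforward to compute once a sample has been segmented into T-units. A minimal sketch (the segmentation itself, the hard part, is assumed to have been done by hand; the sample T-units are invented for illustration):

```python
import re

def glpm_measures(t_units):
    """Compute a few general language performance measures from a sample
    already segmented into T-units (a main clause plus its subordinates)."""
    words = [w.lower() for t in t_units for w in re.findall(r"[a-zA-Z']+", t)]
    return {
        "total_t_units": len(t_units),
        "total_words": len(words),
        "ndw": len(set(words)),                         # number of different words (lexical diversity)
        "words_per_t_unit": len(words) / len(t_units),  # a grammatical-complexity proxy
    }

sample = [
    "the frog escaped from the jar while the boy was sleeping",
    "the boy and the dog looked everywhere",
    "they found the frog near the pond",
]
m = glpm_measures(sample)
print(m["total_words"], m["ndw"], round(m["words_per_t_unit"], 2))  # → 25 17 8.33
```

    Measures such as words per minute or percent T-units with mazes would additionally require timing information and maze annotation in the transcript.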

  20. THE INFLUENCE OF LANGUAGE USE AND LANGUAGE ATTITUDE ON THE MAINTENANCE OF COMMUNITY LANGUAGES SPOKEN BY MIGRANT STUDENTS

    Directory of Open Access Journals (Sweden)

    Leni Amalia Suek

    2014-05-01

    Full Text Available The maintenance of migrant students' community languages is heavily determined by language use and language attitudes. The dominance of a majority language over a community language shapes migrant students' attitudes toward their native languages: when they perceive their native language as unimportant, they reduce the frequency with which they use it, even in the home domain. Solutions to the problem of maintaining community languages should therefore address language use and language attitudes, which develop mostly in two important domains: school and family. Hence, the valorization of community languages should be promoted not only in the family but also in the school domain. Programs such as community language schools and community language programs can give migrant students opportunities to practice and use their native languages. Since educational resources such as class sessions, teachers and government support are limited, the family plays a significant role in stimulating positive attitudes toward the community language and in developing the use of the native language.

  1. How vocabulary size in two languages relates to efficiency in spoken word recognition by young Spanish-English bilinguals.

    Science.gov (United States)

    Marchman, Virginia A; Fernald, Anne; Hurtado, Nereyda

    2010-09-01

    Research using online comprehension measures with monolingual children shows that speed and accuracy of spoken word recognition are correlated with lexical development. Here we examined speech processing efficiency in relation to vocabulary development in bilingual children learning both Spanish and English (n=26; 2;6). Between-language associations were weak: vocabulary size in Spanish was uncorrelated with vocabulary in English, and children's facility in online comprehension in Spanish was unrelated to their facility in English. Instead, efficiency of online processing in one language was significantly related to vocabulary size in that language, after controlling for processing speed and vocabulary size in the other language. These links between efficiency of lexical access and vocabulary knowledge in bilinguals parallel those previously reported for Spanish and English monolinguals, suggesting that children's ability to abstract information from the input in building a working lexicon relates fundamentally to mechanisms underlying the construction of language.
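
    The "after controlling for" step above is a partial correlation. A minimal sketch of one way to compute it, by residualising both variables on the control variables and correlating the residuals (all data here are synthetic and the variable names are invented for illustration):

```python
import numpy as np

def partial_corr(x, y, covariates):
    """Correlation between x and y after regressing out the covariates
    (an iterable of 1-D arrays) from both, i.e. a partial correlation."""
    Z = np.column_stack([np.ones(len(x))] + list(covariates))
    rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]   # residuals of x
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]   # residuals of y
    return np.corrcoef(rx, ry)[0, 1]

rng = np.random.default_rng(2)
n = 26  # same sample size as the study; the data are entirely synthetic
other_vocab = rng.normal(size=n)   # vocabulary size in the other language
speed = rng.normal(size=n)         # processing speed
# Simulate processing efficiency and vocabulary in one language sharing
# variance beyond what the control variables explain.
efficiency = 0.8 * rng.normal(size=n) + 0.3 * speed
vocab = 0.9 * efficiency + 0.2 * other_vocab + 0.3 * rng.normal(size=n)

r = partial_corr(efficiency, vocab, [speed, other_vocab])
```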

  2. Oral narrative context effects on poor readers' spoken language performance: story retelling, story generation, and personal narratives.

    Science.gov (United States)

    Westerveld, Marleen F; Gillon, Gail T

    2010-04-01

    This investigation explored the effects of oral narrative elicitation context on children's spoken language performance. Oral narratives were produced by a group of 11 children with reading disability (aged between 7;11 and 9;3) and an age-matched control group of 11 children with typical reading skills in three different contexts: story retelling, story generation, and personal narratives. In the story retelling condition, the children listened to a story on tape while looking at the pictures in a book, before being asked to retell the story without the pictures. In the story generation context, the children were shown a picture containing a scene and were asked to make up their own story. Personal narratives were elicited with the help of photos and short narrative prompts. The transcripts were analysed at microstructure level on measures of verbal productivity, semantic diversity, and morphosyntax. Consistent with previous research, the results revealed no significant interactions between group and context, indicating that the two groups of children responded to the type of elicitation context in a similar way. There was a significant group effect, however, with the typical readers showing better performance overall on measures of morphosyntax and semantic diversity. There was also a significant effect of elicitation context with both groups of children producing the longest, linguistically most dense language samples in the story retelling context. Finally, the most significant differences in group performance were observed in the story retelling condition, with the typical readers outperforming the poor readers on measures of verbal productivity, number of different words, and percent complex sentences. The results from this study confirm that oral narrative samples can distinguish between good and poor readers and that the story retelling condition may be a particularly useful context for identifying strengths and weaknesses in oral narrative performance.

  3. Usable, Real-Time, Interactive Spoken Language Systems

    Science.gov (United States)

    1992-09-01

    Workshop at Arden House, February 23-26, 1992. Francis Kubala, et al., "BBN BYBLOS and HARC February 1992 ATIS Benchmark Results", 5th DARPA Speech..., presented at ICASSP, 1992. Richard Schwartz, Steve Austin, Francis Kubala, John Makhoul, Long Nguyen, Paul Placeway, George Zavaliagkos, Northeastern... of the DARPA Common Lexicon Working Group at the 5th DARPA Speech & NL Workshop at Arden House, February 23-26, 1992. Francis Kubala is chairing the

  4. The role of planum temporale in processing accent variation in spoken language comprehension.

    NARCIS (Netherlands)

    Adank, P.M.; Noordzij, M.L.; Hagoort, P.

    2012-01-01

    A repetition-suppression functional magnetic resonance imaging paradigm was used to explore the neuroanatomical substrates of processing two types of acoustic variation (speaker and accent) during spoken sentence comprehension. Recordings were made for two speakers and two accents: Standard Dutch and a…

  5. EEG decoding of spoken words in bilingual listeners: from words to language invariant semantic-conceptual representations

    Directory of Open Access Journals (Sweden)

    João Mendonça Correia

    2015-02-01

    Full Text Available Spoken word recognition and production require fast transformations between acoustic, phonological and conceptual neural representations. Bilinguals perform these transformations in native and non-native languages, deriving unified semantic concepts from equivalent, but acoustically different, words. Here we exploit this capacity of bilinguals to investigate input-invariant semantic representations in the brain. We acquired EEG data while Dutch subjects, highly proficient in English, listened to four monosyllabic and acoustically distinct animal words in both languages (e.g. ‘paard’–‘horse’). Multivariate pattern analysis (MVPA) was applied to identify EEG response patterns that discriminate between individual words within one language (within-language discrimination) and that generalize meaning across two languages (across-language generalization). Furthermore, employing two EEG feature selection approaches, we assessed the contribution of temporal and oscillatory EEG features to our classification results. MVPA revealed that within-language discrimination was possible in a broad time-window (~50-620 ms after word onset), probably reflecting acoustic-phonetic and semantic-conceptual differences between the words. Most interestingly, significant across-language generalization was possible around 550-600 ms, suggesting the activation of common semantic-conceptual representations from the Dutch and English nouns. Both types of classification showed a strong contribution of oscillations below 12 Hz, indicating the importance of low-frequency oscillations in the neural representation of individual words and concepts. This study demonstrates the feasibility of MVPA to decode individual spoken words from EEG responses and to assess the spectro-temporal dynamics of their language-invariant semantic-conceptual representations. We discuss how this method and results could be relevant to track the neural mechanisms underlying conceptual encoding in…
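
    The across-language generalization analysis described above trains a classifier on brain responses to words in one language and tests it on responses to the translation equivalents. A minimal nearest-centroid sketch of that logic on synthetic "response patterns" (this is an illustration of the cross-decoding scheme only, not the authors' MVPA pipeline; the dimensions and pattern model are invented):

```python
import numpy as np

rng = np.random.default_rng(3)
n_trials, n_features = 40, 32                   # trials per word, feature dimension (synthetic)
concepts = ["horse", "duck", "bear", "shark"]   # animal nouns, as in the study

# Synthetic model: each concept has a language-invariant pattern plus a
# language-specific (acoustic) component and per-trial noise.
concept_pat = {c: rng.normal(size=n_features) for c in concepts}
lang_pat = {"nl": rng.normal(size=n_features), "en": rng.normal(size=n_features)}

def trials(concept, lang):
    base = concept_pat[concept] + lang_pat[lang]
    return base + rng.normal(scale=1.0, size=(n_trials, n_features))

train = {c: trials(c, "nl") for c in concepts}   # train on Dutch words
test = {c: trials(c, "en") for c in concepts}    # test on English equivalents

# Nearest-centroid decoding: a test trial is assigned the concept whose
# Dutch training centroid is closest.
centroids = {c: train[c].mean(axis=0) for c in concepts}

correct = total = 0
for c in concepts:
    for trial in test[c]:
        pred = min(centroids, key=lambda k: np.linalg.norm(trial - centroids[k]))
        correct += pred == c
        total += 1
accuracy = correct / total  # above chance (0.25) only if concept patterns generalize
```

    If the language-invariant component is removed from the synthetic model, accuracy falls to chance, which is the logic behind interpreting above-chance across-language generalization as evidence for shared semantic-conceptual representations.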

  6. Reliability and validity of the C-BiLLT: a new instrument to assess comprehension of spoken language in young children with cerebral palsy and complex communication needs.

    Science.gov (United States)

    Geytenbeek, Joke J; Mokkink, Lidwine B; Knol, Dirk L; Vermeulen, R Jeroen; Oostrom, Kim J

    2014-09-01

    In clinical practice, a variety of diagnostic tests are available to assess a child's comprehension of spoken language. However, none of these tests have been designed specifically for use with children who have severe motor impairments and who experience severe difficulty when using speech to communicate. This article describes the process of investigating the reliability and validity of the Computer-Based Instrument for Low Motor Language Testing (C-BiLLT), which was specifically developed to assess spoken Dutch language comprehension in children with cerebral palsy and complex communication needs. The study included 806 children with typical development, and 87 nonspeaking children with cerebral palsy and complex communication needs, and was designed to provide information on the psychometric qualities of the C-BiLLT. The potential utility of the C-BiLLT as a measure of spoken Dutch language comprehension abilities for children with cerebral palsy and complex communication needs is discussed.

  7. Conversational interfaces for task-oriented spoken dialogues: design aspects influencing interaction quality

    NARCIS (Netherlands)

    Niculescu, A.I.

    2011-01-01

    This dissertation focuses on the design and evaluation of speech-based conversational interfaces for task-oriented dialogues. Conversational interfaces are software programs enabling interaction with computer devices through natural language dialogue. Even though processing conversational speech is

  8. How Does the Linguistic Distance between Spoken and Standard Language in Arabic Affect Recall and Recognition Performances during Verbal Memory Examination

    Science.gov (United States)

    Taha, Haitham

    2017-01-01

    The current research examined how Arabic diglossia affects verbal learning memory. Thirty native Arab college students were tested using auditory verbal memory test that was adapted according to the Rey Auditory Verbal Learning Test and developed in three versions: Pure spoken language version (SL), pure standard language version (SA), and…

  9. How appropriate are the English language test requirements for non-UK-trained nurses? A qualitative study of spoken communication in UK hospitals.

    Science.gov (United States)

    Sedgwick, Carole; Garner, Mark

    2017-06-01

    Non-native speakers of English who hold nursing qualifications from outside the UK are required to provide evidence of English language competence by achieving a minimum overall score of Band 7 on the International English Language Testing System (IELTS) academic test. To describe the English language required to deal with the daily demands of nursing in the UK. To compare these abilities with the stipulated levels on the language test. A tracking study was conducted with 4 nurses, and focus groups with 11 further nurses. The transcripts of the interviews and focus groups were analysed thematically for recurrent themes. These findings were then compared with the requirements of the IELTS spoken test. The study was conducted outside the participants' working shifts in busy London hospitals. The participants in the tracking study were selected opportunistically; all were trained in non-English speaking countries. Snowball sampling was used for the focus groups, of whom 4 were non-native and 7 native speakers of English. In the tracking study, each of the 4 nurses was interviewed on four occasions, outside the workplace, and as close to the end of a shift as possible. They were asked to recount their spoken interactions during the course of their shift. The participants in the focus groups were asked to describe their typical interactions with patients, family members, doctors, and nursing colleagues. They were prompted to recall specific instances of frequently-occurring communication problems. All interactions were audio-recorded, with the participants' permission, and transcribed. Nurses are at the centre of communication for patient care. They have to use appropriate registers to communicate with a range of health professionals, patients and their families. They must elicit information, calm and reassure, instruct, check procedures, ask for and give opinions, agree and disagree. Politeness strategies are needed to avoid threats to face. They participate in medical…

  10. KANNADA--A CULTURAL INTRODUCTION TO THE SPOKEN STYLES OF THE LANGUAGE.

    Science.gov (United States)

    KRISHNAMURTHI, M.G.; MCCORMACK, WILLIAM

    THE TWENTY GRADED UNITS IN THIS TEXT CONSTITUTE AN INTRODUCTION TO BOTH INFORMAL AND FORMAL SPOKEN KANNADA. THE FIRST TWO UNITS PRESENT THE KANNADA MATERIAL IN PHONETIC TRANSCRIPTION ONLY, WITH KANNADA SCRIPT GRADUALLY INTRODUCED FROM UNIT III ON. A TYPICAL LESSON-UNIT INCLUDES--(1) A DIALOG IN PHONETIC TRANSCRIPTION AND ENGLISH TRANSLATION, (2)…

  12. Chunk Learning and the Development of Spoken Discourse in a Japanese as a Foreign Language Classroom

    Science.gov (United States)

    Taguchi, Naoko

    2007-01-01

    This study examined the development of spoken discourse among L2 learners of Japanese who received extensive practice on grammatical chunks. Participants in this study were 22 college students enrolled in an elementary Japanese course. They received instruction on a set of grammatical chunks in class through communicative drills and the…

  13. Brain-based translation: fMRI decoding of spoken words in bilinguals reveals language-independent semantic representations in anterior temporal lobe.

    Science.gov (United States)

    Correia, João; Formisano, Elia; Valente, Giancarlo; Hausfeld, Lars; Jansma, Bernadette; Bonte, Milene

    2014-01-01

    Bilinguals derive the same semantic concepts from equivalent, but acoustically different, words in their first and second languages. The neural mechanisms underlying the representation of language-independent concepts in the brain remain unclear. Here, we measured fMRI in human bilingual listeners and reveal that response patterns to individual spoken nouns in one language (e.g., "horse" in English) accurately predict the response patterns to equivalent nouns in the other language (e.g., "paard" in Dutch). Stimuli were four monosyllabic words in both languages, all from the category of "animal" nouns. For each word, pronunciations from three different speakers were included, allowing the investigation of speaker-independent representations of individual words. We used multivariate classifiers and a searchlight method to map the informative fMRI response patterns that enable decoding spoken words within languages (within-language discrimination) and across languages (across-language generalization). Response patterns discriminative of spoken words within language were distributed in multiple cortical regions, reflecting the complexity of the neural networks recruited during speech and language processing. Response patterns discriminative of spoken words across language were limited to localized clusters in the left anterior temporal lobe, the left angular gyrus and the posterior bank of the left postcentral gyrus, the right posterior superior temporal sulcus/superior temporal gyrus, the right medial anterior temporal lobe, the right anterior insula, and bilateral occipital cortex. These results corroborate the existence of "hub" regions organizing semantic-conceptual knowledge in abstract form at the fine-grained level of within semantic category discriminations.

  14. Interactive Language Learning through Speech-Enabled Virtual Scenarios

    Directory of Open Access Journals (Sweden)

    Hazel Morton

    2012-01-01

    Full Text Available This paper describes the evaluation of an educational game designed to give learners of foreign languages the opportunity to practice their spoken language skills. Within the speech-interactive Computer-Assisted Language Learning (CALL) program, scenarios are presented in which learners interact with virtual characters in the target language using speech recognition technology. Two types of interactive scenario with virtual characters are presented as part of the game: one-to-one scenarios, which take the form of practice question-and-answer exchanges in which the learner interacts with a single virtual character, and an interactive scenario, an immersive contextualised scene in which the learner interacts with two or more virtual characters to complete a task-based communicative goal. The study presented here compares learners’ subjective attitudes towards the different scenarios. In addition, the study investigates the performance of the speech recognition component in this game. Forty-eight students of English as a Foreign Language (EFL) took part in the evaluation. Results indicate that learners’ subjective ratings for the contextualised interactive scenario are higher than for the one-to-one practice scenarios. In addition, recognition performance was better for these interactive scenarios.

  15. EVALUATIVE LANGUAGE IN SPOKEN AND SIGNED STORIES TOLD BY A DEAF CHILD WITH A COCHLEAR IMPLANT: WORDS, SIGNS OR PARALINGUISTIC EXPRESSIONS?

    Directory of Open Access Journals (Sweden)

    Ritva Takkinen

    2011-01-01

    Full Text Available In this paper the use and quality of the evaluative language produced by a bilingual child in a story-telling situation is analysed. The subject, an 11-year-old Finnish boy, Jimmy, is bilingual in Finnish Sign Language (FinSL) and spoken Finnish. He was born deaf but got a cochlear implant at the age of five. The data consist of a spoken and a signed version of “The Frog Story”. The analysis shows that evaluative devices and expressions differ in the spoken and signed stories told by the child. In his Finnish story he uses mostly lexical devices – comments on a character and the character’s actions, as well as quoted speech occasionally combined with prosodic features. In his FinSL story he uses both lexical and paralinguistic devices in a balanced way.

  16. Comprehension of spoken language in non-speaking children with severe cerebral palsy: an explorative study on associations with motor type and disabilities

    NARCIS (Netherlands)

    Geytenbeek, J.J.M.; Vermeulen, R.J.; Becher, J.G.; Oostrom, K.J.

    2015-01-01

    Aim: To assess spoken language comprehension in non-speaking children with severe cerebral palsy (CP) and to explore possible associations with motor type and disability. Method: Eighty-seven non-speaking children (44 males, 43 females, mean age 6y 8mo, SD 2y 1mo) with spastic (54%) or dyskinetic

  17. A Multilingual Approach to Analysing Standardized Test Results: Immigrant Primary School Children and the Role of Languages Spoken in a Bi-/Multilingual Community

    Science.gov (United States)

    De Angelis, Gessica

    2014-01-01

    The present study adopts a multilingual approach to analysing the standardized test results of primary school immigrant children living in the bi-/multilingual context of South Tyrol, Italy. The standardized test results are from the Invalsi test administered across Italy in 2009/2010. In South Tyrol, several languages are spoken on a daily basis…

  18. How and When Accentuation Influences Temporally Selective Attention and Subsequent Semantic Processing during On-Line Spoken Language Comprehension: An ERP Study

    Science.gov (United States)

    Li, Xiao-qing; Ren, Gui-qin

    2012-01-01

    An event-related brain potentials (ERP) experiment was carried out to investigate how and when accentuation influences temporally selective attention and subsequent semantic processing during on-line spoken language comprehension, and how the effect of accentuation on attention allocation and semantic processing changed with the degree of…

  19. Long-term memory traces for familiar spoken words in tonal languages as revealed by the Mismatch Negativity

    Directory of Open Access Journals (Sweden)

    Naiphinich Kotchabhakdi

    2004-11-01

    Full Text Available Mismatch negativity (MMN, a primary response to an acoustic change and an index of sensory memory, was used to investigate the processing of the discrimination between familiar and unfamiliar Consonant-Vowel (CV) speech contrasts. The MMN was elicited by rare familiar words presented among repetitive unfamiliar words. Phonetic and phonological contrasts were identical in all conditions. MMN elicited by the familiar word deviant was larger than that elicited by the unfamiliar word deviant. The presence of syllable contrast did significantly alter the word-elicited MMN in amplitude and scalp voltage field distribution. Thus, our results indicate the existence of word-related MMN enhancement largely independent of the word status of the standard stimulus. This enhancement may reflect the presence of a long-term memory trace for familiar spoken words in tonal languages.

  20. Let's all speak together! Exploring the masking effects of various languages on spoken word identification in multi-linguistic babble.

    Science.gov (United States)

    Gautreau, Aurore; Hoen, Michel; Meunier, Fanny

    2013-01-01

    This study aimed to characterize the linguistic interference that occurs during speech-in-speech comprehension by combining offline and online measures, which included an intelligibility task (at a -5 dB Signal-to-Noise Ratio) and 2 lexical decision tasks (at a -5 dB and 0 dB SNR) that were performed with French spoken target words. In these 3 experiments we always compared the masking effects of speech backgrounds (i.e., 4-talker babble) that were produced in the same language as the target language (i.e., French) or in unknown foreign languages (i.e., Irish and Italian) to the masking effects of corresponding non-speech backgrounds (i.e., speech-derived fluctuating noise). The fluctuating noise contained similar spectro-temporal information as babble but lacked linguistic information. At -5 dB SNR, both tasks revealed significantly divergent results between the unknown languages (i.e., Irish and Italian) with Italian and French hindering French target word identification to a similar extent, whereas Irish led to significantly better performances on these tasks. By comparing the performances obtained with speech and fluctuating noise backgrounds, we were able to evaluate the effect of each language. The intelligibility task showed a significant difference between babble and fluctuating noise for French, Irish and Italian, suggesting acoustic and linguistic effects for each language. However, the lexical decision task, which reduces the effect of post-lexical interference, appeared to be more accurate, as it only revealed a linguistic effect for French. Thus, although French and Italian had equivalent masking effects on French word identification, the nature of their interference was different. This finding suggests that the differences observed between the masking effects of Italian and Irish can be explained at an acoustic level but not at a linguistic level.

  1. Learning to talk the talk and walk the walk: Interactional competence in academic spoken English

    Directory of Open Access Journals (Sweden)

    Richard F. Young

    2013-04-01

    Full Text Available In this article I present the theory of interactional competence and contrast it with alternative ways of describing a learner’s knowledge of language. The focus of interactional competence is the structure of recurring episodes of face-to-face interaction, episodes that are of social and cultural significance to a community of speakers. Such episodes I call discursive practices, and I argue that participants co-construct a discursive practice through an architecture of interactional resources that is specific to the practice. The resources include rhetorical script, the register of the practice, the turn-taking system, management of topics, the participation framework, and means for signalling boundaries and transitions. I exemplify the theory of interactional competence and the architecture of discursive practice by examining two instances of the same practice: office hours between teaching assistants and undergraduate students at an American university, one in Mathematics, one in Italian as a foreign language. By a close comparison of the interactional resources that participants bring to the two instances, I argue that knowledge and interactional skill are local and practice-specific, and that the joint construction of discursive practice involves participants making use of the resources that they have acquired in previous instances of the same practice.

  2. Using the readiness potential of button-press and verbal response within spoken language processing.

    Science.gov (United States)

    Jansen, Stefanie; Wesselmeier, Hendrik; de Ruiter, Jan P; Mueller, Horst M

    2014-07-30

    Even though research on turn-taking in spoken dialogues is now abundant, a typical EEG signature associated with the anticipation of turn-ends has not yet been identified. The purpose of this study was to examine whether readiness potentials (RPs) can be used to study the anticipation of turn-ends, by using them in a motoric finger-movement and an articulatory-movement task. The goal was to determine the onset of early, preconscious turn-end anticipation processes by the simultaneous registration of EEG measures (RP) and behavioural measures (anticipation timing accuracy, ATA). For our behavioural measures, we used both button-press and verbal responses ("yes"). In the experiment, 30 subjects were asked to listen to auditorily presented utterances and press a button or utter a brief verbal response when they expected the end of the turn. During the task, a 32-channel EEG signal was recorded. The results showed that the RPs during verbal and button-press responses developed similarly and had an almost identical time course: the RP signals started to develop 1170 vs. 1190 ms before the behavioural responses. Until now, turn-end anticipation has usually been studied using behavioural methods, for instance by measuring anticipation timing accuracy, a measurement that reflects conscious behavioural processes and is insensitive to preconscious anticipation processes. The similar time course of the recorded RP signals for both verbal and button-press responses provides evidence for the validity of using RPs as an online marker for response preparation in turn-taking and spoken dialogue research.

  3. Social Interaction Affects Neural Outcomes of Sign Language Learning As a Foreign Language in Adults.

    Science.gov (United States)

    Yusa, Noriaki; Kim, Jungho; Koizumi, Masatoshi; Sugiura, Motoaki; Kawashima, Ryuta

    2017-01-01

    Children naturally acquire a language in social contexts where they interact with their caregivers. Indeed, research shows that social interaction facilitates lexical and phonological development at the early stages of child language acquisition. It is not clear, however, whether the relationship between social interaction and learning applies to adult second language acquisition of syntactic rules. Does learning second language syntactic rules through social interactions with a native speaker or without such interactions impact behavior and the brain? The current study aims to answer this question. Adult Japanese participants learned a new foreign language, Japanese sign language (JSL), either through a native deaf signer or via DVDs. Neural correlates of acquiring new linguistic knowledge were investigated using functional magnetic resonance imaging (fMRI). The participants in each group were indistinguishable in terms of their behavioral data after the instruction. The fMRI data, however, revealed significant differences in the neural activities between two groups. Significant activations in the left inferior frontal gyrus (IFG) were found for the participants who learned JSL through interactions with the native signer. In contrast, no cortical activation change in the left IFG was found for the group who experienced the same visual input for the same duration via the DVD presentation. Given that the left IFG is involved in the syntactic processing of language, spoken or signed, learning through social interactions resulted in an fMRI signature typical of native speakers: activation of the left IFG. Thus, broadly speaking, availability of communicative interaction is necessary for second language acquisition and this results in observed changes in the brain.

  4. Quarterly Data for Spoken Language Preferences of Social Security Retirement and Survivor Claimants (2016-onwards)

    Data.gov (United States)

    Social Security Administration — This data set provides quarterly volumes for language preferences at the national level of individuals filing claims for Retirement and Survivor benefits from fiscal...

  5. Yearly Data for Spoken Language Preferences of Social Security Retirement and Survivor Claimants (2016 Onwards)

    Data.gov (United States)

    Social Security Administration — This data set provides annual volumes for language preferences at the national level of individuals filing claims for Retirement and Survivor benefits from federal...

  6. Shy and Soft-Spoken: Shyness, Pragmatic Language, and Socioemotional Adjustment in Early Childhood

    Science.gov (United States)

    Coplan, Robert J.; Weeks, Murray

    2009-01-01

    The goal of this study was to examine the moderating role of pragmatic language in the relations between shyness and indices of socio-emotional adjustment in an unselected sample of early elementary school children. In particular, we sought to explore whether pragmatic language played a protective role for shy children. Participants were n = 167…

  7. Propositional Density in Spoken and Written Language of Czech-Speaking Patients with Mild Cognitive Impairment

    Science.gov (United States)

    Smolík, Filip; Stepankova, Hana; Vyhnálek, Martin; Nikolai, Tomáš; Horáková, Karolína; Matejka, Štepán

    2016-01-01

    Purpose Propositional density (PD) is a measure of content richness in language production that declines in normal aging and more profoundly in dementia. The present study aimed to develop a PD scoring system for Czech and use it to compare PD in language productions of older people with amnestic mild cognitive impairment (aMCI) and control…

  8. How do doctors learn the spoken language of their patients? | Pfaff ...

    African Journals Online (AJOL)

    Methods. Qualitative individual interviews were conducted with seven doctors who had successfully learned the language of their patients, to determine their experiences and how they had succeeded. Results. All seven doctors used a combination of methods to learn the language. Listening was found to be very important, ...

  9. Task-Oriented Spoken Dialog System for Second-Language Learning

    Science.gov (United States)

    Kwon, Oh-Woog; Kim, Young-Kil; Lee, Yunkeun

    2016-01-01

    This paper introduces a Dialog-Based Computer Assisted second-Language Learning (DB-CALL) system using task-oriented dialogue processing technology. The system promotes dialogue with a second-language learner for a specific task, such as purchasing tour tickets, ordering food, passing through immigration, etc. The dialog system plays a role of a…

  10. Bilateral Versus Unilateral Cochlear Implants in Children: A Study of Spoken Language Outcomes

    Science.gov (United States)

    Harris, David; Bennet, Lisa; Bant, Sharyn

    2014-01-01

    Objectives: Although it has been established that bilateral cochlear implants (CIs) offer additional speech perception and localization benefits to many children with severe to profound hearing loss, whether these improved perceptual abilities facilitate significantly better language development has not yet been clearly established. The aims of this study were to compare language abilities of children having unilateral and bilateral CIs to quantify the rate of any improvement in language attributable to bilateral CIs and to document other predictors of language development in children with CIs. Design: The receptive vocabulary and language development of 91 children was assessed when they were aged either 5 or 8 years old by using the Peabody Picture Vocabulary Test (fourth edition), and either the Preschool Language Scales (fourth edition) or the Clinical Evaluation of Language Fundamentals (fourth edition), respectively. Cognitive ability, parent involvement in children’s intervention or education programs, and family reading habits were also evaluated. Language outcomes were examined by using linear regression analyses. The influence of elements of parenting style, child characteristics, and family background as predictors of outcomes were examined. Results: Children using bilateral CIs achieved significantly better vocabulary outcomes and significantly higher scores on the Core and Expressive Language subscales of the Clinical Evaluation of Language Fundamentals (fourth edition) than did comparable children with unilateral CIs. Scores on the Preschool Language Scales (fourth edition) did not differ significantly between children with unilateral and bilateral CIs. Bilateral CI use was found to predict significantly faster rates of vocabulary and language development than unilateral CI use; the magnitude of this effect was moderated by child age at activation of the bilateral CI. In terms of parenting style, high levels of parental involvement, low amounts of

  11. Language Outcomes in Deaf or Hard of Hearing Teenagers Who Are Spoken Language Users: Effects of Universal Newborn Hearing Screening and Early Confirmation.

    Science.gov (United States)

    Pimperton, Hannah; Kreppner, Jana; Mahon, Merle; Stevenson, Jim; Terlektsi, Emmanouela; Worsfold, Sarah; Yuen, Ho Ming; Kennedy, Colin R

    This study aimed to examine whether (a) exposure to universal newborn hearing screening (UNHS) and (b) early confirmation of hearing loss were associated with benefits to expressive and receptive language outcomes in the teenage years for a cohort of spoken language users. It also aimed to determine whether either of these two variables was associated with benefits to relative language gain from middle childhood to adolescence within this cohort. The participants were drawn from a prospective cohort study of a population sample of children with bilateral permanent childhood hearing loss, who varied in their exposure to UNHS and who had previously had their language skills assessed at 6-10 years. Sixty deaf or hard of hearing teenagers who were spoken language users and a comparison group of 38 teenagers with normal hearing completed standardized measures of their receptive and expressive language ability at 13-19 years. Teenagers exposed to UNHS did not show significantly better expressive (adjusted mean difference, 0.40; 95% confidence interval [CI], -0.26 to 1.05; d = 0.32) or receptive (adjusted mean difference, 0.68; 95% CI, -0.56 to 1.93; d = 0.28) language skills than those who were not. Those who had their hearing loss confirmed by 9 months of age did not show significantly better expressive (adjusted mean difference, 0.43; 95% CI, -0.20 to 1.05; d = 0.35) or receptive (adjusted mean difference, 0.95; 95% CI, -0.22 to 2.11; d = 0.42) language skills than those who had it confirmed later. In all cases, effect sizes were of small size and in favor of those exposed to UNHS or confirmed by 9 months. Subgroup analysis indicated larger beneficial effects of early confirmation for those deaf or hard of hearing teenagers without cochlear implants (N = 48; 80% of the sample), and these benefits were significant in the case of receptive language outcomes (adjusted mean difference, 1.55; 95% CI, 0.38 to 2.71; d = 0.78). Exposure to UNHS did not account for significant

  12. Yearly Data for Spoken Language Preferences of Supplemental Security Income (Blind & Disabled) (2011-2015)

    Data.gov (United States)

    Social Security Administration — This data set provides annual volumes for language preferences at the national level of individuals filing claims for ESRD Medicare benefits for federal fiscal years...

  13. Quarterly Data for Spoken Language Preferences of Supplemental Security Income Aged Applicants (2014-2015)

    Data.gov (United States)

    Social Security Administration — This data set provides quarterly volumes for language preferences at the national level of individuals filing claims for SSI Aged benefits for fiscal years 2014 -...

  14. Quarterly Data for Spoken Language Preferences of End Stage Renal Disease Medicare Claimants (2014-2015)

    Data.gov (United States)

    Social Security Administration — This data set provides quarterly volumes for language preferences at the national level of individuals filing claims for ESRD Medicare benefits for fiscal years 2014...

  15. Yearly Data for Spoken Language Preferences of End Stage Renal Disease Medicare Claimants (2011-2015)

    Data.gov (United States)

    Social Security Administration — This data set provides annual volumes for language preferences at the national level of individuals filing claims for ESRD Medicare benefits for federal fiscal year...

  16. Yearly Data for Spoken Language Preferences of Social Security Retirement and Survivor Claimants (2011-2015)

    Data.gov (United States)

    Social Security Administration — This data set provides annual volumes for language preferences at the national level of individuals filing claims for ESRD Medicare benefits from federal fiscal year...

  17. Inter Lingual Influences of Turkish, Serbian and English Dialect in Spoken Gjakovar's Language

    OpenAIRE

    Sindorela Doli Kryeziu; Gentiana Muhaxhiri

    2014-01-01

    In this paper we have tried to clarify the problems faced by "gege dialect" speakers in Gjakova, who have shown more or less difficulty in acquiring the standard. The standard language is part of the people's language, but raised to a norm according to scientific criteria. From this observation it becomes clearly understandable that the standard variant and the dialectal variant are inseparable and, as such, represent a macro-linguistic unity. As part of this macro linguistic u...

  18. Project ASPIRE: Spoken Language Intervention Curriculum for Parents of Low-socioeconomic Status and Their Deaf and Hard-of-Hearing Children.

    Science.gov (United States)

    Suskind, Dana L; Graf, Eileen; Leffel, Kristin R; Hernandez, Marc W; Suskind, Elizabeth; Webber, Robert; Tannenbaum, Sally; Nevins, Mary Ellen

    2016-02-01

    To investigate the impact of a spoken language intervention curriculum aiming to improve the language environments that caregivers of low socioeconomic status (SES) provide for their D/HH children with CIs and HAs, so as to support the children's spoken language development. Quasi-experimental. Tertiary. Thirty-two caregiver-child dyads of low SES (as defined by caregiver education ≤ MA/MS and the income proxies = Medicaid or WIC/LINK) and children aged curriculum designed to improve D/HH children's early language environments. Changes in caregiver knowledge of child language development (questionnaire scores) and language behavior (word types, word tokens, utterances, mean length of utterance [MLU], LENA Adult Word Count (AWC), Conversational Turn Count (CTC)). Significant increases in caregiver questionnaire scores as well as utterances, word types, word tokens, and MLU in the treatment but not the control group. No significant changes in LENA outcomes. Results partially support the notion that caregiver-directed language enrichment interventions can change the home language environments of D/HH children from low-SES backgrounds. Further longitudinal studies are necessary.

  19. How Spoken Language Comprehension is Achieved by Older Listeners in Difficult Listening Situations.

    Science.gov (United States)

    Schneider, Bruce A; Avivi-Reich, Meital; Daneman, Meredyth

    2016-01-01

    Comprehending spoken discourse in noisy situations is likely to be more challenging to older adults than to younger adults due to potential declines in the auditory, cognitive, or linguistic processes supporting speech comprehension. These challenges might force older listeners to reorganize the ways in which they perceive and process speech, thereby altering the balance between the contributions of bottom-up versus top-down processes to speech comprehension. The authors review studies that investigated the effect of age on listeners' ability to follow and comprehend lectures (monologues), and two-talker conversations (dialogues), and the extent to which individual differences in lexical knowledge and reading comprehension skill relate to individual differences in speech comprehension. Comprehension was evaluated after each lecture or conversation by asking listeners to answer multiple-choice questions regarding its content. Once individual differences in speech recognition for words presented in babble were compensated for, age differences in speech comprehension were minimized if not eliminated. However, younger listeners benefited more from spatial separation than did older listeners. Vocabulary knowledge predicted the comprehension scores of both younger and older listeners when listening was difficult, but not when it was easy. However, the contribution of reading comprehension to listening comprehension appeared to be independent of listening difficulty in younger adults but not in older adults. The evidence suggests (1) that most of the difficulties experienced by older adults are due to age-related auditory declines, and (2) that these declines, along with listening difficulty, modulate the degree to which selective linguistic and cognitive abilities are engaged to support listening comprehension in difficult listening situations. When older listeners experience speech recognition difficulties, their attentional resources are more likely to be deployed to

  20. SPOKEN CORPORA: RATIONALE AND APPLICATION

    Directory of Open Access Journals (Sweden)

    John Newman

    2008-12-01

    Full Text Available Despite the abundance of electronic corpora now available to researchers, corpora of natural speech are still relatively rare and relatively costly. This paper suggests reasons why spoken corpora are needed, despite the formidable problems of construction. The multiple purposes of such corpora and the involvement of very different kinds of language communities in such projects mean that there is no one single blueprint for the design, markup, and distribution of spoken corpora. A number of different spoken corpora are reviewed to illustrate a range of possibilities for the construction of spoken corpora.

  1. Cross-Sensory Correspondences and Symbolism in Spoken and Written Language

    Science.gov (United States)

    Walker, Peter

    2016-01-01

    Lexical sound symbolism in language appears to exploit the feature associations embedded in cross-sensory correspondences. For example, words incorporating relatively high acoustic frequencies (i.e., front/close rather than back/open vowels) are deemed more appropriate as names for concepts associated with brightness, lightness in weight,…

  2. Changes to English as an Additional Language Writers' Research Articles: From Spoken to Written Register

    Science.gov (United States)

    Koyalan, Aylin; Mumford, Simon

    2011-01-01

    The process of writing journal articles is increasingly being seen as a collaborative process, especially where the authors are English as an Additional Language (EAL) academics. This study examines the changes made in terms of register to EAL writers' journal articles by a native-speaker writing centre advisor at a private university in Turkey.…

  3. Parallel language activation and cognitive control during spoken word recognition in bilinguals

    Science.gov (United States)

    Blumenfeld, Henrike K.; Marian, Viorica

    2013-01-01

    Accounts of bilingual cognitive advantages suggest an associative link between cross-linguistic competition and inhibitory control. We investigate this link by examining English-Spanish bilinguals’ parallel language activation during auditory word recognition and nonlinguistic Stroop performance. Thirty-one English-Spanish bilinguals and 30 English monolinguals participated in an eye-tracking study. Participants heard words in English (e.g., comb) and identified corresponding pictures from a display that included pictures of a Spanish competitor (e.g., conejo, English rabbit). Bilinguals with higher Spanish proficiency showed more parallel language activation and smaller Stroop effects than bilinguals with lower Spanish proficiency. Across all bilinguals, stronger parallel language activation between 300–500ms after word onset was associated with smaller Stroop effects; between 633–767ms, reduced parallel language activation was associated with smaller Stroop effects. Results suggest that bilinguals who perform well on the Stroop task show increased cross-linguistic competitor activation during early stages of word recognition and decreased competitor activation during later stages of word recognition. Findings support the hypothesis that cross-linguistic competition impacts domain-general inhibition. PMID:24244842

  4. Assessing Spoken Language Competence in Children with Selective Mutism: Using Parents as Test Presenters

    Science.gov (United States)

    Klein, Evelyn R.; Armstrong, Sharon Lee; Shipon-Blum, Elisa

    2013-01-01

    Children with selective mutism (SM) display a failure to speak in select situations despite speaking when comfortable. The purpose of this study was to obtain valid assessments of receptive and expressive language in 33 children (ages 5 to 12) with SM. Because some children with SM will speak to parents but not a professional, another purpose was…

  5. Primary Spoken Language and Neuraxial Labor Analgesia Use Among Hispanic Medicaid Recipients.

    Science.gov (United States)

    Toledo, Paloma; Eosakul, Stanley T; Grobman, William A; Feinglass, Joe; Hasnain-Wynia, Romana

    2016-01-01

    Hispanic women are less likely than non-Hispanic Caucasian women to use neuraxial labor analgesia. It is unknown whether there is a disparity in anticipated or actual use of neuraxial labor analgesia among Hispanic women based on primary language (English versus Spanish). In this 3-year retrospective, single-institution, cross-sectional study, we extracted electronic medical record data on nulliparous Hispanic women with vaginal deliveries who were insured by Medicaid. On admission, patients self-identified their primary language and anticipated analgesic use for labor. Extracted data included age, marital status, labor type, delivery provider (obstetrician or midwife), and anticipated and actual analgesic use. Household income was estimated from census data geocoded by zip code. Multivariable logistic regression models were estimated for anticipated and actual neuraxial analgesia use. Among 932 Hispanic women, 182 self-identified as primary Spanish speakers. Spanish-speaking Hispanic women were less likely to anticipate and use neuraxial analgesia than English-speaking women. After controlling for confounders, there was an association between primary language and anticipated neuraxial analgesia use (adjusted relative risk: Spanish- versus English-speaking women, 0.70; 97.5% confidence interval, 0.53-0.92). Similarly, there was an association between language and neuraxial analgesia use (adjusted relative risk: Spanish- versus English-speaking women, 0.88; 97.5% confidence interval, 0.78-0.99). The use of a midwife compared with an obstetrician also decreased the likelihood of both anticipating and using neuraxial analgesia. A language-based disparity was found in neuraxial labor analgesia use. It is possible that there are communication barriers in knowledge or understanding of analgesic options. Further research is necessary to determine the cause of this association.

  6. The interaction of lexical semantics and cohort competition in spoken word recognition: an fMRI study.

    Science.gov (United States)

    Zhuang, Jie; Randall, Billi; Stamatakis, Emmanuel A; Marslen-Wilson, William D; Tyler, Lorraine K

    2011-12-01

    Spoken word recognition involves the activation of multiple word candidates on the basis of the initial speech input--the "cohort"--and selection among these competitors. Selection may be driven primarily by bottom-up acoustic-phonetic inputs or it may be modulated by other aspects of lexical representation, such as a word's meaning [Marslen-Wilson, W. D. Functional parallelism in spoken word-recognition. Cognition, 25, 71-102, 1987]. We examined these potential interactions in an fMRI study by presenting participants with words and pseudowords for lexical decision. In a factorial design, we manipulated (a) cohort competition (high/low competitive cohorts which vary the number of competing word candidates) and (b) the word's semantic properties (high/low imageability). A previous behavioral study [Tyler, L. K., Voice, J. K., & Moss, H. E. The interaction of meaning and sound in spoken word recognition. Psychonomic Bulletin & Review, 7, 320-326, 2000] showed that imageability facilitated word recognition but only for words in high competition cohorts. Here we found greater activity in the left inferior frontal gyrus (BA 45, 47) and the right inferior frontal gyrus (BA 47) with increased cohort competition, an imageability effect in the left posterior middle temporal gyrus/angular gyrus (BA 39), and a significant interaction between imageability and cohort competition in the left posterior superior temporal gyrus/middle temporal gyrus (BA 21, 22). In words with high competition cohorts, high imageability words generated stronger activity than low imageability words, indicating a facilitatory role of imageability in a highly competitive cohort context. For words in low competition cohorts, there was no effect of imageability. These results support the behavioral data in showing that selection processes do not rely solely on bottom-up acoustic-phonetic cues but rather that the semantic properties of candidate words facilitate discrimination between competitors.

  7. Grammatical number processing and anticipatory eye movements are not tightly coordinated in English spoken language comprehension

    Directory of Open Access Journals (Sweden)

    Brian eRiordan

    2015-05-01

    Recent studies of eye movements in world-situated language comprehension have demonstrated that rapid processing of morphosyntactic information – e.g., grammatical gender and number marking – can produce anticipatory eye movements to referents in the visual scene. We investigated how type of morphosyntactic information and the goals of language users in comprehension affected eye movements, focusing on the processing of grammatical number morphology in English-speaking adults. Participants’ eye movements were recorded as they listened to simple English declarative (There are the lions.) and interrogative (Where are the lions?) sentences. In Experiment 1, no differences were observed in speed to fixate target referents when grammatical number information was informative relative to when it was not. The same result was obtained in a speeded task (Experiment 2) and in a task using mixed sentence types (Experiment 3). We conclude that grammatical number processing in English and eye movements to potential referents are not tightly coordinated. These results suggest limits on the role of predictive eye movements in concurrent linguistic and scene processing. We discuss how these results can inform and constrain predictive approaches to language processing.

  8. Utility of spoken dialog systems

    CSIR Research Space (South Africa)

    Barnard, E

    2008-12-01

    The commercial successes of spoken dialog systems in the developed world provide encouragement for their use in the developing world, where speech could play a role in the dissemination of relevant information in local languages. We investigate...

  9. Language and Cognition Interaction Neural Mechanisms

    OpenAIRE

    Perlovsky, Leonid

    2011-01-01

    How do language and cognition interact in thinking? Is language just used for the communication of completed thoughts, or is it fundamental for thinking? Existing approaches have not led to a computational theory. We develop a hypothesis that language and cognition are two separate but closely interacting mechanisms. Language accumulates cultural wisdom; cognition develops mental representations modeling the surrounding world and adapts cultural knowledge to concrete circumstances of life. Language is a...

  10. Human inferior colliculus activity relates to individual differences in spoken language learning.

    Science.gov (United States)

    Chandrasekaran, Bharath; Kraus, Nina; Wong, Patrick C M

    2012-03-01

    A challenge to learning words of a foreign language is encoding nonnative phonemes, a process typically attributed to cortical circuitry. Using multimodal imaging methods [functional magnetic resonance imaging-adaptation (fMRI-A) and auditory brain stem responses (ABR)], we examined the extent to which pretraining pitch encoding in the inferior colliculus (IC), a primary midbrain structure, related to individual variability in learning to successfully use nonnative pitch patterns to distinguish words in American English-speaking adults. fMRI-A indexed the efficiency of pitch representation localized to the IC, whereas ABR quantified midbrain pitch-related activity with millisecond precision. In line with neural "sharpening" models, we found that efficient IC pitch pattern representation (indexed by fMRI) related to superior neural representation of pitch patterns (indexed by ABR), and consequently more successful word learning following sound-to-meaning training. Our results establish a critical role for the IC in speech-sound representation, consistent with the established role for the IC in the representation of communication signals in other animal models.

  11. Examination of validity in spoken language evaluations: Adult onset stuttering following mild traumatic brain injury.

    Science.gov (United States)

    Roth, Carole R; Cornis-Pop, Micaela; Beach, Woodford A

    2015-01-01

    Reports of increased incidence of adult onset stuttering in veterans and service members with mild traumatic brain injury (mTBI) from combat operations in Iraq and Afghanistan lead to a reexamination of the neurogenic vs. psychogenic etiology of stuttering. This article proposes to examine the merit of the dichotomy between neurogenic and psychogenic bases of stuttering, including symptom exaggeration, for the evaluation and treatment of the disorder. Two case studies of adult onset stuttering in service members with mTBI from improvised explosive device blasts are presented in detail. Speech fluency was disrupted by abnormal pauses and speech hesitations, brief blocks, rapid repetitions, and occasional prolongations. There was also wide variability in the frequency of stuttering across topics and conversational situations. Treatment focused on reducing the frequency and severity of dysfluencies and included educational, psychological, environmental, and behavioral interventions. Stuttering characteristics as well as the absence of objective neurological findings ruled out a neurogenic basis of stuttering in these two cases and pointed to psychogenic causes. However, the differential diagnosis had only limited value for developing the plan of care. The successful outcomes of the treatment serve to illustrate the complex interaction of neurological, psychological, emotional, and environmental factors of post-concussive symptoms and to underscore the notion that there are many facets to symptom presentation in post-combat health.

  12. Phonological processing of rhyme in spoken language and location in sign language by deaf and hearing participants: a neurophysiological study.

    Science.gov (United States)

    Colin, C; Zuinen, T; Bayard, C; Leybaert, J

    2013-06-01

    Sign languages (SL), like oral languages (OL), organize elementary, meaningless units into meaningful semantic units. Our aim was to compare, at behavioral and neurophysiological levels, the processing of the location parameter in French Belgian SL to that of the rhyme in oral French. Ten hearing and 10 profoundly deaf adults performed a rhyme judgment task in OL and a similarity judgment on location in SL. Stimuli were pairs of pictures. As regards OL, deaf subjects' performances, although above chance level, were significantly lower than those of hearing subjects, suggesting that a metaphonological analysis is possible for deaf people but rests on phonological representations that are less precise than in hearing people. As regards SL, deaf subjects' scores indicated that a metaphonological judgment may be performed on location. The contingent negative variation (CNV) evoked by the first picture of a pair was similar in hearing subjects in OL and in deaf subjects in OL and SL. However, an N400 evoked by the second picture of the non-rhyming pairs was evidenced only in hearing subjects in OL. The absence of an N400 in deaf subjects may be interpreted as the failure to associate two words according to their rhyme in OL or to their location in SL. Although deaf participants can perform metaphonological judgments in OL, they differ from hearing participants both behaviorally and in ERPs. Judgment of location in SL is possible for deaf signers but, contrary to rhyme judgment in hearing participants, does not elicit any N400. Copyright © 2013 Elsevier Masson SAS. All rights reserved.

  13. Early use of orthographic information in spoken word recognition: Event-related potential evidence from the Korean language.

    Science.gov (United States)

    Kwon, Youan; Choi, Sungmook; Lee, Yoonhyoung

    2016-04-01

    This study examines whether orthographic information is used during prelexical processes in spoken word recognition by investigating ERPs during spoken word processing for Korean words. Differential effects due to orthographic syllable neighborhood size and sound-to-spelling consistency on the P200 and N320 were evaluated by recording ERPs from 42 participants during a lexical decision task. The results indicate that the P200 was smaller for words with large orthographic syllable neighborhoods than for words with small ones. In addition, a word with a large orthographic syllable neighborhood elicited a smaller N320 effect than a word with a small orthographic syllable neighborhood only when the word had inconsistent sound-to-spelling mapping. The results provide support for the assumption that orthographic information is used early during the prelexical spoken word recognition process. © 2015 Society for Psychophysiological Research.

  14. A randomized trial comparison of the effects of verbal and pictorial naturalistic communication strategies on spoken language for young children with autism.

    Science.gov (United States)

    Schreibman, Laura; Stahmer, Aubyn C

    2014-05-01

    Presently there is no consensus on the specific behavioral treatment of choice for targeting language in young nonverbal children with autism. This randomized clinical trial compared the effectiveness of a verbally-based intervention, Pivotal Response Training (PRT) to a pictorially-based behavioral intervention, the Picture Exchange Communication System (PECS) on the acquisition of spoken language by young (2-4 years), nonverbal or minimally verbal (≤9 words) children with autism. Thirty-nine children were randomly assigned to either the PRT or PECS condition. Participants received on average 247 h of intervention across 23 weeks. Dependent measures included overall communication, expressive vocabulary, pictorial communication and parent satisfaction. Children in both intervention groups demonstrated increases in spoken language skills, with no significant difference between the two conditions. Seventy-eight percent of all children exited the program with more than 10 functional words. Parents were very satisfied with both programs but indicated PECS was more difficult to implement.

  15. How the stigma of low literacy can impair patient-professional spoken interactions and affect health: insights from a qualitative investigation.

    Science.gov (United States)

    Easton, Phyllis; Entwistle, Vikki A; Williams, Brian

    2013-08-16

    Low literacy is a significant problem across the developed world. A considerable body of research has reported associations between low literacy and less appropriate access to healthcare services, lower likelihood of self-managing health conditions well, and poorer health outcomes. There is a need to explore the previously neglected perspectives of people with low literacy to help explain how low literacy can lead to poor health, and to consider how to improve the ability of health services to meet their needs. Two-stage qualitative study. In-depth individual interviews followed by focus groups to confirm analysis and develop suggestions for service improvements. A purposive sample of 29 adults with English as their first language who had sought help with literacy was recruited from an Adult Learning Centre in the UK. Over and above the well-documented difficulties that people with low literacy can have with the written information and complex explanations and instructions they encounter as they use health services, the stigma of low literacy had significant negative implications for participants' spoken interactions with healthcare professionals. Participants described various difficulties in consultations, some of which had impacted negatively on their broader healthcare experiences and abilities to self-manage health conditions. Some communication difficulties were apparently perpetuated or exacerbated because participants limited their conversational engagement and used a variety of strategies to cover up their low literacy that could send misleading signals to health professionals. Participants' biographical narratives revealed that the ways in which they managed their low literacy in healthcare settings, as in other social contexts, stemmed from highly negative experiences with literacy-related stigma, usually from their schooldays onwards. They also suggest that literacy-related stigma can significantly undermine mental wellbeing by prompting self

  16. Language evolution and human-computer interaction

    Science.gov (United States)

    Grudin, Jonathan; Norman, Donald A.

    1991-01-01

    Many of the issues that confront designers of interactive computer systems also appear in natural language evolution. Natural languages and human-computer interfaces share as their primary mission the support of extended 'dialogues' between responsive entities. Because in each case one participant is a human being, some of the pressures operating on natural languages, causing them to evolve in order to better support such dialogue, also operate on human-computer 'languages' or interfaces. This does not necessarily push interfaces in the direction of natural language - since one entity in this dialogue is not a human, this is not to be expected. Nonetheless, by discerning where the pressures that guide natural language evolution also appear in human-computer interaction, we can contribute to the design of computer systems and obtain a new perspective on natural languages.

  17. When the Macro Facilitates the Micro: A Study of Regimentation and Emergence in Spoken Interaction

    Science.gov (United States)

    Warriner, Doris S.

    2012-01-01

    In moments of "dispersion, diaspora, and reterritorialization" (Amy Shuman 2006), the personal, the interactional, and the improvised (the "micro") cannot be separated analytically from circulating ideologies, institutional norms, or cultural flows (the "macro"). With a focus on the emergence of identities within social interaction, specifically…

  18. Language spoken at home and the association between ethnicity and doctor-patient communication in primary care: analysis of survey data for South Asian and White British patients.

    Science.gov (United States)

    Brodie, Kara; Abel, Gary; Burt, Jenni

    2016-03-03

    To investigate if language spoken at home mediates the relationship between ethnicity and doctor-patient communication for South Asian and White British patients. We conducted secondary analysis of patient experience survey data collected from 5870 patients across 25 English general practices. Mixed effect linear regression estimated the difference in composite general practitioner-patient communication scores between White British and South Asian patients, controlling for practice, patient demographics and patient language. There was strong evidence of an association between doctor-patient communication scores and ethnicity. South Asian patients reported scores averaging 3.0 percentage points lower (scale of 0-100) than White British patients (95% CI -4.9 to -1.1, p=0.002). This difference reduced to 1.4 points (95% CI -3.1 to 0.4) after accounting for speaking a non-English language at home; respondents who spoke a non-English language at home reported lower scores than English-speakers (adjusted difference 3.3 points, 95% CI -6.4 to -0.2). South Asian patients rate communication lower than White British patients within the same practices and with similar demographics. Our analysis further shows that this disparity is largely mediated by language. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/
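The mediation logic reported in this record, where the raw ethnicity gap in communication scores shrinks once home language enters the model, can be sketched with a toy regression. This is a minimal sketch on synthetic data, assuming a plain OLS adjustment rather than the mixed-effect model with practice-level grouping used in the study; the effect sizes are illustrative, not the survey estimates.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Synthetic data (illustrative assumption, not the survey data): group
# membership correlates with speaking a non-English language at home, and the
# communication score depends on home language, not group, so language
# mediates the raw group gap.
group = rng.integers(0, 2, n)
non_english = (rng.random(n) < np.where(group == 1, 0.6, 0.05)).astype(float)
score = 80 - 3.3 * non_english + rng.normal(0, 5, n)

def ols_coef(predictors, y):
    """Least-squares coefficients, with an intercept column prepended."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    return np.linalg.lstsq(X, y, rcond=None)[0]

raw_gap = ols_coef([group], score)[1]               # unadjusted group difference
adj_gap = ols_coef([group, non_english], score)[1]  # adjusted for home language

print(round(raw_gap, 2), round(adj_gap, 2))
```

Because the score is generated from home language alone, the adjusted group coefficient collapses toward zero, which is the signature of mediation that the study's adjusted analysis is probing for.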

  19. The Relationship between Intrinsic Couplings of the Visual Word Form Area with Spoken Language Network and Reading Ability in Children and Adults

    Directory of Open Access Journals (Sweden)

    Yu Li

    2017-06-01

    Reading plays a key role in education and communication in modern society. Learning to read establishes connections between the visual word form area (VWFA) and language areas responsible for speech processing. Using resting-state functional connectivity (RSFC) and Granger Causality Analysis (GCA) methods, the current developmental study aimed to identify the difference in the relationship between the connections of VWFA-language areas and reading performance in both adults and children. The results showed that: (1) the spontaneous connectivity between the VWFA and the spoken language areas, i.e., the left inferior frontal gyrus/supramarginal gyrus (LIFG/LSMG), was stronger in adults compared with children; (2) the spontaneous functional patterns of connectivity between the VWFA and the language network were negatively correlated with reading ability in adults but not in children; (3) the causal influence from the LIFG to the VWFA was negatively correlated with reading ability in adults but not in children; (4) the RSFCs between the left posterior middle frontal gyrus (LpMFG) and the VWFA/LIFG were positively correlated with reading ability in both adults and children; and (5) the causal influence from the LIFG to the LSMG was positively correlated with reading ability in both groups. These findings provide insights into the relationship between the VWFA and the language network for reading, and the role of the unique features of Chinese in the neural circuits of reading.

  20. Language and Cognition Interaction Neural Mechanisms

    Directory of Open Access Journals (Sweden)

    Leonid Perlovsky

    2011-01-01

    How do language and cognition interact in thinking? Is language just used for the communication of completed thoughts, or is it fundamental for thinking? Existing approaches have not led to a computational theory. We develop a hypothesis that language and cognition are two separate but closely interacting mechanisms. Language accumulates cultural wisdom; cognition develops mental representations modeling the surrounding world and adapts cultural knowledge to concrete circumstances of life. Language is acquired from the surrounding language “ready-made” and therefore can be acquired early in life. This early acquisition of language in childhood encompasses the entire hierarchy from sounds to words, to phrases, and to the highest concepts existing in culture. Cognition is developed from experience. Yet cognition cannot be acquired from experience alone; language is a necessary intermediary, a “teacher.” A mathematical model is developed; it overcomes previous difficulties and leads to a computational theory. This model is consistent with Arbib's “language prewired brain” built on top of the mirror neuron system. It models recent neuroimaging data about cognition that remain unnoticed by other theories. A number of properties of language and cognition are explained that previously seemed mysterious, including the influence of language grammar on cultural evolution, which may explain specifics of English and Arabic cultures.

  1. Language and cognition interaction neural mechanisms.

    Science.gov (United States)

    Perlovsky, Leonid

    2011-01-01

    How do language and cognition interact in thinking? Is language just used for the communication of completed thoughts, or is it fundamental for thinking? Existing approaches have not led to a computational theory. We develop a hypothesis that language and cognition are two separate but closely interacting mechanisms. Language accumulates cultural wisdom; cognition develops mental representations modeling the surrounding world and adapts cultural knowledge to concrete circumstances of life. Language is acquired from the surrounding language "ready-made" and therefore can be acquired early in life. This early acquisition of language in childhood encompasses the entire hierarchy from sounds to words, to phrases, and to the highest concepts existing in culture. Cognition is developed from experience. Yet cognition cannot be acquired from experience alone; language is a necessary intermediary, a "teacher." A mathematical model is developed; it overcomes previous difficulties and leads to a computational theory. This model is consistent with Arbib's "language prewired brain" built on top of the mirror neuron system. It models recent neuroimaging data about cognition that remain unnoticed by other theories. A number of properties of language and cognition are explained that previously seemed mysterious, including the influence of language grammar on cultural evolution, which may explain specifics of English and Arabic cultures.

  2. Improving Language Models in Speech-Based Human-Machine Interaction

    Directory of Open Access Journals (Sweden)

    Raquel Justo

    2013-02-01

    This work focuses on speech-based human-machine interaction. Specifically, a Spoken Dialogue System (SDS) that could be integrated into a robot is considered. Since Automatic Speech Recognition is one of the most sensitive tasks that must be confronted in such systems, the goal of this work is to improve the results obtained by this specific module. In order to do so, a hierarchical Language Model (LM) is considered. Different series of experiments were carried out using the proposed models over different corpora and tasks. The results obtained show that these models provide greater accuracy in the recognition task. Additionally, the influence of the Acoustic Modelling (AM) on the improvement percentage of the Language Models has also been explored. Finally, hierarchical Language Models have been successfully employed in a language understanding task, as shown in an additional series of experiments.
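The abstract does not specify the hierarchical language model in detail; one common hierarchical scheme is a class-based bigram, where transition probabilities are estimated over word classes and words are emitted from their class. A minimal sketch under that assumption (the toy corpus and hand-assigned classes are hypothetical, not the paper's data):

```python
from collections import Counter

# Class-based bigram LM: P(w2 | w1) = P(class(w2) | class(w1)) * P(w2 | class(w2)).
# Word classes and corpus are illustrative assumptions.
WORD2CLASS = {"turn": "VERB", "stop": "VERB", "left": "DIR", "right": "DIR"}
CORPUS = [["turn", "left"], ["turn", "right"], ["stop"], ["turn", "left"]]

class_bigrams = Counter()   # counts of (class(w1), class(w2)) pairs
class_unigrams = Counter()  # counts of class tokens
word_counts = Counter()     # counts of word tokens

for sent in CORPUS:
    classes = [WORD2CLASS[w] for w in sent]
    word_counts.update(sent)
    class_unigrams.update(classes)
    class_bigrams.update(zip(classes, classes[1:]))

def prob(w2, w1):
    """Class-transition probability times within-class emission probability."""
    c1, c2 = WORD2CLASS[w1], WORD2CLASS[w2]
    c1_total = sum(v for (a, _), v in class_bigrams.items() if a == c1)
    p_class = class_bigrams[(c1, c2)] / c1_total
    p_word = word_counts[w2] / class_unigrams[c2]
    return p_class * p_word

print(prob("left", "turn"))   # P(DIR | VERB) * P(left | DIR)
```

Pooling counts at the class level is what gives such hierarchical models robustness to sparse training data, which is the usual motivation for using them in speech recognition.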

  3. Contribution of Spoken Language and Socio-Economic Background to Adolescents' Educational Achievement at Age 16 Years

    Science.gov (United States)

    Spencer, Sarah; Clegg, Judy; Stackhouse, Joy; Rush, Robert

    2017-01-01

    Background: Well-documented associations exist between socio-economic background and language ability in early childhood, and between educational attainment and language ability in children with clinically referred language impairment. However, very little research has looked at the associations between language ability, educational attainment and…

  4. Phonological awareness development in children with and without spoken language difficulties: A 12-month longitudinal study of German-speaking pre-school children.

    Science.gov (United States)

    Schaefer, Blanca; Stackhouse, Joy; Wells, Bill

    2017-10-01

    There is strong empirical evidence that English-speaking children with spoken language difficulties (SLD) often have phonological awareness (PA) deficits. The aim of this study was to explore longitudinally if this is also true of pre-school children speaking German, a language that makes extensive use of derivational morphemes which may impact on the acquisition of different PA levels. Thirty 4-year-old children with SLD were assessed on 11 PA subtests at three points over a 12-month period and compared with 97 four-year-old typically developing (TD) children. The TD-group had a mean percentage correct of over 50% for the majority of tasks (including phoneme tasks) and their PA skills developed significantly over time. In contrast, the SLD-group improved their PA performance over time on syllable and rhyme, but not on phoneme level tasks. Group comparisons revealed that children with SLD had weaker PA skills, particularly on phoneme level tasks. The study contributes a longitudinal perspective on PA development before school entry. In line with their English-speaking peers, German-speaking children with SLD showed poorer PA skills than TD peers, indicating that the relationship between SLD and PA is similar across these two related but different languages.

  5. Distinguish Spoken English from Written English: Rich Feature Analysis

    Science.gov (United States)

    Tian, Xiufeng

    2013-01-01

    This article aims at the feature analysis of four expository essays (Text A/B/C/D) written by secondary school students with a focus on the differences between spoken and written language. Texts C and D are better written compared with the other two (Texts A&B), which are considered more spoken in their language use. The language features are…

  6. Domain Specific Languages for Interactive Web Services

    DEFF Research Database (Denmark)

    Brabrand, Claus

    This dissertation shows how domain specific languages may be applied to the domain of interactive Web services to obtain flexible, safe, and efficient solutions. We show how each of four key aspects of interactive Web services involving sessions, dynamic creation of HTML/XML documents, form field…, …, that supports virtually all aspects of the development of interactive Web services and provides flexible, safe, and efficient solutions.

  7. Speech and Language Interaction in a Web Theatre Environment

    NARCIS (Netherlands)

    Nijholt, Antinus; Dalsgaard, P.; Hulstijn, J.; Lee, C.H.; Heisterkamp, P.; van Hessen, Adrianus J.; Cole, R.

    1999-01-01

    We discuss research on interaction in a virtual theatre that can be accessed through Web pages. In the environment we employ several agents. The virtual theatre allows navigation through keyboard and mouse, but there is also a navigation agent which listens to typed input and spoken commands. We

  8. Interactive System for Polish Signed Language Learning

    Directory of Open Access Journals (Sweden)

    Karolina Olga Nurzyńska

    2006-07-01

    The aim of this study is to present an overview of a computer signed language course with a module for automatic signed language recognition as part of a language acquisition test. The idea of creating an interactive sign language learning system seems to be a new one. We hope that this solution helps to overcome the barrier between the silent and hearing worlds. On the other hand, we concentrate our efforts on creating a system for home use that will not need any sophisticated hardware. Moreover, we emphasize the use of an already proposed and popular description scheme. The MPEG-7 standard, formally called the Multimedia Content Description Interface, has been chosen. This standard provides a rich set of tools for complete multimedia content description. Its most important capability for sign language is the possibility of describing both static and dynamic features of objects in image sequences. This description schema gives the opportunity to create a description of a signing person at the required level of granularity. The article gives a brief description of many suggested solutions for semi-automatic or automatic sign language recognition systems. In addition, some existing learning applications aimed at teaching sign languages are described. The main groups that can be distinguished are: animated avatar observation, messengers for deaf people, and testing progress in learning sign languages using education platforms.

  9. Reply to David Kemmerer's "a critique of Mark D. Allen's 'the preservation of verb subcategory knowledge in a spoken language comprehension deficit'".

    Science.gov (United States)

    Allen, Mark D; Owens, Tyler E

    2008-07-01

    Allen [Allen, M. D. (2005). The preservation of verb subcategory knowledge in a spoken language comprehension deficit. Brain and Language, 95, 255-264] presents evidence from a single patient, WBN, to motivate a theory of lexical processing and representation in which syntactic information may be encoded and retrieved independently of semantic information. In his critique, Kemmerer argues that because Allen depended entirely on preposition-based verb subcategory violations to test WBN's knowledge of correct argument structure, his results, at best, address a "strawman" theory. This argument rests on the assumption that preposition subcategory options are superficial syntactic phenomena which are not represented by argument structure proper. We demonstrate that preposition subcategory is in fact treated as semantically determined argument structure in the theories that Allen evaluated, and thus far from irrelevant. In further discussion of grammatically relevant versus irrelevant semantic features, Kemmerer offers a review of his own studies. However, due to an important design shortcoming in these experiments, we remain unconvinced. Reemphasizing the fact that Allen (2005) never claimed to rule out all semantic contributions to syntax, we propose an improvement in Kemmerer's approach that might provide more satisfactory evidence on the distinction between the kinds of relevant versus irrelevant features his studies have addressed.

  10. Age and amount of exposure to a foreign language during childhood: behavioral and ERP data on the semantic comprehension of spoken English by Japanese children.

    Science.gov (United States)

    Ojima, Shiro; Matsuba-Kurita, Hiroko; Nakamura, Naoko; Hoshino, Takahiro; Hagiwara, Hiroko

    2011-06-01

    Children's foreign-language (FL) learning is a matter of much social as well as scientific debate. Previous behavioral research indicates that starting language learning late in life can lead to problems in phonological processing. Inadequate phonological capacity may impede lexical learning and semantic processing (phonological bottleneck hypothesis). Using both behavioral and neuroimaging data, here we examine the effects of age of first exposure (AOFE) and total hours of exposure (HOE) to English, on 350 Japanese primary-school children's semantic processing of spoken English. Children's English proficiency scores and N400 event-related brain potentials (ERPs) were analyzed in multiple regression analyses. The results showed (1) that later, rather than earlier, AOFE led to higher English proficiency and larger N400 amplitudes, when HOE was controlled for; and (2) that longer HOE led to higher English proficiency and larger N400 amplitudes, whether AOFE was controlled for or not. These data highlight the important role of amount of exposure in FL learning, and cast doubt on the view that starting FL learning earlier always produces better results. Copyright © 2011 Elsevier Ireland Ltd and the Japan Neuroscience Society. All rights reserved.

  11. Investigating L2 Spoken English through the Role Play Learner Corpus

    Science.gov (United States)

    Nava, Andrea; Pedrazzini, Luciana

    2011-01-01

    We describe an exploratory study carried out within the University of Milan, Department of English the aim of which was to analyse features of the spoken English of first-year Modern Languages undergraduates. We compiled a learner corpus, the "Role Play" corpus, which consisted of 69 role-play interactions in English carried out by…

  12. A Mother Tongue Spoken Mainly by Fathers.

    Science.gov (United States)

    Corsetti, Renato

    1996-01-01

    Reviews what is known about Esperanto as a home language and first language. Recorded cases of Esperanto-speaking families are known since 1919, and in nearly all of the approximately 350 families documented, the language is spoken to the children by the father. The data suggests that this "artificial bilingualism" can be as successful…

  13. Quarterly Data for Spoken Language Preferences of Supplemental Security Income Blind and Disabled Applicants (2014-2015)

    Data.gov (United States)

    Social Security Administration — This data set provides quarterly volumes for language preferences at the national level of individuals filing claims for SSI Blind and Disabled benefits for fiscal...

  14. Social Security Administration - Quarterly Data for Spoken Language Preferences of Supplemental Security Income Blind and Disabled Applicants (2016-onwards)

    Data.gov (United States)

    Social Security Administration — This data set provides quarterly volumes for language preferences at the national level of individuals filing claims for SSI Blind and Disabled benefits from fiscal...

  15. Introducing Spoken Dialogue Systems into Intelligent Environments

    CERN Document Server

    Heinroth, Tobias

    2013-01-01

    Introducing Spoken Dialogue Systems into Intelligent Environments outlines the formalisms of a novel knowledge-driven framework for spoken dialogue management and presents the implementation of a model-based Adaptive Spoken Dialogue Manager (ASDM) called OwlSpeak. The authors have identified three stakeholders that potentially influence the behavior of the ASDM: the user, the SDS, and a complex Intelligent Environment (IE) consisting of various devices, services, and task descriptions. The theoretical foundation of a working ontology-based spoken dialogue description framework, the prototype implementation of the ASDM, and the evaluation activities that are presented as part of this book contribute to the ongoing spoken dialogue research by establishing the fertile ground of model-based adaptive spoken dialogue management. This monograph is ideal for advanced undergraduate students, PhD students, and postdocs as well as academic and industrial researchers and developers in speech and multimodal interactive ...

  16. Towards Adaptive Spoken Dialog Systems

    CERN Document Server

    Schmitt, Alexander

    2013-01-01

    In Monitoring Adaptive Spoken Dialog Systems, authors Alexander Schmitt and Wolfgang Minker investigate statistical approaches that allow for the recognition of negative dialog patterns in Spoken Dialog Systems (SDS). The presented stochastic methods allow flexible, portable and accurate use. Beginning with the foundations of machine learning and pattern recognition, this monograph examines how frequently users show negative emotions in spoken dialog systems and develops novel approaches to speech-based emotion recognition, using a hybrid approach to model emotions. The authors make use of statistical methods based on acoustic, linguistic and contextual features to examine the relationship between the interaction flow and the occurrence of emotions, using non-acted recordings of several thousand real users from commercial and non-commercial SDS. Additionally, the authors present novel statistical methods that spot problems within a dialog based on interaction patterns. The approaches enable future SDS to offer m...

  17. Fast mapping semantic features: performance of adults with normal language, history of disorders of spoken and written language, and attention deficit hyperactivity disorder on a word-learning task.

    Science.gov (United States)

    Alt, Mary; Gutmann, Michelle L

    2009-01-01

    This study was designed to test the word learning abilities of adults with typical language abilities, those with a history of disorders of spoken or written language (hDSWL), and hDSWL plus attention deficit hyperactivity disorder (+ADHD). Sixty-eight adults were required to associate a novel object with a novel label, and then recognize semantic features of the object and phonological features of the label. Participants were tested for overt ability (accuracy) and covert processing (reaction time). The +ADHD group was less accurate at mapping semantic features and slower to respond to lexical labels than both other groups. Different factors correlated with word learning performance for each group. Adults with language and attention deficits are more impaired at word learning than adults with language deficits only. Despite behavioral profiles like typical peers, adults with hDSWL may use different processing strategies than their peers. Readers will be able to: (1) recognize the influence of a dual disability (hDSWL and ADHD) on word learning outcomes; (2) identify factors that may contribute to word learning in adults in terms of (a) the nature of the words to be learned and (b) the language processing of the learner.

  18. Web-based mini-games for language learning that support spoken interaction

    CSIR Research Space (South Africa)

    Strik, H

    2015-09-01

    Full Text Available proficiency in Dutch, French, and English through web-based mini-games. These mini-games were tested in four countries: the Netherlands (Dutch), Belgium (French), and the United Kingdom and South Africa (English). Four types of mini-games were developed, and in two...

  19. Spoken word recognition in young tone language learners: Age-dependent effects of segmental and suprasegmental variation.

    Science.gov (United States)

    Ma, Weiyi; Zhou, Peng; Singh, Leher; Gao, Liqun

    2017-02-01

    The majority of the world's languages rely on both segmental (vowels, consonants) and suprasegmental (lexical tones) information to contrast the meanings of individual words. However, research on early language development has mostly focused on the acquisition of vowel-consonant languages. Developmental research comparing sensitivity to segmental and suprasegmental features in young tone learners is extremely rare. This study examined 2- and 3-year-old monolingual tone learners' sensitivity to vowels and tones. Experiment 1a tested the influence of vowel and tone variation on novel word learning. Vowel and tone variation hindered word recognition efficiency in both age groups. However, tone variation hindered word recognition accuracy only in 2-year-olds, while 3-year-olds were insensitive to tone variation. Experiment 1b demonstrated that 3-year-olds could use tones to learn new words when additional support was provided, and additionally, that Tone 3 words were exceptionally difficult to learn. Experiment 2 confirmed a similar pattern of results when children were presented with familiar words. This study is the first to show that despite the importance of tones in tone languages, vowels maintain primacy over tones in young children's word recognition and that tone sensitivity in word learning and recognition changes between 2 and 3 years of age. The findings suggest that early lexical processes are more tightly constrained by variation in vowels than by tones. Copyright © 2016 Elsevier B.V. All rights reserved.

  20. Utah State University: Cross-Discipline Training through the Graduate Studies Program in Auditory Learning & Spoken Language

    Science.gov (United States)

    Houston, K. Todd

    2010-01-01

    Since 1946, Utah State University (USU) has offered specialized coursework in audiology and speech-language pathology, awarding the first graduate degrees in 1948. In 1965, the teacher training program in deaf education was launched. Over the years, the Department of Communicative Disorders and Deaf Education (COMD-DE) has developed a rich history…

  1. A Pilot Study of Telepractice for Teaching Listening and Spoken Language to Mandarin-Speaking Children with Congenital Hearing Loss

    Science.gov (United States)

    Chen, Pei-Hua; Liu, Ting-Wei

    2017-01-01

    Telepractice provides an alternative form of auditory-verbal therapy (eAVT) intervention through videoconferencing; this can be of immense benefit for children with hearing loss, especially those living in rural or remote areas. The effectiveness of eAVT for the language development of Mandarin-speaking preschoolers with hearing loss was…

  2. Learning procedures from interactive natural language instructions

    Science.gov (United States)

    Huffman, Scott B.; Laird, John E.

    1994-01-01

    Despite its ubiquity in human learning, very little work has been done in artificial intelligence on agents that learn from interactive natural language instructions. In this paper, the problem of learning procedures from interactive, situated instruction is examined in which the student is attempting to perform tasks within the instructional domain, and asks for instruction when it is needed. Presented is Instructo-Soar, a system that behaves and learns in response to interactive natural language instructions. Instructo-Soar learns completely new procedures from sequences of instruction, and also learns how to extend its knowledge of previously known procedures to new situations. These learning tasks require both inductive and analytic learning. Instructo-Soar exhibits a multiple execution learning process in which initial learning has a rote, episodic flavor, and later executions allow the initially learned knowledge to be generalized properly.

  3. Ragnar Rommetveit's Approach to Everyday Spoken Dialogue from Within.

    Science.gov (United States)

    Kowal, Sabine; O'Connell, Daniel C

    2016-04-01

    The following article presents basic concepts and methods of Ragnar Rommetveit's (born 1924) hermeneutic-dialogical approach to everyday spoken dialogue with a focus on both shared consciousness and linguistically mediated meaning. He developed this approach originally in his engagement of mainstream linguistic and psycholinguistic research of the 1960s and 1970s. He criticized this research tradition for its individualistic orientation and its adherence to experimental methodology which did not allow the engagement of interactively established meaning and understanding in everyday spoken dialogue. As a social psychologist influenced by phenomenological philosophy, Rommetveit opted for an alternative conceptualization of such dialogue as a contextualized, partially private world, temporarily co-established by interlocutors on the basis of shared consciousness. He argued that everyday spoken dialogue should be investigated from within, i.e., from the perspectives of the interlocutors and from a psychology of the second person. Hence, he developed his approach with an emphasis on intersubjectivity, perspectivity and perspectival relativity, meaning potential of utterances, and epistemic responsibility of interlocutors. In his methods, he limited himself for the most part to casuistic analyses, i.e., logical analyses of fictitious examples to argue for the plausibility of his approach. After many years of experimental research on language, he pursued his phenomenologically oriented research on dialogue in English-language publications from the late 1980s up to 2003. During that period, he engaged psycholinguistic research on spoken dialogue carried out by Anglo-American colleagues only occasionally. Although his work remained unfinished and open to development, it provides both a challenging alternative and supplement to current Anglo-American research on spoken dialogue and some overlap therewith.

  4. Inferential language use by school-aged boys with fragile X syndrome: Effects of a parent-implemented spoken language intervention.

    Science.gov (United States)

    Nelson, Sarah; McDuffie, Andrea; Banasik, Amy; Tempero Feigles, Robyn; Thurman, Angela John; Abbeduto, Leonard

    This study examined the impact of a distance-delivered parent-implemented narrative language intervention on the use of inferential language during shared storytelling by school-aged boys with fragile X syndrome, an inherited neurodevelopmental disorder. Nineteen school-aged boys with FXS and their biological mothers participated. Dyads were randomly assigned to an intervention or a treatment-as-usual comparison group. Transcripts from all pre- and post-intervention sessions were coded for child use of prompted and spontaneous inferential language coded into various categories. Children in the intervention group used more utterances that contained inferential language than the comparison group at post-intervention. Furthermore, children in the intervention group used more prompted inferential language than the comparison group at post-intervention, but there were no differences between the groups in their spontaneous use of inferential language. Additionally, children in the intervention group demonstrated increases from pre- to post-intervention in their use of most categories of inferential language. This study provides initial support for the utility of a parent-implemented language intervention for increasing the use of inferential language by school-aged boys with FXS, but also suggests the need for additional treatment to encourage spontaneous use. Copyright © 2018 Elsevier Inc. All rights reserved.

  5. Simultaneous perception of a spoken and a signed language: The brain basis of ASL-English code-blends

    Science.gov (United States)

    Weisberg, Jill; McCullough, Stephen; Emmorey, Karen

    2018-01-01

    Code-blends (simultaneous words and signs) are a unique characteristic of bimodal bilingual communication. Using fMRI, we investigated code-blend comprehension in hearing native ASL-English bilinguals who made a semantic decision (edible?) about signs, audiovisual words, and semantically equivalent code-blends. English and ASL recruited a similar fronto-temporal network with expected modality differences: stronger activation for English in auditory regions of bilateral superior temporal cortex, and stronger activation for ASL in bilateral occipitotemporal visual regions and left parietal cortex. Code-blend comprehension elicited activity in a combination of these regions, and no cognitive control regions were additionally recruited. Furthermore, code-blends elicited reduced activation relative to ASL presented alone in bilateral prefrontal and visual extrastriate cortices, and relative to English alone in auditory association cortex. Consistent with behavioral facilitation observed during semantic decisions, the findings suggest that redundant semantic content induces more efficient neural processing in language and sensory regions during bimodal language integration. PMID:26177161

  6. Czech spoken in Bohemia and Moravia

    NARCIS (Netherlands)

    Šimáčková, Š.; Podlipský, V.J.; Chládková, K.

    2012-01-01

    As a western Slavic language of the Indo-European family, Czech is closest to Slovak and Polish. It is spoken as a native language by nearly 10 million people in the Czech Republic (Czech Statistical Office n.d.). About two million people living abroad, mostly in the USA, Canada, Austria, Germany,

  7. The expressions of spatial relations during interaction in american sign language, croatian sign language, and turkish sign language

    OpenAIRE

    Arık, Engin

    2012-01-01

    Signers use their body and the space in front of them iconically. Does iconicity lead to the same mapping strategies in construing space during interaction across sign languages? The present study addressed this question by conducting an experimental study on basic static and motion event descriptions during interaction (describer input and addressee re-signing/retelling) in American Sign Language, Croatian Sign Language, and Turkish Sign Language. I found that the three sign languages are si...

  8. Informal Language Learning Setting: Technology or Social Interaction?

    Science.gov (United States)

    Bahrani, Taher; Sim, Tam Shu

    2012-01-01

    Based on the informal language learning theory, language learning can occur outside the classroom setting unconsciously and incidentally through interaction with the native speakers or exposure to authentic language input through technology. However, an EFL context lacks the social interaction which naturally occurs in an ESL context. To explore…

  9. LANGUAGE POLICIES PURSUED IN THE AXIS OF OTHERING AND IN THE PROCESS OF CONVERTING SPOKEN LANGUAGE OF TURKS LIVING IN RUSSIA INTO THEIR WRITTEN LANGUAGE / RUSYA'DA YASAYAN TÜRKLERİN KONUSMA DİLLERİNİN YAZI DİLİNE DÖNÜSTÜRÜLME SÜRECİ VE ÖTEKİLESTİRME EKSENİNDE İZLENEN DİL POLİTİKALARI

    Directory of Open Access Journals (Sweden)

    Süleyman Kaan YALÇIN (M.A.H.

    2008-12-01

    Full Text Available Language is an object realized in two ways: spoken language and written language. Each language can have the characteristics of a spoken language; however, not every language can have the characteristics of a written language, since there are some requirements for a language to be deemed a written language. These requirements are selection, coding, standardization and becoming widespread. It is necessary for a language to meet these requirements, in either a natural or an artificial way, to be deemed a written language (standard language). Turkish, which developed as a single written language until the 13th century, was divided into West Turkish and North-East Turkish by meeting the requirements of a written language in a natural way. Following this separation and through a natural process, it showed some differences in itself; however, the policy of converting the spoken language of each Turkish clan into its own written language -a policy pursued by Russia in a planned way- turned Turkish, which came to the 20th century as a few written languages, into 20 different written languages. The implementation of discriminatory language policies suggested to the Russian government by missionaries such as Slinky and Ostramov, the forcible imposition of a Cyrillic alphabet full of different and unnecessary signs on each Turkish clan, and the othering activities of the Soviet boarding schools that were opened had considerable effects on this process. This study aims at explaining that the conversion of the spoken languages of the Turkish societies in Russia into written languages did not result from a natural process; the historical development of the Turkish language, which was shaped into 20 separate written languages only because of the pressure exerted by political will; and how Russia subjected the language concept -which is the memory of a nation- to an artificial process.

  10. Language Maintenance in a Multilingual Family: Informal Heritage Language Lessons in Parent-Child Interactions

    OpenAIRE

    Kheirkhah, Mina; Cekaite, Asta

    2015-01-01

    The present study explores language socialization patterns in a Persian-Kurdish family in Sweden and examines how "one-parent, one-language" family language policies are instantiated and negotiated in parent-child interactions. The data consist of video-recordings and ethnographic observations of family interactions, as well as interviews. Detailed interactional analysis is employed to investigate parental language maintenance efforts and the child's agentive orientation in relation to the rec...

  11. Language Maintenance in a Multilingual Family: Informal Heritage Language Lessons in Parent-Child Interactions

    Science.gov (United States)

    Kheirkhah, Mina; Cekaite, Asta

    2015-01-01

    The present study explores language socialization patterns in a Persian-Kurdish family in Sweden and examines how "one-parent, one-language" family language policies are instantiated and negotiated in parent-child interactions. The data consist of video-recordings and ethnographic observations of family interactions, as well as…

  12. Orthographic Facilitation in Chinese Spoken Word Recognition: An ERP Study

    Science.gov (United States)

    Zou, Lijuan; Desroches, Amy S.; Liu, Youyi; Xia, Zhichao; Shu, Hua

    2012-01-01

    Orthographic influences in spoken word recognition have been previously examined in alphabetic languages. However, it is unknown whether orthographic information affects spoken word recognition in Chinese, which has a clean dissociation between orthography (O) and phonology (P). The present study investigated orthographic effects using event…

  13. Spoken and Written Communication: Are Five Vowels Enough?

    Science.gov (United States)

    Abbott, Gerry

    The comparatively small vowel inventory of Bantu languages leads young Bantu learners to produce "undifferentiations," so that, for example, the spoken forms of "hat,""hut,""heart" and "hurt" sound the same to a British ear. The two criteria for a non-native speaker's spoken performance are…

  14. Discourse before gender: An event-related brain potential study on the interplay of semantic and syntactic information during spoken language understanding

    NARCIS (Netherlands)

    Brown, C.M.; Berkum, J.J.A. van; Hagoort, P.

    2000-01-01

    A study is presented on the effects of discourse-semantic and lexical-syntactic information during spoken sentence processing. Event-related brain potentials (ERPs) were registered while subjects listened to discourses that ended in a sentence with a temporary syntactic ambiguity. The prior…

  15. Emotion in languaging: languaging as affective, adaptive, and flexible behavior in social interaction

    Science.gov (United States)

    Jensen, Thomas W.

    2014-01-01

    This article argues for a view of languaging as inherently affective. Informed by recent ecological tendencies within cognitive science and distributed language studies, a distinction between first-order languaging (language as whole-body sense making) and second-order language (language as system-like constraints) is put forward. Contrary to common assumptions within linguistics and communication studies that separate language-as-a-system from language use (resulting in separations between language vs. body language, verbal vs. non-verbal communication, etc.), the first/second-order distinction sees language as emanating from behavior, making it possible to view emotion and affect as integral parts of languaging behavior. Likewise, emotion and affect are studied not as inner mental states, but as processes of organism-environment interactions. Based on video recordings of interaction between (1) children with special needs and (2) a couple in therapy and their therapist, patterns of reciprocal influence between interactants are examined. Through analyses of affective stance and patterns of inter-affectivity, it is exemplified how language and emotion should not be seen as separate phenomena combined in language use, but rather as completely intertwined phenomena in languaging behavior constrained by second-order patterns. PMID:25076921

  16. Syllable frequency and word frequency effects in spoken and written word production in a non-alphabetic script.

    Science.gov (United States)

    Zhang, Qingfang; Wang, Cheng

    2014-01-01

    The effects of word frequency (WF) and syllable frequency (SF) are well-established phenomena in domains such as spoken production in alphabetic languages. Chinese, as a non-alphabetic language, presents unique lexical and phonological properties in speech production. For example, the proximate unit of phonological encoding is the syllable in Chinese but the segment in Dutch, French or English. The present study investigated the effects of WF and SF, and their interaction, in Chinese written and spoken production. Significant facilitatory WF and SF effects were observed in spoken as well as in written production. The SF effect in writing indicated that phonological properties (i.e., syllabic frequency) constrain orthographic output via a lexical route, at least in Chinese written production. However, the SF effect across repetitions diverged between the two modalities: it was significant in the first two repetitions in spoken production, whereas it was significant only in the second repetition in written production. Given the fragility of the SF effect in writing, we suggest that the phonological influence on handwritten production is not mandatory and universal, and that it is modulated by experimental manipulations. This provides evidence for the orthographic autonomy hypothesis rather than the phonological mediation hypothesis. The absence of an interaction between WF and SF showed that the SF effect is independent of the WF effect in both spoken and written output modalities. The implications of these results for written production models are discussed.

  17. Syllable frequency and word frequency effects in spoken and written word production in a non-alphabetic script

    Directory of Open Access Journals (Sweden)

    Qingfang eZhang

    2014-02-01

    Full Text Available The effects of word frequency and syllable frequency are well-established phenomena in domains such as spoken production in alphabetic languages. Chinese, as a non-alphabetic language, presents unique lexical and phonological properties in speech production. For example, the proximate unit of phonological encoding is the syllable in Chinese but the segment in Dutch, French or English. The present study investigated the effects of word frequency and syllable frequency, and their interaction, in Chinese written and spoken production. Significant facilitatory word frequency and syllable frequency effects were observed in spoken as well as in written production. The syllable frequency effect in writing indicated that phonological properties (i.e., syllabic frequency) constrain orthographic output via a lexical route, at least in Chinese written production. However, the syllable frequency effect across repetitions diverged between the two modalities: it was significant in the first two repetitions in spoken production, whereas it was significant only in the second repetition in written production. Given the fragility of the syllable frequency effect in writing, we suggest that the phonological influence on handwritten production is not mandatory and universal, and that it is modulated by experimental manipulations. This provides evidence for the orthographic autonomy hypothesis rather than the phonological mediation hypothesis. The absence of an interaction between word frequency and syllable frequency showed that the syllable frequency effect is independent of the word frequency effect in both spoken and written output modalities. The implications of these results for written production models are discussed.

  18. Bilingual Parents' Modeling of Pragmatic Language Use in Multiparty Interactions

    Science.gov (United States)

    Tare, Medha; Gelman, Susan A.

    2011-01-01

    Parental input represents an important source of language socialization. Particularly in bilingual contexts, parents may model pragmatic language use and metalinguistic strategies to highlight language differences. The present study examines multiparty interactions involving 28 bilingual English- and Marathi-speaking parent-child pairs in the…

  19. Investigating Stratification, Language Diversity and Mathematics Classroom Interaction

    Science.gov (United States)

    Barwell, Richard

    2016-01-01

    Research on the socio-political dimensions of language diversity in mathematics classrooms is under-theorised and largely focuses on language choice. These dimensions are, however, likely to influence mathematics classroom interaction in many other ways than participants' choice of language. To investigate these influences, I propose that the…

  20. Making a Difference: Language Teaching for Intercultural and International Dialogue

    Science.gov (United States)

    Byram, Michael; Wagner, Manuela

    2018-01-01

    Language teaching has long been associated with teaching in a country or countries where a target language is spoken, but this approach is inadequate. In the contemporary world, language teaching has a responsibility to prepare learners for interaction with people of other cultural backgrounds, teaching them skills and attitudes as well as…

  1. Discussion Forum Interactions: Text and Context

    Science.gov (United States)

    Montero, Begona; Watts, Frances; Garcia-Carbonell, Amparo

    2007-01-01

    Computer-mediated communication (CMC) is currently used in language teaching as a bridge for the development of written and spoken skills [Kern, R., 1995. "Restructuring classroom interaction with networked computers: effects on quantity and characteristics of language production." "The Modern Language Journal" 79, 457-476]. Within CMC…

  2. Can non-interactive language input benefit young second-language learners?

    Science.gov (United States)

    Au, Terry Kit-Fong; Chan, Winnie Wailan; Cheng, Liao; Siegel, Linda S; Tso, Ricky Van Yip

    2015-03-01

    To fully acquire a language, especially its phonology, children need linguistic input from native speakers early on. When interaction with native speakers is not always possible - e.g. for children learning a second language that is not the societal language - audio recordings are commonly used as an affordable substitute. But does such non-interactive input work? Two experiments evaluated the usefulness of audio storybooks in acquiring a more native-like second-language accent. Young children, first- and second-graders in Hong Kong whose native language was Cantonese Chinese, were given take-home listening assignments in a second language, either English or Putonghua Chinese. Accent ratings of the children's story reading revealed measurable benefits of non-interactive input from native speakers. The benefits were far more robust for Putonghua than English. Implications for second-language accent acquisition are discussed.

  3. Orthographic effects in spoken word recognition: Evidence from Chinese.

    Science.gov (United States)

    Qu, Qingqing; Damian, Markus F

    2017-06-01

    Extensive evidence from alphabetic languages demonstrates a role of orthography in the processing of spoken words. Because alphabetic systems explicitly code speech sounds, such effects are perhaps not surprising. However, it is less clear whether orthographic codes are involuntarily accessed from spoken words in languages with non-alphabetic systems, in which the sound-spelling correspondence is largely arbitrary. We investigated the role of orthography via a semantic relatedness judgment task: native Mandarin speakers judged whether or not spoken word pairs were related in meaning. Word pairs were either semantically related, orthographically related, or unrelated. Results showed that relatedness judgments were made faster for word pairs that were semantically related than for unrelated word pairs. Critically, orthographic overlap on semantically unrelated word pairs induced a significant increase in response latencies. These findings indicate that orthographic information is involuntarily accessed in spoken-word recognition, even in a non-alphabetic language such as Chinese.

  4. HI-VISUAL: A language supporting visual interaction in programming

    International Nuclear Information System (INIS)

    Monden, N.; Yoshino, Y.; Hirakawa, M.; Tanaka, M.; Ichikawa, T.

    1984-01-01

    This paper presents a language named HI-VISUAL which supports visual interaction in programming. Following a brief description of the language concept, the icon semantics and language primitives characterizing HI-VISUAL are extensively discussed. HI-VISUAL also shows system extendability, providing the possibility of organizing a high-level application system as an integration of several existing subsystems, and will serve in developing systems in various fields of application, supporting simple and efficient interactions between programmer and computer.

  5. Non Linear Dynamics in Language and Psychobiological Interactions

    Science.gov (United States)

    Orsucci, Franco

    Language and thinking give us access to what is usually called natural and social reality. Language and thinking can be considered as parts of a semiotic universe of different entities from which different subsets emerge. Some of these can have peculiar functions in interpersonal interactions and biological transductions. Nonlinear studies at the morphological level of language are opening new perspectives in this area of the Mind-Sciences.

  6. How relevant is social interaction in second language learning?

    Directory of Open Access Journals (Sweden)

    Laura eVerga

    2013-09-01

    Full Text Available Verbal language is the most widespread mode of human communication, and an intrinsically social activity. This claim is strengthened by evidence emerging from different fields, which clearly indicates that social interaction influences human communication, and more specifically, language learning. Indeed, research conducted with infants and children shows that interaction with a caregiver is necessary to acquire language. Further evidence on the influence of sociality on language comes from social and linguistic pathologies, in which deficits in social and linguistic abilities are tightly intertwined, as is the case for Autism, for example. However, studies on adult second language learning have been mostly focused on individualistic approaches, partly because of methodological constraints, especially of imaging methods. The question as to whether social interaction should be considered as a critical factor impacting upon adult language learning still remains underspecified. Here, we review evidence in support of the view that sociality plays a significant role in communication and language learning, in an attempt to emphasize factors that could facilitate this process in adult language learning. We suggest that sociality should be considered as a potentially influential factor in adult language learning and that future studies in this domain should explicitly target this factor.

  7. How relevant is social interaction in second language learning?

    Science.gov (United States)

    Verga, Laura; Kotz, Sonja A

    2013-09-03

    Verbal language is the most widespread mode of human communication, and an intrinsically social activity. This claim is strengthened by evidence emerging from different fields, which clearly indicates that social interaction influences human communication, and more specifically, language learning. Indeed, research conducted with infants and children shows that interaction with a caregiver is necessary to acquire language. Further evidence on the influence of sociality on language comes from social and linguistic pathologies, in which deficits in social and linguistic abilities are tightly intertwined, as is the case for Autism, for example. However, studies on adult second language (L2) learning have been mostly focused on individualistic approaches, partly because of methodological constraints, especially of imaging methods. The question as to whether social interaction should be considered as a critical factor impacting upon adult language learning still remains underspecified. Here, we review evidence in support of the view that sociality plays a significant role in communication and language learning, in an attempt to emphasize factors that could facilitate this process in adult language learning. We suggest that sociality should be considered as a potentially influential factor in adult language learning and that future studies in this domain should explicitly target this factor.

  8. Skype me! Socially contingent interactions help toddlers learn language.

    Science.gov (United States)

    Roseberry, Sarah; Hirsh-Pasek, Kathy; Golinkoff, Roberta M

    2014-01-01

    Language learning takes place in the context of social interactions, yet the mechanisms that render social interactions useful for learning language remain unclear. This study focuses on whether social contingency might support word learning. Toddlers aged 24-30 months (N = 36) were exposed to novel verbs in one of three conditions: live interaction training, socially contingent video training over video chat, and noncontingent video training (yoked video). Results suggest that children only learned novel verbs in socially contingent interactions (live interactions and video chat). This study highlights the importance of social contingency in interactions for language learning and informs the literature on learning through screen media as the first study to examine word learning through video chat technology. © 2013 The Authors. Child Development © 2013 Society for Research in Child Development, Inc.

  9. Skype me! Socially Contingent Interactions Help Toddlers Learn Language

    Science.gov (United States)

    Roseberry, Sarah; Hirsh-Pasek, Kathy; Golinkoff, Roberta Michnick

    2013-01-01

    Language learning takes place in the context of social interactions, yet the mechanisms that render social interactions useful for learning language remain unclear. This paper focuses on whether social contingency might support word learning. Toddlers aged 24 to 30 months (N = 36) were exposed to novel verbs in one of three conditions: live interaction training, socially contingent video training over video chat, and non-contingent video training (yoked video). Results suggest that children only learned novel verbs in socially contingent interactions (live interactions and video chat). The current study highlights the importance of social contingency in interactions for language learning and informs the literature on learning through screen media as the first study to examine word learning through video chat technology. PMID:24112079

  10. Analysis of IUE spectra using the interactive data language

    Science.gov (United States)

    Joseph, C. L.

    1981-01-01

    The Interactive Data Language (IDL) is used to analyze high resolution spectra from the IUE. Like other interactive languages, IDL is designed for use by the scientist rather than the professional programmer, allowing him to conceive of his data as simple entities and to operate on this data with minimal difficulty. A package of programs created to analyze interstellar absorption lines is presented as an example of the graphical power of IDL.
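    The array-at-a-time style the abstract credits to IDL — operating on a whole spectrum as a single entity — can be sketched in Python with NumPy as a stand-in for IDL. The Gaussian absorption line, continuum level, and wavelength range below are invented for illustration, not taken from the IUE data:

```python
import numpy as np

# Synthetic high-resolution spectrum: flat continuum with one Gaussian
# interstellar absorption line (all parameter values invented).
wavelength = np.linspace(1540.0, 1560.0, 2001)   # angstroms
depth, center, width = 0.8, 1550.0, 0.5
flux = 1.0 - depth * np.exp(-((wavelength - center) / width) ** 2)

# The whole spectrum is treated as one entity: the equivalent width of
# the line is a single array expression, with no explicit loops.
step = wavelength[1] - wavelength[0]
ew = np.sum(1.0 - flux) * step   # analytic value: depth * width * sqrt(pi)
print(round(ew, 3))
```

The same one-expression style extends naturally to continuum fitting and plotting, which is the appeal of an interactive array language for a scientist rather than a professional programmer.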

  11. Teaching Spoken Spanish

    Science.gov (United States)

    Lipski, John M.

    1976-01-01

    The need to teach students speaking skills in Spanish, and to choose among the many standard dialects spoken in the Hispanic world (as well as literary and colloquial speech), presents a challenge to the Spanish teacher. Some phonetic considerations helpful in solving these problems are offered. (CHK)

  12. Language development in deaf children’s interactions with deaf and hearing adults. A Dutch longitudinal study

    NARCIS (Netherlands)

    Klatter-Folmer, H.A.K.; Hout, R.W.N.M. van; Kolen, E.; Verhoeven, L.T.W.

    2006-01-01

    The language development of two deaf girls and four deaf boys in Sign Language of the Netherlands (SLN) and spoken Dutch was investigated longitudinally. At the start, the mean age of the children was 3;5. All data were collected in video-recorded semistructured conversations between individual

  13. Phonological Analysis of University Students’ Spoken Discourse

    Directory of Open Access Journals (Sweden)

    Clara Herlina

    2011-04-01

    Full Text Available The study of discourse is the study of language in actual use. In this article, the writer investigates the phonological features, both segmental and supra-segmental, in the spoken discourse of Indonesian university students. The data were taken from recordings of 15 conversations by 30 students of Bina Nusantara University who are taking the English Entrant subject (TOEFL-iBT). The writer concludes that the students are still influenced by their first language in their spoken discourse, which results in English with an Indonesian accent. Even though this does not cause misunderstanding at the moment, it may become problematic if they have to communicate in the real world.

  14. Language and Cognition Interaction Neural Mechanisms

    Science.gov (United States)

    2011-06-01

    neurolinguistics,” Behavioral and Brain Sciences, vol. 28, no. 2, pp. 105–124, 2005. [90] W. von Humboldt, Über die Verschiedenheit des menschlichen...wags the dog” as they anchor language sounds and preserve meanings. This, I think, is what Humboldt [90] and Lehmann [91] meant by “firmness” of

  15. Design of Feedback in Interactive Multimedia Language Learning Environments

    Directory of Open Access Journals (Sweden)

    Vehbi Türel

    2012-01-01

    Full Text Available In interactive multimedia environments, different digital elements (i.e. video, audio, visuals, text, animations, graphics and glossary) can be combined and delivered on the same digital computer screen (TDM 1997: 151, CCED 1987, Brett 1998: 81, Stenton 1998: 11, Mangiafico 1996: 46). This also enables the effective provision and presentation of feedback in pedagogically more efficient ways, which meets not only the requirements of different teaching and learning theories, but also the needs of language learners who vary in their learning-style preferences (Robinson 1991: 156, Peter 1994: 157f.). This study aims to bring out the pedagogical and design principles that might help us design and customise feedback in interactive multimedia language learning environments more effectively. In doing so, it shows examples of carefully designed and customised computerised feedback from an interactive multimedia language learning environment which the author created and used for language learning purposes.

  16. The role of foreign and indigenous languages in primary schools ...

    African Journals Online (AJOL)

    This article investigates the use of English and other African languages in Kenyan primary schools. English is a .... For a long time, the issue of the medium of instruction, especially in primary schools, has persisted in spite of .... mother tongue, they use this language for spoken classroom interaction in order to bring about.

  17. Coaching Parents to Use Naturalistic Language and Communication Strategies

    Science.gov (United States)

    Akamoglu, Yusuf; Dinnebeil, Laurie

    2017-01-01

    Naturalistic language and communication strategies (i.e., naturalistic teaching strategies) refer to practices that are used to promote the child's language and communication skills either through verbal (e.g., spoken words) or nonverbal (e.g., gestures, signs) interactions between an adult (e.g., parent, teacher) and a child. Use of naturalistic…

  18. Interaction in a Blended Environment for English Language Learning

    Science.gov (United States)

    Romero Archila, Yuranny Marcela

    2014-01-01

    The purpose of this research was to identify the types of interaction that emerged not only in a Virtual Learning Environment (VLE) but also in face-to-face settings. The study also assessed the impact of the different kinds of interactions in terms of language learning. This is a qualitative case study that took place in a private Colombian…

  19. Language Non-Selective Activation of Orthography during Spoken Word Processing in Hindi-English Sequential Bilinguals: An Eye Tracking Visual World Study

    Science.gov (United States)

    Mishra, Ramesh Kumar; Singh, Niharika

    2014-01-01

    Previous psycholinguistic studies have shown that bilinguals activate lexical items of both the languages during auditory and visual word processing. In this study we examined if Hindi-English bilinguals activate the orthographic forms of phonological neighbors of translation equivalents of the non target language while listening to words either…

  20. Theorizing Language and Discourse for the Interactional Study of Identities.

    Science.gov (United States)

    Korobov, Neill

    2017-03-01

    The following commentary critically reflects on the pragmatic and semiotic approach to language and identity articulated by Tapia, Rojas, and Picado (Culture & Psychology, Tapia et al. 2017). The following questions are central: 1) What theoretical position is (tacitly) being articulated regarding the nature of language and discourse? Although the authors admit that an explicit theorization of language and discourse is not their focus, the absence of a clear theoretical position is conspicuously problematic. And 2) is there an unintended cognitivism present in the way the authors formulate the relationship between language/discourse and identity? After discussing these questions, select parts of a radical interactional approach, grounded in discursive positioning, will be presented as an amendment to the present work, insofar as it attempts to both articulate a progressive theorization of language and discourse and avoid an unintended slide into cognitivism.

  1. Accessing the spoken word

    OpenAIRE

    Goldman, Jerry; Renals, Steve; Bird, Steven; de Jong, Franciska; Federico, Marcello; Fleischhauer, Carl; Kornbluh, Mark; Lamel, Lori; Oard, Douglas W; Stewart, Claire; Wright, Richard

    2005-01-01

    Spoken-word audio collections cover many domains, including radio and television broadcasts, oral narratives, governmental proceedings, lectures, and telephone conversations. The collection, access, and preservation of such data is stimulated by political, economic, cultural, and educational needs. This paper outlines the major issues in the field, reviews the current state of technology, examines the rapidly changing policy issues relating to privacy and copyright, and presents issues relati...

  2. The language spoken at home and disparities in medical and dental health, access to care, and use of services in US children.

    Science.gov (United States)

    Flores, Glenn; Tomany-Korman, Sandra C

    2008-06-01

    Fifty-five million Americans speak a non-English primary language at home, but little is known about health disparities for children in non-English-primary-language households. Our study objective was to examine whether disparities in medical and dental health, access to care, and use of services exist for children in non-English-primary-language households. The National Survey of Childhood Health was a telephone survey in 2003-2004 of a nationwide sample of parents of 102 353 children 0 to 17 years old. Disparities in medical and oral health and health care were examined for children in non-English-primary-language households compared with children in English-primary-language households, both in bivariate analyses and in multivariable analyses that adjusted for 8 covariates (child's age, race/ethnicity, and medical or dental insurance coverage, caregiver's highest educational attainment and employment status, number of children and adults in the household, and poverty status). Children in non-English-primary-language households were significantly more likely than children in English-primary-language households to be poor (42% vs 13%) and Latino or Asian/Pacific Islander. Significantly higher proportions of children in non-English-primary-language households were not in excellent/very good health (43% vs 12%), were overweight/at risk for overweight (48% vs 39%), had teeth in fair/poor condition (27% vs 7%), and were uninsured (27% vs 6%), sporadically insured (20% vs 10%), and lacked dental insurance (39% vs 20%). Children in non-English-primary-language households more often had no usual source of medical care (38% vs 13%), made no medical (27% vs 12%) or preventive dental (14% vs 6%) visits in the previous year, and had problems obtaining specialty care (40% vs 23%). Latino and Asian children in non-English-primary-language households had several unique disparities compared with white children in non-English-primary-language households. Almost all disparities

  3. English Language Learners interactions with various science curriculum features

    Science.gov (United States)

    Norland, Jennifer Jane

    2005-12-01

    The purpose of this study was to examine the interactions of eighth-grade English Language Learners in an inclusive science classroom, an area in which there is a paucity of research. Central to this study were the students' perceptions of and interactions with five science curriculum features: teacher presentation and guided notes, worksheets, homework, labs, and practice and review activities. The student participants were English Language Learners from two language proficiency levels, and the teacher was a provisionally licensed first-year science teacher. The aggregate data included individual interviews with the students and teacher, classroom observations, and the collection of classroom artifacts. The findings revealed: (a) students' comprehension of the material was inconsistent across all of the curriculum features, and differences were observed not only between but also within the two proficiency levels; (b) classroom organizational issues created challenges for both the teacher and the students; (c) off-task behavior was most prevalent during the teacher's one-to-one instruction and interfered with learning; (d) differences between levels of language proficiency were observed between students who preferred to work independently and were comfortable asking the teacher for assistance and students who preferred working with and receiving assistance from peers; and (e) language proficiency rather than cultural differences appeared to be the greatest barrier to classroom success. Overall, English language proficiency was a crucial determinant of English Language Learners' success in the inclusive classroom. The findings also suggest that a limited teaching skill set could adversely affect the success of students in inclusive classrooms.

  4. Fourth International Workshop on Spoken Dialog Systems

    CERN Document Server

    Rosset, Sophie; Garnier-Rizet, Martine; Devillers, Laurence; Natural Interaction with Robots, Knowbots and Smartphones: Putting Spoken Dialog Systems into Practice

    2014-01-01

    These proceedings present the state of the art in spoken dialog systems, with applications in robotics, knowledge access and communication. They specifically address: 1. Dialog for interacting with smartphones; 2. Dialog for open-domain knowledge access; 3. Dialog for robot interaction; 4. Mediated dialog (including crosslingual dialog involving speech translation); and 5. Dialog quality evaluation. These articles were presented at the IWSDS 2012 workshop.

  5. Promoting Interaction through Blogging in Language Classrooms

    Science.gov (United States)

    Gunduz, Muge

    2016-01-01

    This study aims to explore the university students' perception on integration of blogging in EFL classes. In this study, the participants were first year university students (n=103) who created their group blogs in order to share their blog entries during their oral communication classes. Students interacted with their peers via blogs simply by…

  6. The embodied turn in research on language and social interaction

    DEFF Research Database (Denmark)

    Nevile, Maurice

    2015-01-01

    I use the term the embodied turn to mean the point when interest in the body became established among researchers on language and social interaction, exploiting the greater ease of video-recording. This review paper tracks the growth of "embodiment" in over 400 papers published in Research on Language and Social Interaction from 1987-2013. I consider closely two areas where analysts have confronted challenges, and how they have responded: settling on precise and analytically helpful terminology for the body; and transcribing and representing the body, particularly its temporality and manner.

  7. SIGMA, a new language for interactive array-oriented computing

    International Nuclear Information System (INIS)

    Hagedorn, R.; Reinfelds, J.; Vandoni, C.; Hove, L. van.

    1978-01-01

    A description is given of the principles and the main facilities of SIGMA (System for Interactive Graphical Mathematical Applications), a programming language for scientific computing whose major characteristics are: automatic handling of multi-dimensional rectangular arrays as basic data units, interactive operation of the system, and graphical display facilities. After introducing the basic concepts and features of the language, it describes in some detail the methods and operators for the automatic handling of arrays and for their graphical display, the procedures for construction of programs by users, and other facilities of the system. The report is a new version of CERN 73-5. (Auth.)

  8. The impact of law and language as interactive patterns

    Directory of Open Access Journals (Sweden)

    Marina Kaishi

    2016-07-01

    Full Text Available Every country has adopted a certain pattern of law, and this has an impact on language expression and the terminology adopted. It can be tracked by examining and describing the lexical choices and the characteristic structures, which form parallelisms in similar systems. Before proceeding with their linguistic description, it is necessary to explain the differences that exist between the Greek, French, German, and Albanian law systems. It will be evident that they have some points in common, but at the same time differ to a great extent in the way they conceptualize the system. I shall use the Constitution as the basic law and a safe reference point for an explicit comparison. Terminology plays an important role in explaining these systems. Law and language are interactive patterns. We already have a European legal language, but it is time for a more coherent Europe-wide legal language. Linguistic matters have a direct bearing on judicial cases. Inside the EU, the use of different languages is one of the main obstacles to the integration process, and it creates a specific problem for European judges, translators and interpreters. In order to achieve a common usage of the language, we need to develop a curriculum that uses coherent terminology and linguistic patterns. To set a standard for the legal language used in the EU, we should pursue legal harmonization achieved through harmonized terminology. The right usage of the language and its terminology should be understood as a standardization process. European Union language policy is also of great importance because it informs us how to deal with these issues. Finally, the EU consists of 450 million people from different cultures and backgrounds; in this sense the EU is truly a multilingual institution that reinforces the ideal of a single community with different languages and different

  9. Steering the conversation: A linguistic exploration of natural language interactions with a digital assistant during simulated driving.

    Science.gov (United States)

    Large, David R; Clark, Leigh; Quandt, Annie; Burnett, Gary; Skrypchuk, Lee

    2017-09-01

    Given the proliferation of 'intelligent' and 'socially-aware' digital assistants embodying everyday mobile technology - and the undeniable logic that utilising voice-activated controls and interfaces in cars reduces the visual and manual distraction of interacting with in-vehicle devices - it appears inevitable that next generation vehicles will be embodied by digital assistants and utilise spoken language as a method of interaction. From a design perspective, defining the language and interaction style that a digital driving assistant should adopt is contingent on the role that they play within the social fabric and context in which they are situated. We therefore conducted a qualitative, Wizard-of-Oz study to explore how drivers might interact linguistically with a natural language digital driving assistant. Twenty-five participants drove for 10 min in a medium-fidelity driving simulator while interacting with a state-of-the-art, high-functioning, conversational digital driving assistant. All exchanges were transcribed and analysed using recognised linguistic techniques, such as discourse and conversation analysis, normally reserved for interpersonal investigation. Language usage patterns demonstrate that interactions with the digital assistant were fundamentally social in nature, with participants affording the assistant equal social status and high-level cognitive processing capability. For example, participants were polite, actively controlled turn-taking during the conversation, and used back-channelling, fillers and hesitation, as they might in human communication. Furthermore, participants expected the digital assistant to understand and process complex requests mitigated with hedging words and expressions, and peppered with vague language and deictic references requiring shared contextual information and mutual understanding. Findings are presented in six themes which emerged during the analysis - formulating responses; turn-taking; back

  10. Sign language: an international handbook

    NARCIS (Netherlands)

    Pfau, R.; Steinbach, M.; Woll, B.

    2012-01-01

    Sign language linguists show here that all the questions relevant to the linguistic investigation of spoken languages can be asked about sign languages. Conversely, questions that sign language linguists consider - even if spoken language researchers have not asked them yet - should also be asked of

  11. Analysis of event-mode data with Interactive Data Language

    International Nuclear Information System (INIS)

    De Young, P.A.; Hilldore, B.B.; Kiessel, L.M.; Peaslee, G.F.

    2003-01-01

    We have developed an analysis package for event-mode data based on Interactive Data Language (IDL) from Research Systems Inc. This high-level language is fast, array oriented, object oriented, and has extensive visual (multi-dimensional plotting) and mathematical functions. We have developed a general framework, written in IDL, for the analysis of a variety of experimental data that does not require significant customization for each analysis. Unlike in many traditional analysis packages, spectra and gates are applied after the data are read and are easily changed as the analysis proceeds, without rereading the data. The events are not sequentially processed into predetermined arrays subject to predetermined gates.
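    The after-the-fact gating described above can be sketched in Python with NumPy as a stand-in for the authors' IDL framework. The event structure and gate bounds below are invented for illustration:

```python
import numpy as np

# Hypothetical event-mode data: each row is one event with two
# measured quantities (e.g. energy and time), read once into memory.
rng = np.random.default_rng(seed=0)
events = rng.uniform(0.0, 100.0, size=(10_000, 2))

# Gates are defined *after* the data are in memory, so they can be
# redefined interactively without rereading the event stream.
gate = (events[:, 0] > 20.0) & (events[:, 0] < 80.0)

# A spectrum (histogram) built only from the gated events.
spectrum, edges = np.histogram(events[gate, 1], bins=50, range=(0.0, 100.0))
print(gate.sum(), spectrum.sum())
```

Changing the gate and rebuilding the spectrum touches only in-memory arrays, which is the workflow advantage the abstract contrasts with sequential event processing into predetermined arrays.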

  12. Word frequencies in written and spoken English based on the British National Corpus

    CERN Document Server

    Leech, Geoffrey; Wilson, Andrew (all of Lancaster University)

    2014-01-01

    Word Frequencies in Written and Spoken English is a landmark volume in the development of vocabulary frequency studies. Whereas previous books have in general given frequency information about the written language only, this book provides information on both speech and writing. It not only gives information about the language as a whole, but also about the differences between spoken and written English, and between different spoken and written varieties of the language. The frequencies are derived from a wide-ranging and up-to-date corpus of English: the British National Corpus.
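    The comparison such a volume tabulates — how often a word occurs in spoken versus written material — reduces to normalised counts over two subcorpora. A toy sketch, where the two sample texts are invented stand-ins for the spoken and written subcorpora:

```python
from collections import Counter

# Invented miniature "subcorpora" standing in for spoken and written samples.
spoken = "well I mean you know I think it is good you know".split()
written = "the analysis indicates that the results are significant".split()

def per_million(tokens):
    """Frequency of each word, normalised to occurrences per million tokens."""
    counts = Counter(tokens)
    return {w: c * 1_000_000 / len(tokens) for w, c in counts.items()}

f_spoken, f_written = per_million(spoken), per_million(written)
# "you" is frequent in the spoken sample and absent from the written one.
print(f_spoken["you"], f_written.get("you", 0.0))
```

Normalising to a per-million rate is what makes frequencies comparable across subcorpora of different sizes.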

  13. Multilingual Interaction and Minority Languages: Proficiency and Language Practices in Education and Society

    Science.gov (United States)

    Gorter, Durk

    2015-01-01

    In this plenary speech I examine multilingual interaction in a number of European regions in which minority languages are being revitalized. Education is a crucial variable, but the wider society is equally significant. The context of revitalization is no longer bilingual but increasingly multilingual. I draw on the results of a long-running…

  14. Use Your Languages! From Monolingual to Multilingual Interaction in a Language Class

    Science.gov (United States)

    Kyppö, Anna; Natri, Teija; Pietarinen, Margarita; Saaristo, Pekka

    2015-01-01

    This reflective paper presents a new course concept for multilingual interaction, which was piloted at the University of Jyväskylä Language Centre in the spring of 2014. The course, implemented as part of the centre's action research, is the result of a development process aimed at enhancing students' multilingual and multicultural academic…

  15. Interactivity in Second Language via Social Identity and Group Cohesiveness

    Directory of Open Access Journals (Sweden)

    Roberto Rojas Alfaro

    2013-07-01

    Full Text Available This article describes and analyzes the influence of identity and group cohesion as factors that facilitate or hinder interactive processes in learning English as a second language. It points out the connection between interactive language learning and factors such as social and personal identity and group cohesion. Examining the effect of group integration and identity on second language learning is essential, given that few studies have addressed the effect of such variables on group interaction. A case study carried out with two groups of adult students diagnosed the state of group cohesion and its impact on interactive learning. This research explores the influence of identity and group cohesion as factors that facilitate or hinder interactive processes in ESL classrooms. In particular, this paper addresses the connection between interactive language learning, social and personal identity, and group cohesiveness. The effect of group cohesion and identity in second language learning has been addressed in relatively few studies on the impact of those membership variables in determining interactivity in communicative language teaching. A case study carried out in two college-level classes diagnosed the status of group membership and its impact on interactivity.

  16. Spoken word recognition without a TRACE

    Science.gov (United States)

    Hannagan, Thomas; Magnuson, James S.; Grainger, Jonathan

    2013-01-01

    How do we map the rapid input of spoken language onto phonological and lexical representations over time? Attempts at psychologically-tractable computational models of spoken word recognition tend either to ignore time or to transform the temporal input into a spatial representation. TRACE, a connectionist model with broad and deep coverage of speech perception and spoken word recognition phenomena, takes the latter approach, using exclusively time-specific units at every level of representation. TRACE reduplicates featural, phonemic, and lexical inputs at every time step in a large memory trace, with rich interconnections (excitatory forward and backward connections between levels and inhibitory links within levels). As the length of the memory trace is increased, or as the phoneme and lexical inventory of the model is increased to a realistic size, this reduplication of time- (temporal position) specific units leads to a dramatic proliferation of units and connections, begging the question of whether a more efficient approach is possible. Our starting point is the observation that models of visual object recognition—including visual word recognition—have grappled with the problem of spatial invariance, and arrived at solutions other than a fully-reduplicative strategy like that of TRACE. This inspires a new model of spoken word recognition that combines time-specific phoneme representations similar to those in TRACE with higher-level representations based on string kernels: temporally independent (time invariant) diphone and lexical units. This reduces the number of necessary units and connections by several orders of magnitude relative to TRACE. Critically, we compare the new model to TRACE on a set of key phenomena, demonstrating that the new model inherits much of the behavior of TRACE and that the drastic computational savings do not come at the cost of explanatory power. PMID:24058349
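    The string-kernel idea the abstract mentions — representing a word by its diphones, independent of temporal position — can be illustrated with a minimal sketch. The orthographic "diphones", toy words, and inner-product similarity below are assumptions for illustration, not the authors' model:

```python
from collections import Counter

def diphones(word):
    """Time-invariant representation: counts of adjacent symbol pairs."""
    return Counter(word[i:i + 2] for i in range(len(word) - 1))

def kernel(w1, w2):
    """String-kernel similarity: inner product of two diphone count vectors."""
    d1, d2 = diphones(w1), diphones(w2)
    return sum(d1[k] * d2[k] for k in d1)

# Similar word forms share more diphones than dissimilar ones,
# regardless of where in time the input arrives.
print(kernel("kat", "kap"))   # shares the diphone "ka"
print(kernel("kat", "lug"))   # shares nothing
```

Because the representation counts pairs rather than positions, no time-specific copies of each unit are needed, which is the source of the computational savings claimed over TRACE's reduplicated units.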

  17. Interactions between working memory and language in young children with specific language impairment (SLI).

    Science.gov (United States)

    Vugs, Brigitte; Knoors, Harry; Cuperus, Juliane; Hendriks, Marc; Verhoeven, Ludo

    2016-01-01

    The underlying structure of working memory (WM) in young children with and without specific language impairment (SLI) was examined. The associations between the components of WM and the language abilities of young children with SLI were then analyzed. The Automated Working Memory Assessment and four linguistic tasks were administered to 58 children with SLI and 58 children without SLI, aged 4-5 years. The WM of the children was best represented by a model with four separate but interacting components of verbal storage, visuospatial storage, verbal central executive (CE), and visuospatial CE. The associations between the four components of WM did not differ significantly for the two groups of children. However, the individual components of WM showed varying associations with the language abilities of the children with SLI. The verbal CE component of WM was moderately to strongly associated with all the language abilities in children with SLI: receptive vocabulary, expressive vocabulary, verbal comprehension, and syntactic development. These results show verbal CE to be involved in a wide range of linguistic skills; the limited ability of young children with SLI to simultaneously store and process verbal information may constrain their acquisition of linguistic skills. Attention should thus be paid to the language problems of children with SLI, but also to the WM impairments that can contribute to their language problems.

  18. Evaluating the spoken English proficiency of graduates of foreign medical schools.

    Science.gov (United States)

    Boulet, J R; van Zanten, M; McKinley, D W; Gary, N E

    2001-08-01

    The purpose of this study was to gather additional evidence for the validity and reliability of spoken English proficiency ratings provided by trained standardized patients (SPs) in high-stakes clinical skills examination. Over 2500 candidates who took the Educational Commission for Foreign Medical Graduates' (ECFMG) Clinical Skills Assessment (CSA) were studied. The CSA consists of 10 or 11 timed clinical encounters. Standardized patients evaluate spoken English proficiency and interpersonal skills in every encounter. Generalizability theory was used to estimate the consistency of spoken English ratings. Validity coefficients were calculated by correlating summary English ratings with CSA scores and other external criterion measures. Mean spoken English ratings were also compared by various candidate background variables. The reliability of the spoken English ratings, based on 10 independent evaluations, was high. The magnitudes of the associated variance components indicated that the evaluation of a candidate's spoken English proficiency is unlikely to be affected by the choice of cases or SPs used in a given assessment. Proficiency in spoken English was related to native language (English versus other) and scores from the Test of English as a Foreign Language (TOEFL). The pattern of the relationships, both within assessment components and with external criterion measures, suggests that valid measures of spoken English proficiency are obtained. This result, combined with the high reproducibility of the ratings over encounters and SPs, supports the use of trained SPs to measure spoken English skills in a simulated medical environment.
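Generalizability theory expresses the reliability of a mean rating in terms of variance components. A minimal sketch of the relative G coefficient for a candidate-by-encounter design follows; the variance components used here are hypothetical values for illustration, not the study's estimates.

```python
def g_coefficient(var_person, var_residual, n_encounters):
    """Relative G (reliability) coefficient for ratings averaged over
    n_encounters: person variance over person variance plus the
    residual variance divided by the number of encounters."""
    return var_person / (var_person + var_residual / n_encounters)

# Hypothetical variance components: candidate variance 0.30,
# residual (encounter-by-candidate) variance 0.45, 10 encounters.
print(round(g_coefficient(0.30, 0.45, 10), 3))  # -> 0.87
```

Averaging over more encounters shrinks the error term, which is why ratings pooled over 10 independent SP evaluations can be highly reproducible even when single-encounter ratings are noisy.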

  19. A Comparison between Written and Spoken Narratives in Aphasia

    Science.gov (United States)

    Behrns, Ingrid; Wengelin, Asa; Broberg, Malin; Hartelius, Lena

    2009-01-01

    The aim of the present study was to explore how a personal narrative told by a group of eight persons with aphasia differed between written and spoken language, and to compare this with findings from 10 participants in a reference group. The stories were analysed through holistic assessments made by 60 participants without experience of aphasia…

  20. Automated Scoring of L2 Spoken English with Random Forests

    Science.gov (United States)

    Kobayashi, Yuichiro; Abe, Mariko

    2016-01-01

    The purpose of the present study is to assess second language (L2) spoken English using automated scoring techniques. Automated scoring aims to classify a large set of learners' oral performance data into a small number of discrete oral proficiency levels. In automated scoring, objectively measurable features such as the frequencies of lexical and…

  1. Flipper: An Information State Component for Spoken Dialogue Systems

    NARCIS (Netherlands)

    ter Maat, Mark; Heylen, Dirk K.J.; Vilhjálmsson, Hannes; Kopp, Stefan; Marsella, Stacy; Thórisson, Kristinn

This paper introduces Flipper, a specification language and interpreter for Information State Update rules that can be used for developing spoken dialogue systems and embodied conversational agents. The system uses XML templates to modify the information state and to select behaviours to perform.

  2. Pair Counting to Improve Grammar and Spoken Fluency

    Science.gov (United States)

    Hanson, Stephanie

    2017-01-01

    English language learners are often more grammatically accurate in writing than in speaking. As students focus on meaning while speaking, their spoken fluency comes at a cost: their grammatical accuracy decreases. The author wanted to find a way to help her students improve their oral grammar; that is, she wanted them to focus on grammar while…

  3. The Link between Vocabulary Knowledge and Spoken L2 Fluency

    Science.gov (United States)

    Hilton, Heather

    2008-01-01

    In spite of the vast numbers of articles devoted to vocabulary acquisition in a foreign language, few studies address the contribution of lexical knowledge to spoken fluency. The present article begins with basic definitions of the temporal characteristics of oral fluency, summarizing L1 research over several decades, and then presents fluency…

  4. Phonological Interference in the Spoken English Performance of the ...

    African Journals Online (AJOL)

    This paper sets out to examine the phonological interference in the spoken English performance of the Izon speaker. It emphasizes that the level of interference is not just as a result of the systemic differences that exist between both language systems (Izon and English) but also as a result of the interlanguage factors such ...

  5. An Analysis of Spoken Grammar: The Case for Production

    Science.gov (United States)

    Mumford, Simon

    2009-01-01

    Corpus-based grammars, notably "Cambridge Grammar of English," give explicit information on the forms and use of native-speaker grammar, including spoken grammar. Native-speaker norms as a necessary goal in language teaching are contested by supporters of English as a Lingua Franca (ELF); however, this article argues for the inclusion of selected…

  6. Automated Metadata Extraction for Semantic Access to Spoken Word Archives

    NARCIS (Netherlands)

    de Jong, Franciska M.G.; Heeren, W.F.L.; van Hessen, Adrianus J.; Ordelman, Roeland J.F.; Nijholt, Antinus; Ruiz Miyares, L.; Alvarez Silva, M.R.

    2011-01-01

    Archival practice is shifting from the analogue to the digital world. A specific subset of heritage collections that impose interesting challenges for the field of language and speech technology are spoken word archives. Given the enormous backlog at audiovisual archives of unannotated materials and

  7. Spoken Persuasive Discourse Abilities of Adolescents with Acquired Brain Injury

    Science.gov (United States)

    Moran, Catherine; Kirk, Cecilia; Powell, Emma

    2012-01-01

    Purpose: The aim of this study was to examine the performance of adolescents with acquired brain injury (ABI) during a spoken persuasive discourse task. Persuasive discourse is frequently used in social and academic settings and is of importance in the study of adolescent language. Method: Participants included 8 adolescents with ABI and 8 peers…

  8. Gendered Teacher–Student Interactions in English Language Classrooms

    Directory of Open Access Journals (Sweden)

    Jaleh Hassaskhah

    2013-09-01

Being and becoming is the ultimate objective of any educational enterprise, including language teaching. However, research results indicate seemingly unjustified differences in how females and males are treated by EFL (English as a Foreign Language) teachers. The overall aim of this study is to illustrate, analyze, and discuss aspects of gender bias and gender awareness in teacher–student interaction in the Iranian college context. To this end, the teacher–student interactions of 20 English teachers and 500 students were investigated from the perspective of gender theory. The data were obtained via classroom observations, a seating chart and the audio-recording of all classroom interactions during the study. The findings, obtained from quantitative descriptive statistics and chi-square methods as well as qualitative analysis by way of open and selective coding, revealed significant differences in the quantity and quality of the interaction for females and males in almost all categories of interaction. The study also revealed teachers’ perceptions of “gender,” the problems they associate with gender, and the attitudes they hold toward gender issues. While positive incentives can facilitate learner growth, the presence of a negative barrier such as gender bias is likely to hinder development. This has implications for teachers and faculty members who favor a healthy, gender-neutral educational climate.

  9. Code-switched English pronunciation modeling for Swahili spoken term detection

    CSIR Research Space (South Africa)

    Kleynhans, N

    2016-05-01

We investigate modeling strategies for English code-switched words as found in a Swahili spoken term detection system. Code switching, where speakers switch language in a conversation, occurs frequently in multilingual environments, and typically...

  10. Do children with specific language impairment and autism spectrum disorders benefit from the presence of orthography when learning new spoken words?

    Science.gov (United States)

    Ricketts, Jessie; Dockrell, Julie E; Patel, Nita; Charman, Tony; Lindsay, Geoff

    2015-06-01

    This experiment investigated whether children with specific language impairment (SLI), children with autism spectrum disorders (ASD), and typically developing children benefit from the incidental presence of orthography when learning new oral vocabulary items. Children with SLI, children with ASD, and typically developing children (n=27 per group) between 8 and 13 years of age were matched in triplets for age and nonverbal reasoning. Participants were taught 12 mappings between novel phonological strings and referents; half of these mappings were trained with orthography present and half were trained with orthography absent. Groups did not differ on the ability to learn new oral vocabulary, although there was some indication that children with ASD were slower than controls to identify newly learned items. During training, the ASD, SLI, and typically developing groups benefited from orthography to the same extent. In supplementary analyses, children with SLI were matched in pairs to an additional control group of younger typically developing children for nonword reading. Compared with younger controls, children with SLI showed equivalent oral vocabulary acquisition and benefit from orthography during training. Our findings are consistent with current theoretical accounts of how lexical entries are acquired and replicate previous studies that have shown orthographic facilitation for vocabulary acquisition in typically developing children and children with ASD. We demonstrate this effect in SLI for the first time. The study provides evidence that the presence of orthographic cues can support oral vocabulary acquisition, motivating intervention approaches (as well as standard classroom teaching) that emphasize the orthographic form. Copyright © 2015 Elsevier Inc. All rights reserved.

  11. Child second language interaction in science-based tasks

    Science.gov (United States)

    McPhail, Cynthia Leigh

    While quasi-experimental in design, this study utilized qualitative data collection and analysis methods to examine the questions of whether students' speech act behavior and language use would vary by linguistic grouping. Second grade Puerto Rican native speakers of Spanish, and native English speakers completed sets of paired, hands-on, science activities. Children were paired in two linguistic groupings: heterogeneous (English native speaker/non-native speaker), and homogeneous (English non-native speaker/non-native speaker, or English native speaker/native speaker). Speech acts and use of target and native language in the two linguistic groupings were compared. Interviews with both the students and their teachers provided further understanding of the speech act behavior. Most prior research has dealt with university level adults learning English. Previous research that has dealt with children and second language interaction has often focused on teacher talk directed to the children, and no child/child interaction studies have attempted to control for variables such as linguistic grouping. Results indicated that linguistically heterogeneous groupings led to higher percentages of English use for non-native speakers. Homogeneous grouping led to higher percentages of native Spanish use. English native speakers' speech act behavior remained consistent in terms of dominance or passivity of behavior regardless of linguistic grouping, but there is the possibility that non-English speakers may behave in a slightly more passive manner when in heterogeneous grouping.

  12. Code-switched English Pronunciation Modeling for Swahili Spoken Term Detection (Pub Version, Open Access)

    Science.gov (United States)

    2016-05-03

Published open-access version of the paper in record 9 above (Kleynhans et al., SLTU 2016: Spoken Language Technologies for Under-resourced Languages, 9-12 May 2016, Yogyakarta, Indonesia; Procedia Computer Science 81 (2016) 128-135). We investigate modeling strategies for English code-switched words as found in a Swahili spoken term detection system. Code switching, where speakers switch language in a conversation, occurs frequently in multilingual environments. The research focuses on pronunciation modeling of English (embedded language) words within Swahili.

  13. Spoken English and the question of grammar: the role of the functional model

    OpenAIRE

    Coffin, Caroline

    2003-01-01

Given the nature of spoken text, the first requirement of an appropriate grammar is its ability to account for stretches of language (including recurring types of text or genres), in addition to clause level patterns. Second, the grammatical model needs to be part of a wider theory of language that recognises the functional nature and educational purposes of spoken text. The model also needs to be designed in a sufficiently comprehensive way so as to account for grammatical forms in speech...

  14. CONVERTING RETRIEVED SPOKEN DOCUMENTS INTO TEXT USING AN AUTO ASSOCIATIVE NEURAL NETWORK

    Directory of Open Access Journals (Sweden)

    J. SANGEETHA

    2016-06-01

This paper presents a novel methodology for spoken document information retrieval from spontaneous speech corpora and for converting the retrieved documents into the corresponding language text. The proposed work involves three major areas, namely spoken keyword detection, spoken document retrieval and automatic speech recognition. Keyword spotting exploits the distribution-capturing capability of the Auto Associative Neural Network (AANN) for spoken keyword detection. It involves sliding a frame-based keyword template along the audio documents and searching for a match by means of a confidence score acquired from the normalized squared error of the AANN. This work contributes a new spoken keyword spotting algorithm. Based on the matches, the spoken documents are retrieved and clustered together. In the speech recognition step, the retrieved documents are converted into the corresponding language text using the AANN classifier. The experiments are conducted using a Dravidian language database, and the results suggest that the proposed method is promising for retrieving the documents relevant to a spoken query and transforming them into the corresponding language text.
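The sliding-template scoring scheme described in this record can be sketched as follows. For illustration, a plain squared distance to a fixed template stands in for the AANN's reconstruction error, so the sketch shows only the windowing and confidence computation, not the network itself; all names and data are hypothetical.

```python
import numpy as np

def keyword_confidences(doc_feats, template):
    """Slide a frame-based keyword template over a document's features.
    Confidence = exp(-normalized squared error), so 1.0 means a perfect
    match. (A trained AANN would supply the reconstruction error; a
    plain squared distance to the template substitutes for it here.)"""
    n, k = len(doc_feats), len(template)
    scores = []
    for start in range(n - k + 1):
        window = doc_feats[start:start + k]
        err = np.mean((window - template) ** 2)  # normalized squared error
        scores.append(np.exp(-err))              # map error to confidence
    return np.array(scores)

rng = np.random.default_rng(0)
doc = rng.normal(size=(50, 4))      # 50 frames of 4-dim features
template = doc[20:25].copy()        # plant the "keyword" at frame 20
conf = keyword_confidences(doc, template)
print(int(conf.argmax()))           # best match at the planted position
```

Thresholding the confidence track then yields keyword hits, and documents containing hits can be retrieved and clustered as the abstract describes.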

  15. The Co-Emergence of Cognition, Language, and Speech Motor Control in Early Development: A Longitudinal Correlation Study

    Science.gov (United States)

    Nip, Ignatius S. B.; Green, Jordan R.; Marx, David B.

    2011-01-01

    Although the development of spoken language is dependent on the emergence of cognitive, language, and speech motor skills, knowledge about how these domains interact during the early stages of communication development is currently limited. This exploratory investigation examines the strength of associations between longitudinal changes in…

  16. Estimating Spoken Dialog System Quality with User Models

    CERN Document Server

    Engelbrecht, Klaus-Peter

    2013-01-01

    Spoken dialog systems have the potential to offer highly intuitive user interfaces, as they allow systems to be controlled using natural language. However, the complexity inherent in natural language dialogs means that careful testing of the system must be carried out from the very beginning of the design process.   This book examines how user models can be used to support such early evaluations in two ways:  by running simulations of dialogs, and by estimating the quality judgments of users. First, a design environment supporting the creation of dialog flows, the simulation of dialogs, and the analysis of the simulated data is proposed.  How the quality of user simulations may be quantified with respect to their suitability for both formative and summative evaluation is then discussed. The remainder of the book is dedicated to the problem of predicting quality judgments of users based on interaction data. New modeling approaches are presented, which process the dialogs as sequences, and which allow knowl...

  17. Acquisition Of Language And Communication Skills As Essential ...

    African Journals Online (AJOL)

    Early acquisition of language and communication skills enhances and promotes effective communication and interactive processes among children who are visually impaired. These children require communication power for them to function effectively in spoken words among sighted persons. Acquisition of listening skills ...

  18. Encoding lexical tones in jTRACE: a simulation of monosyllabic spoken word recognition in Mandarin Chinese.

    Science.gov (United States)

    Shuai, Lan; Malins, Jeffrey G

    2017-02-01

Despite its prevalence as one of the most highly influential models of spoken word recognition, the TRACE model has yet to be extended to consider tonal languages such as Mandarin Chinese. A key reason for this is that the model in its current state does not encode lexical tone. In this report, we present a modified version of the jTRACE model in which we built on its existing architecture to code for Mandarin phonemes and tones. Units are coded in a way that is meant to capture the similarity in timing of access to vowel and tone information that has been observed in previous studies of Mandarin spoken word recognition. We validated the model by first simulating a recent experiment that had used the visual world paradigm to investigate how native Mandarin speakers process monosyllabic Mandarin words (Malins & Joanisse, 2010). We then simulated two psycholinguistic phenomena: (1) differences in the timing of resolution of tonal contrast pairs, and (2) the interaction between syllable frequency and tonal probability. In all cases, the model gave rise to results comparable to published data from human subjects, suggesting that it is a viable working model of spoken word recognition in Mandarin. It is our hope that this tool will be of use to practitioners studying the psycholinguistics of Mandarin Chinese and will help inspire similar models for other tonal languages, such as Cantonese and Thai.

  19. Language, interactivity and solution probing: repetition without repetition

    DEFF Research Database (Denmark)

    Cowley, Stephen; Nash, Luarina

    2013-01-01

is presented through a case study where a person solves a problem and, in so doing, relies on non-local aspects of the ecology as well as his observer's mental domain. Like Anthony Chemero we make links with ecological psychology to emphasize how embodiment draws on cultural resources as people concert thinking..., action and perception. We trace this to human interactivity or sense-saturated coordination that renders possible language and human forms of cognition: it links human sense-making to historical experience. People play roles with natural and cultural artifacts as they act, animate groups and live through...

  20. Language used in interaction during developmental science instruction

    Science.gov (United States)

    Avenia-Tapper, Brianna

    The coordination of theory and evidence is an important part of scientific practice. Developmental approaches to instruction, which make the relationship between the abstract and the concrete a central focus of students' learning activity, provide educators with a unique opportunity to strengthen students' coordination of theory and evidence. Therefore, developmental approaches may be a useful instructional response to documented science achievement gaps for linguistically diverse students. However, if we are to leverage the potential of developmental instruction to improve the science achievement of linguistically diverse students, we need more information on the intersection of developmental science instruction and linguistically diverse learning contexts. This manuscript style dissertation uses discourse analysis to investigate the language used in interaction during developmental teaching-learning in three linguistically diverse third grade classrooms. The first manuscript asks how language was used to construct ascension from the abstract to the concrete. The second manuscript asks how students' non-English home languages were useful (or not) for meeting the learning goals of the developmental instructional program. The third manuscript asks how students' interlocutors may influence student choice to use an important discourse practice--justification--during the developmental teaching-learning activity. All three manuscripts report findings relevant to the instructional decisions that teachers need to make when implementing developmental instruction in linguistically diverse contexts.

  1. User-Centred Design for Chinese-Oriented Spoken English Learning System

    Science.gov (United States)

    Yu, Ping; Pan, Yingxin; Li, Chen; Zhang, Zengxiu; Shi, Qin; Chu, Wenpei; Liu, Mingzhuo; Zhu, Zhiting

    2016-01-01

Oral production is an important part of English learning. The lack of a language environment with efficient instruction and feedback is a major obstacle to improving non-native speakers' spoken English. A computer-assisted language learning system can provide many potential benefits to language learners. It allows adequate instructions and instant…

  2. INTERCONNECTION AND INTERACTION OF INTERROGATIVE SENTENCES IN THE ENGLISH LANGUAGE

    Directory of Open Access Journals (Sweden)

    Natalia Sklyarova

    2013-12-01

This paper presents the results of research devoted to one of the significant aspects of interrogative sentences. Precise definitions of interconnection and interaction, and the application of these terms to language units, helped to distinguish between the interconnection and the interaction of interrogative sentences in English. The existence of two different kinds of relations in language, namely paradigmatic and syntagmatic, provided the basis for singling out two corresponding forms of interaction of English interrogative sentences. Contextual and distributional analyses of material from authentic sources enabled us to characterize the range and degree of their paradigmatic and syntagmatic interaction.

  3. Interactive data language (IDL) for medical image processing

    International Nuclear Information System (INIS)

    Md Saion Salikin

    2002-01-01

Interactive Data Language (IDL) is one of many software packages available in the market for medical image processing and analysis. IDL is a complete, structured language that can be used both interactively and to create sophisticated functions, procedures, and applications. It provides suitable processing routines and display methods, including animation, specification of colour tables with 24-bit capability, 3-D visualization and many graphic operations. The important features of IDL for medical imaging are segmentation, visualization, quantification and pattern recognition. In visualization, IDL allows greater precision and flexibility when visualizing data; for example, it eliminates limits on the number of contour levels. In terms of data analysis, IDL is capable of handling complicated functions such as the Fast Fourier Transform (FFT), Hough and Radon transforms, and Legendre polynomials, as well as simple functions such as histograms. In pattern recognition, a pattern is described in points rather than pixels. With this functionality, it is easy to re-use the same pattern on more than one destination device (even if the destinations have varying resolutions); in other words, IDL has the ability to specify values in points. However, there are a few disadvantages of using IDL: licensing is by dongle key, and limited licences mean limited access for potential IDL users. A few examples are shown to demonstrate the capabilities of IDL in carrying out its functions for medical image processing. (Author)

  4. Vývoj sociální kognice českých neslyšících dětí — uživatelů českého znakového jazyka a uživatelů mluvené češtiny: adaptace testové baterie : Development of Social Cognition in Czech Deaf Children — Czech Sign Language Users and Czech Spoken Language Users: Adaptation of a Test Battery

    Directory of Open Access Journals (Sweden)

    Andrea Hudáková

    2017-11-01

    Full Text Available The present paper describes the process of an adaptation of a set of tasks for testing theory-of-mind competencies, Theory of Mind Task Battery, for the use with the population of Czech Deaf children — both users of Czech Sign Language as well as those using spoken Czech.

  5. Directionality Effects in Simultaneous Language Interpreting: The Case of Sign Language Interpreters in the Netherlands

    Science.gov (United States)

    van Dijk, Rick; Boers, Eveline; Christoffels, Ingrid; Hermans, Daan

    2011-01-01

    The quality of interpretations produced by sign language interpreters was investigated. Twenty-five experienced interpreters were instructed to interpret narratives from (a) spoken Dutch to Sign Language of the Netherlands (SLN), (b) spoken Dutch to Sign Supported Dutch (SSD), and (c) SLN to spoken Dutch. The quality of the interpreted narratives…

  6. Syntax and reading comprehension: a meta-analysis of different spoken-syntax assessments.

    Science.gov (United States)

    Brimo, Danielle; Lund, Emily; Sapp, Alysha

    2017-12-18

Syntax is a language skill purported to support children's reading comprehension. However, researchers who have examined whether children with average and below-average reading comprehension score significantly differently on spoken-syntax assessments report inconsistent results. The aim was to determine whether differences in how syntax is measured affect whether children with average and below-average reading comprehension score significantly differently on spoken-syntax assessments. Studies that included a group comparison design, children with average and below-average reading comprehension, and a spoken-syntax assessment were selected for review. Fourteen articles from a total of 1281 reviewed met the inclusionary criteria. The 14 articles were coded for the age of the children, score on the reading comprehension assessment, type of spoken-syntax assessment, type of syntax construct measured and score on the spoken-syntax assessment. A random-effects model was used to analyze the difference between the effect sizes of the types of spoken-syntax assessments and the difference between the effect sizes of the syntax constructs measured. There was a significant difference between children with average and below-average reading comprehension on spoken-syntax assessments. Those with average and below-average reading comprehension scored significantly differently on spoken-syntax assessments when norm-referenced and researcher-created assessments were compared. However, when the type of construct was compared, children with average and below-average reading comprehension scored significantly differently on assessments that measured knowledge of spoken syntax, but not on assessments that measured awareness of spoken syntax. The results of this meta-analysis confirmed that the type of spoken-syntax assessment, whether norm-referenced or researcher-created, did not explain why some researchers reported that there were no significant differences between children with average and below
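A random-effects model pools study effect sizes while letting the true effect vary between studies. The abstract does not say which estimator was used; a common one is DerSimonian–Laird, sketched below with made-up effect sizes and variances for illustration.

```python
import numpy as np

def dersimonian_laird(effects, variances):
    """Random-effects pooling of study effect sizes (DerSimonian-Laird).
    Returns the pooled effect and the between-study variance tau^2."""
    y, v = np.asarray(effects, float), np.asarray(variances, float)
    w = 1.0 / v                                  # fixed-effect weights
    y_fe = np.sum(w * y) / np.sum(w)             # fixed-effect estimate
    q = np.sum(w * (y - y_fe) ** 2)              # Cochran's Q statistic
    df = len(y) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)                # between-study variance
    w_re = 1.0 / (v + tau2)                      # random-effects weights
    return np.sum(w_re * y) / np.sum(w_re), tau2

# Three hypothetical study effect sizes and their sampling variances.
pooled, tau2 = dersimonian_laird([0.8, 0.5, 0.2], [0.04, 0.05, 0.06])
print(round(pooled, 3), round(tau2, 3))
```

Comparing pooled effects across subgroups of studies (e.g. norm-referenced vs. researcher-created assessments) is then what lets a meta-analysis test whether the measurement type explains the inconsistent findings.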

  7. Developing a corpus of spoken language variability

    Science.gov (United States)

    Carmichael, Lesley; Wright, Richard; Wassink, Alicia Beckford

    2003-10-01

    We are developing a novel, searchable corpus as a research tool for investigating phonetic and phonological phenomena across various speech styles. Five speech styles have been well studied independently in previous work: reduced (casual), careful (hyperarticulated), citation (reading), Lombard effect (speech in noise), and ``motherese'' (child-directed speech). Few studies to date have collected a wide range of styles from a single set of speakers, and fewer yet have provided publicly available corpora. The pilot corpus includes recordings of (1) a set of speakers participating in a variety of tasks designed to elicit the five speech styles, and (2) casual peer conversations and wordlists to illustrate regional vowels. The data include high-quality recordings and time-aligned transcriptions linked to text files that can be queried. Initial measures drawn from the database provide comparison across speech styles along the following acoustic dimensions: MLU (changes in unit duration); relative intra-speaker intensity changes (mean and dynamic range); and intra-speaker pitch values (minimum, maximum, mean, range). The corpus design will allow for a variety of analyses requiring control of demographic and style factors, including hyperarticulation variety, disfluencies, intonation, discourse analysis, and detailed spectral measures.
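The acoustic dimensions listed above (unit duration, intra-speaker intensity mean and dynamic range, pitch minimum/maximum/mean/range) reduce to simple summary statistics over frame-level tracks. A sketch with toy data follows; the function name and the convention of coding unvoiced frames as F0 = 0 are assumptions, not details from the corpus.

```python
import numpy as np

def style_measures(f0, intensity_db, unit_durations):
    """Per-speaker summary measures of the kind named in the corpus
    description: pitch min/max/mean/range, intensity mean and dynamic
    range, and mean unit duration."""
    f0 = np.asarray(f0, float)
    f0 = f0[f0 > 0]                      # drop unvoiced frames (F0 = 0)
    return {
        "f0_min": f0.min(), "f0_max": f0.max(),
        "f0_mean": f0.mean(), "f0_range": f0.max() - f0.min(),
        "intensity_mean": float(np.mean(intensity_db)),
        "intensity_dyn_range": float(np.ptp(intensity_db)),
        "mean_unit_duration": float(np.mean(unit_durations)),
    }

# Toy frame-level tracks for one speaker in one style.
m = style_measures(f0=[0, 110, 120, 130, 0], intensity_db=[55, 60, 65],
                   unit_durations=[0.21, 0.35, 0.28])
print(m["f0_range"], m["intensity_dyn_range"])
```

Computing the same dictionary per speaker and per elicitation task would give the cross-style comparisons the corpus is designed to support.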

  8. Input to interaction to instruction: three key shifts in the history of child language research.

    Science.gov (United States)

    Snow, Catherine E

    2014-07-01

    In the early years of the Journal of Child Language, there was considerable disagreement about the role of language input or adult-child interaction in children's language acquisition. The view that quantity and quality of input to language-learning children is relevant to their language development has now become widely accepted as a principle guiding advice to parents and the design of early childhood education programs, even if it is not yet uncontested in the field of language development. The focus on variation in the language input to children acquires particular educational relevance when we consider variation in access to academic language - features of language particularly valued in school and related to success in reading and writing. Just as many children benefit from language environments that are intentionally designed to ensure adequate quantity and quality of input, even more probably need explicit instruction in the features of language that characterize its use for academic purposes.

  9. Classroom Interaction in Teaching English as Foreign Language at Lower Secondary Schools in Indonesia

    Directory of Open Access Journals (Sweden)

    Hanna Sundari

    2017-12-01

The aim of this study was to develop a deep understanding of interaction in the language classroom in a foreign language context. Interviews with twenty experienced English language teachers from eight lower secondary schools (SMP) in Jakarta served as the major instrument, complemented by focus group discussions and class observations/recordings. The gathered data were analyzed according to the systematic design of the grounded theory analysis method through three-phase coding. A model of classroom interaction was formulated defining several dimensions of interaction. Classroom interaction can be better comprehended against the background of interrelated factors: interaction practices, teacher and student factors, learning objectives, materials, classroom contexts, and the outer contexts surrounding the interaction practices. The developed model of interaction for the language classroom notably gives a deep description of how interaction substantially occurs and what factors affect it in foreign language classrooms at lower secondary schools, from teachers’ perspectives.

  10. DIFFERENCES BETWEEN AMERICAN SIGN LANGUAGE (ASL) AND BRITISH SIGN LANGUAGE (BSL)

    Directory of Open Access Journals (Sweden)

    Zora JACHOVA

    2008-06-01

    Full Text Available In the communication of deaf people among themselves and with hearing people there are three basic aspects of interaction: gesture, finger signs and writing. The gesture is a conventionally agreed manner of communication with the help of the hands, accompanied by facial and body mimicry. Gestures and movements pre-date speech; they first had the purpose of marking something, and later of emphasizing the spoken expression. Stokoe was the first linguist to realise that signs are not unanalysable wholes. He analysed signs into smaller, meaningless parts that he called "cheremes", which many linguists today call phonemes. He created three main phoneme categories: hand shape, location and movement. Sign languages, like spoken languages, have origins in the distant past. They developed in parallel with the development of spoken language and underwent many historical changes. Therefore, today they do not represent a replacement for spoken language, but are languages themselves in the real sense of the word. Although the structure of the English language used in the USA and in Great Britain is the same, their sign languages, ASL and BSL, are different.

  11. Embodying multilingual interaction

    DEFF Research Database (Denmark)

    Hazel, Spencer; Mortensen, Janus

    this linguistic diversity is managed in situ by participants engaged in dialogue with one another, and what it is used for in these transient multilingual communities. This paper presents CA-based micro-ethnographic analyses of language choice in an informal social setting – a kitchen – of an international study...... literature on language choice in interaction, our findings emphasize that analyses of language choice in multilingual settings need to take into account social actions beyond the words that are spoken. We show that facial, spatial and postural configurations, gaze orientation and gestures as well as prosodic...... in the particular community of practice that we are investigating. Reference Hazel, Spencer, and Janus Mortensen. forthcoming. Kitchen talk: Exploring linguistic practices in liminal institutional interactions in a multilingual university setting. in Language Alternation, Language Choice, and Language Encounter...

  12. SPOKEN BAHASA INDONESIA BY GERMAN STUDENTS

    Directory of Open Access Journals (Sweden)

    I Nengah Sudipa

    2014-11-01

    Full Text Available This article investigates the spoken ability of German students using Bahasa Indonesia (BI). They studied it for six weeks in the IBSN Program at Udayana University, Bali, Indonesia. The data were collected when the students sat for the mid-term oral test and were further analyzed with reference to the standard usage of BI. The results suggest that most students managed to express several concepts related to (1) LOCATION; (2) TIME; (3) TRANSPORT; (4) PURPOSE; (5) TRANSACTION; (6) IMPRESSION; (7) REASON; (8) FOOD AND BEVERAGE; and (9) NUMBER AND PERSON. The only problem a few students encountered is interference, the influence of their own language system, especially on word order.

  13. Recognizing Young Readers' Spoken Questions

    Science.gov (United States)

    Chen, Wei; Mostow, Jack; Aist, Gregory

    2013-01-01

    Free-form spoken input would be the easiest and most natural way for young children to communicate to an intelligent tutoring system. However, achieving such a capability poses a challenge both to instruction design and to automatic speech recognition. To address the difficulties of accepting such input, we adopt the framework of predictable…

  14. Correlative Conjunctions in Spoken Texts

    Czech Academy of Sciences Publication Activity Database

    Poukarová, Petra

    2017-01-01

    Roč. 68, č. 2 (2017), s. 305-315 ISSN 0021-5597 R&D Projects: GA ČR GA15-01116S Institutional support: RVO:68378092 Keywords : correlative conjunctions * spoken Czech * cohesion Subject RIV: AI - Linguistics OBOR OECD: Linguistics http://www.juls.savba.sk/ediela/jc/2017/2/jc17-02.pdf

  15. Caregiver-Child Verbal Interactions in Child Care: A Buffer against Poor Language Outcomes when Maternal Language Input is Less.

    Science.gov (United States)

    Vernon-Feagans, Lynne; Bratsch-Hines, Mary E

    2013-12-01

    Recent research has suggested that high-quality child care can buffer young children against poorer cognitive and language outcomes when they are at risk for poorer language and readiness skills. Most of this research measured the quality of parenting and of the child care with global observational measures or rating scales that did not specify the exact maternal or caregiver behaviors that might be causally implicated in buffering these children from poor outcomes. The current study examined the actual language used by the mother with her child in the home and the verbal interactions between the caregiver and child in the child care setting that might be implicated in the buffering effect of high-quality child care. The sample included 433 rural children from the Family Life Project who were in child care at 36 months of age. Even after controlling for a variety of covariates, including maternal education, income, race, children's previous skills, child care type, and the overall quality of the home and of the child care environment, observed positive caregiver-child verbal interactions in the child care setting interacted with maternal language complexity and diversity in predicting children's language development. Caregiver-child positive verbal interactions appeared to buffer children from poor language outcomes concurrently and two years later if children came from homes where observed maternal language complexity and diversity during a picture book task was less.

  16. Physical Interactive Game for Enhancing Language Cognitive Development of Thai Pre-Schooler

    Science.gov (United States)

    Choosri, Noppon; Pookao, Chompoonut

    2017-01-01

    Intervention for cognitive language development needs to be conducted at a young age. As children usually gain skills through play, this study proposed a physical interactive game to help pre-school children improve their language skill in both Thai and English. The motivation of this research is to create a game…

  17. English Language Teacher Educator Interactional Styles: Heterogeneity and Homogeneity in the ELTE Classroom

    Science.gov (United States)

    Lucero, Edgar; Scalante-Morales, Jeesica

    2018-01-01

    This article presents a research study on the interactional styles of teacher educators in the English language teacher education classroom. Two research methodologies, ethnomethodological conversation analysis and self-evaluation of teacher talk were applied to analyze 34 content- and language-based classes of nine English language teacher…

  18. Episodic grammar: a computational model of the interaction between episodic and semantic memory in language processing

    NARCIS (Netherlands)

    Borensztajn, G.; Zuidema, W.; Carlson, L.; Hoelscher, C.; Shipley, T.F.

    2011-01-01

    We present a model of the interaction of semantic and episodic memory in language processing. Our work shows how language processing can be understood in terms of memory retrieval. We point out that the perceived dichotomy between rule-based versus exemplar-based language modelling can be

  19. An Aspect of Social Interaction in Communication: Politeness Strategies and Contrastive Foreign-Language Teaching.

    Science.gov (United States)

    Slama-Cazacu, Tatiana

    A discussion of communicative interaction focuses on the knowledge needed to achieve politeness in different languages, especially how that body of knowledge differs across languages and can be taught in foreign language instruction. It is noted that oral communication must accommodate the existing social order by use of appropriate registers.…

  20. Criteria for the segmentation of spoken input into individual utterances

    OpenAIRE

    Mast, Marion; Maier, Elisabeth; Schmitz, Birte

    1995-01-01

    This report describes how spoken language turns are segmented into utterances in the framework of the verbmobil project. The problem of segmenting turns is directly related to the task of annotating a discourse with dialogue act information: an utterance can be characterized as a stretch of dialogue that is attributed one dialogue act. Unfortunately, this rule in many cases is insufficient and many doubtful cases remain. We tried to at least reduce the number of unclear cases by providing a n...

  1. Why Dose Frequency Affects Spoken Vocabulary in Preschoolers With Down Syndrome.

    Science.gov (United States)

    Yoder, Paul J; Woynaroski, Tiffany; Fey, Marc E; Warren, Steven F; Gardner, Elizabeth

    2015-07-01

    In an earlier randomized clinical trial, daily communication and language therapy resulted in more favorable spoken vocabulary outcomes than weekly therapy sessions in a subgroup of initially nonverbal preschoolers with intellectual disabilities that included only children with Down syndrome (DS). In this reanalysis of the dataset involving only the participants with DS, we found that more therapy led to larger spoken vocabularies at posttreatment because it increased children's canonical syllabic communication and receptive vocabulary growth early in the treatment phase.

  2. Interactive computing in BASIC an introduction to interactive computing and a practical course in the BASIC language

    CERN Document Server

    Sanderson, Peter C

    1973-01-01

    Interactive Computing in BASIC: An Introduction to Interactive Computing and a Practical Course in the BASIC Language provides a general introduction to the principles of interactive computing and a comprehensive practical guide to the programming language Beginners All-purpose Symbolic Instruction Code (BASIC). The book starts by providing an introduction to computers and discussing the aspects of terminal usage, programming languages, and the stages in writing and testing a program. The text then discusses BASIC with regard to methods in writing simple arithmetical programs, control stateme

  3. Teaching and Learning Foreign Languages via System of “Voice over internet protocol” and Language Interactions Case Study: Skype

    Directory of Open Access Journals (Sweden)

    Wazira Ali Abdul Wahid

    2015-04-01

    Full Text Available This paper presents a research study of online interactions in English teaching, especially conversation, using VOIP (Voice over Internet Protocol) in a cosmopolitan online setting. Data were gathered through interviews. The findings indicate how oral tasks need to be planned in order to facilitate engagement patterns conducive to language interaction and learning. The proficiencies and features identified across the two analyzed interviews suggest what is likely to be the most effective practice. Several indications point to the use of voice conferencing to develop oral performance in foreign-language interaction. Keywords: VOIP, CFs, EFL, Skype

  4. Assessing the Effectiveness of Parent-Child Interaction Therapy with Language Delayed Children: A Clinical Investigation

    Science.gov (United States)

    Falkus, Gila; Tilley, Ciara; Thomas, Catherine; Hockey, Hannah; Kennedy, Anna; Arnold, Tina; Thorburn, Blair; Jones, Katie; Patel, Bhavika; Pimenta, Claire; Shah, Rena; Tweedie, Fiona; O'Brien, Felicity; Leahy, Ruth; Pring, Tim

    2016-01-01

    Parent-child interaction therapy (PCIT) is widely used by speech and language therapists to improve the interactions between children with delayed language development and their parents/carers. Despite favourable reports of the therapy from clinicians, little evidence of its effectiveness is available. We investigated the effects of PCIT as…

  5. Image-Language Interaction in Online Reading Environments: Challenges for Students' Reading Comprehension

    Science.gov (United States)

    Chan, Eveline; Unsworth, Len

    2011-01-01

    This paper presents the qualitative results of a study of students' reading of multimodal texts in an interactive, online environment. The study forms part of a larger project which addressed image-language interaction as an important dimension of language pedagogy and assessment for students growing up in a multimedia digital age. Thirty-two Year…

  6. Interaction between lexical and grammatical language systems in the brain

    Science.gov (United States)

    Ardila, Alfredo

    2012-06-01

    This review concentrates on two different language dimensions: lexical/semantic and grammatical. The distinction between a lexical/semantic system and a grammatical system is well known in linguistics, but in the cognitive neurosciences it has been obscured by the assumption that there are several forms of language disturbance associated with focal brain damage, and hence that language includes a diversity of functions (phoneme discrimination, lexical memory, grammar, repetition, language initiation ability, etc.), each one associated with the activity of a specific brain area. Clinical observation of patients with cerebral pathology shows that there are indeed only two different forms of language disturbance (disturbances in the lexical/semantic system and disturbances in the grammatical system); these two language dimensions are supported by different brain areas (temporal and frontal) in the left hemisphere. Furthermore, these two aspects of language develop at different ages during a child's language acquisition, and they probably appeared at different historical moments during human evolution. The mechanisms of learning differ for the two systems: whereas lexical/semantic knowledge is based on declarative memory, grammatical knowledge corresponds to procedural memory. Recognizing these two language dimensions can be crucial in understanding language evolution and human cognition.

  7. A Prerequisite to L1 Homophone Effects in L2 Spoken-Word Recognition

    Science.gov (United States)

    Nakai, Satsuki; Lindsay, Shane; Ota, Mitsuhiko

    2015-01-01

    When both members of a phonemic contrast in L2 (second language) are perceptually mapped to a single phoneme in one's L1 (first language), L2 words containing a member of that contrast can spuriously activate L2 words in spoken-word recognition. For example, upon hearing cattle, Dutch speakers of English are reported to experience activation…

  8. Time course of Chinese monosyllabic spoken word recognition: evidence from ERP analyses.

    Science.gov (United States)

    Zhao, Jingjing; Guo, Jingjing; Zhou, Fengying; Shu, Hua

    2011-06-01

    Evidence from event-related potential (ERP) analyses of English spoken words suggests that the time course of English word recognition in monosyllables is cumulative. Different types of phonological competitors (i.e., rhymes and cohorts) modulate the temporal grain of ERP components differentially (Desroches, Newman, & Joanisse, 2009). The time course of Chinese monosyllabic spoken word recognition could be different from that of English due to the differences in syllable structure between the two languages (e.g., lexical tones). The present study investigated the time course of Chinese monosyllabic spoken word recognition using ERPs to record brain responses online while subjects listened to spoken words. During the experiment, participants were asked to compare a target picture with a subsequent picture by judging whether or not these two pictures belonged to the same semantic category. The spoken word was presented between the two pictures, and participants were not required to respond during its presentation. We manipulated phonological competition by presenting spoken words that either matched or mismatched the target picture in one of the following four ways: onset mismatch, rime mismatch, tone mismatch, or syllable mismatch. In contrast to the English findings, our findings showed that the three partial mismatches (onset, rime, and tone mismatches) equally modulated the amplitudes and time courses of the N400 (a negative component that peaks about 400 ms after the spoken word), whereas the syllable-mismatched words elicited an earlier and stronger N400 than the three partial-mismatched words. The results shed light on the important role of syllable-level awareness in Chinese spoken word recognition and also imply that the recognition of Chinese monosyllabic words might rely more on global similarity of the whole syllable structure, or syllable-based holistic processing, rather than phonemic segment-based processing. We interpret the differences in spoken word

  9. The Skilled Use of Interaction Strategies: Creating a Framework for Improved Small-Group Communicative Interaction in the Language Classroom.

    Science.gov (United States)

    Bejarano, Yael; And Others

    1997-01-01

    Focuses on the need to provide English-as-a-Second-Language learners with preparatory training to ensure more effective communicative interaction during group work conducted in the classroom. Findings indicate that Israeli students exposed to training in the skilled use of interaction strategies used significantly more modified-interaction and…

  10. Bilingualism alters brain functional connectivity between "control" regions and "language" regions: Evidence from bimodal bilinguals.

    Science.gov (United States)

    Li, Le; Abutalebi, Jubin; Zou, Lijuan; Yan, Xin; Liu, Lanfang; Feng, Xiaoxia; Wang, Ruiming; Guo, Taomei; Ding, Guosheng

    2015-05-01

    Previous neuroimaging studies have revealed that bilingualism induces both structural and functional neuroplasticity in the dorsal anterior cingulate cortex (dACC) and the left caudate nucleus (LCN), both of which are associated with cognitive control. Since these "control" regions should work together with other language regions during language processing, we hypothesized that bilingualism may also alter the functional interaction between the dACC/LCN and language regions. Here we tested this hypothesis by exploring the functional connectivity (FC) in bimodal bilinguals and monolinguals using functional MRI when they either performed a picture naming task with spoken language or were in resting state. We found that for bimodal bilinguals who use spoken and sign languages, the FC of the dACC with regions involved in spoken language (e.g. the left superior temporal gyrus) was stronger in performing the task, but weaker in the resting state as compared to monolinguals. For the LCN, its intrinsic FC with sign language regions including the left inferior temporo-occipital part and right inferior and superior parietal lobules was increased in the bilinguals. These results demonstrate that bilingual experience may alter the brain functional interaction between "control" regions and "language" regions. For different control regions, the FC alters in different ways. The findings also deepen our understanding of the functional roles of the dACC and LCN in language processing.

  11. Serbian heritage language schools in the Netherlands through the eyes of the parents

    NARCIS (Netherlands)

    Palmen, Andrej

    It is difficult to find the exact number of other languages spoken besides Dutch in the Netherlands. A study showed that a total of 96 other languages are spoken by students attending Dutch primary and secondary schools. The variety of languages spoken shows the growth of linguistic diversity in the

  12. Language use in the informed consent discussion for emergency procedures.

    Science.gov (United States)

    McCarthy, Danielle M; Leone, Katrina A; Salzman, David H; Vozenilek, John A; Cameron, Kenzie A

    2012-01-01

    The field of health literacy has closely examined the readability of written health materials to optimize patient comprehension. Few studies have examined spoken communication in a way that is comparable to analyses of written communication. The study objective was to characterize the structural elements of residents' spoken words while obtaining informed consent. Twenty-six resident physicians participated in a simulated informed consent discussion with a standardized patient. Audio recordings of the discussions were transcribed and analyzed to assess grammar statistics for evaluating language complexity (e.g., reading grade level). Transcripts and time values were used to assess structural characteristics of the dialogue (e.g., interactivity). Discussions were characterized by physician verbal dominance. The discussions were interactive but showed significant differences between the physician and patient speech patterns for all language complexity metrics. In this study, physicians spoke significantly more and used more complex language than the patients.
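A reading grade level like the one this abstract mentions can be estimated with standard readability formulas. As an illustrative sketch only (the study's actual analysis tooling is not specified), the Flesch-Kincaid grade level of a transcript can be computed with a naive vowel-group syllable counter:

```python
import re

def count_syllables(word: str) -> int:
    # Naive heuristic: each run of consecutive vowels counts as one syllable.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text: str) -> float:
    # Grade = 0.39 * (words/sentence) + 11.8 * (syllables/word) - 15.59
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * (len(words) / len(sentences))
            + 11.8 * (syllables / len(words))
            - 15.59)

# Hypothetical snippets contrasting simple and complex speech.
simple = flesch_kincaid_grade("The cat sat. The dog ran.")
complex_talk = flesch_kincaid_grade(
    "Physicians utilized considerably sophisticated terminology "
    "throughout the conversation.")
```

On this heuristic, the physician-style sentence scores far higher than the simple one, mirroring the complexity gap between physician and patient speech that the study reports; production tools use more careful syllable counting than this sketch.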

  13. Linguistic adaptations during spoken and multimodal error resolution.

    Science.gov (United States)

    Oviatt, S; Bernard, J; Levow, G A

    1998-01-01

    Fragile error handling in recognition-based systems is a major problem that degrades their performance, frustrates users, and limits commercial potential. The aim of the present research was to analyze the types and magnitude of linguistic adaptation that occur during spoken and multimodal human-computer error resolution. A semiautomatic simulation method with a novel error-generation capability was used to collect samples of users' spoken and pen-based input immediately before and after recognition errors, and at different spiral depths in terms of the number of repetitions needed to resolve an error. When correcting persistent recognition errors, results revealed that users adapt their speech and language in three qualitatively different ways. First, they increase linguistic contrast through alternation of input modes and lexical content over repeated correction attempts. Second, when correcting with verbatim speech, they increase hyperarticulation by lengthening speech segments and pauses, and increasing the use of final falling contours. Third, when they hyperarticulate, users simultaneously suppress linguistic variability in their speech signal's amplitude and fundamental frequency. These findings are discussed from the perspective of enhancement of linguistic intelligibility. Implications are also discussed for corroboration and generalization of the Computer-elicited Hyperarticulate Adaptation Model (CHAM), and for improved error handling capabilities in next-generation spoken language and multimodal systems.

  14. Web Delivery of Adaptive and Interactive Language Tutoring: Revisited

    Science.gov (United States)

    Heift, Trude

    2016-01-01

    This commentary reconsiders the description and assessment of the design and implementation of "German Tutor," an Intelligent Language Tutoring System (ILTS) for learners of German as a foreign language, published in 2001. Based on our experience over the past 15 years with the design and real classroom use of an ILTS, we address a…

  15. VOCATION OF LANGUAGE FOR INTERNATIONAL COMMUNICATION – A PREDICTION TOOL FOR FUTURE EVOLUTIONS IN GLOBAL COMMUNICATION

    Directory of Open Access Journals (Sweden)

    Gabriel-Cristian CONSTANTINESCU

    2015-12-01

    Full Text Available The paper proposes a new perspective that explains the convergence toward an increasingly smaller number of languages in communication between speakers of different native languages: the "vocation of a language for international communication". For the population of a country, exposure to its official language through implicit interaction with it means that the majority of citizens understand this language. Correlating the populations of these countries with the spread of these languages across countries and continents generates a hierarchy of languages at the global or regional level. English has the strongest vocation for international communication at the global level, followed by French and Spanish, while Russian and Arabic have a strong vocation only at the regional level. Chinese has only a medium vocation at the regional level, as do German, Portuguese, Italian and Dutch. 23 languages officially spoken in at least 2 countries and another 88 official languages of a single country are grouped into 5 clusters by their vocation for international communication.

  16. Irish language broadcast media: the interaction of state language policy, broadcasters and their audiences

    OpenAIRE

    Ó hIfearnáin, Tadhg

    2000-01-01

    peer-reviewed The position of Irish on the airwaves now and through recent history has always been closely linked to the strength of the language in society, its position in public opinion and national language policy and the place of the state-owned broadcaster and its subsidiary channels within the broadcasting domain. Government legislation regulates the private and voluntary sectors, which may also receive indirect state subsidies for Irish language programming. It is therefore impos...

  17. Digital gaming and second language development: Japanese learners interactions in a MMORPG

    OpenAIRE

    Mark Peterson

    2011-01-01

    Massively multiplayer online role-playing games (MMORPGs) are identified as valuable arenas for language learning, as they provide access to contexts and types of interaction that are held to be beneficial in second language acquisition research. This paper will describe the development and key features of these games, and explore claims made regarding their value as environments for language learning. The discussion will then examine current research. This is followed by an analysis of t...

  18. Using English interactively: interdependence of language and intercultural communication

    OpenAIRE

    Shpresa, Qatipi

    2013-01-01

    Language learning and teaching have changed a lot in the course of time. One of the major changes has been the shift from a linguistic centered approach towards a linguistic and cultural perspective and experience in which the whole process develops in line with the understanding of the target and learners’ culture. The following article attempts to focus, describe and reflect upon the experience gained by the teachers of the English Department, Faculty of Foreign Languages in the University ...

  19. Acquiring Interactional Competence in a Study Abroad Context: Japanese Language Learners' Use of the Interactional Particle "ne"

    Science.gov (United States)

    Masuda, Kyoko

    2011-01-01

    This study examines the development of interactional competence (Hall, 1993, 1995) by English-speaking learners of Japanese as a foreign language (JFL) in a study abroad setting, as indexed by their use of the interactionally significant particle "ne." The analysis is based on a comparison of (a) 6 sets of conversations between JFL learners and…

  20. Interactive Alignment: A Teaching-Friendly View of Second Language Pronunciation Learning

    Science.gov (United States)

    Trofimovich, Pavel

    2016-01-01

    Interactive alignment is a phenomenon whereby interlocutors adopt and re-use each other's language patterns in the course of authentic interaction. According to the interactive alignment model, originally proposed by Pickering & Garrod (2004), this linguistic coordination is one way in which interlocutors achieve understanding in dialogue,…

  1. The Metapragmatic Regimentation of Heritage Language Use in Hispanic Canadian Caregiver-Child Interactions

    Science.gov (United States)

    Guardado, Martin

    2013-01-01

    This article investigates the linguistic tools employed by Hispanic Canadian families in their language socialization efforts of fostering sustained heritage language (HL) use. The article is based on data collected during a 1½-year ethnography, and focuses on the metapragmatic devices used in daily interactions. Utilizing analytic tools from the…

  2. Child Language Brokering in Linguistic Communities: Effects on Cultural Interaction, Cognition, and Literacy.

    Science.gov (United States)

    McQuillan, Jeff; Tse, Lucy

    1995-01-01

    This study examines the contexts of cultural interaction and the development of cognition and language among language minority children who brokered for their limited-English-speaking parents. Nine subjects who brokered for their parents as children were interviewed to determine the effects of brokering. (JL)

  3. Multimodal Language Learner Interactions via Desktop Videoconferencing within a Framework of Social Presence: Gaze

    Science.gov (United States)

    Satar, H. Muge

    2013-01-01

    Desktop videoconferencing (DVC) offers many opportunities for language learning through its multimodal features. However, it also brings some challenges such as gaze and mutual gaze, that is, eye-contact. This paper reports some of the findings of a PhD study investigating social presence in DVC interactions of English as a Foreign Language (EFL)…

  4. Teaching materials on language endangerment, an interactive e-learning module on the internet

    NARCIS (Netherlands)

    Odé, C.; de Graaf, T.; Ostler, N.; Salverda, R.

    2008-01-01

    In 2007, in the framework of the NWO (Netherlands Organisation for Scientific Research) Research Programme on Endangered Languages, an interactive e-learning module has been developed on language endangerment. The module for students in secondary schools (15-18 years of age) is available free of

  5. Meaning-Making in Online Language Learner Interactions via Desktop Videoconferencing

    Science.gov (United States)

    Satar, H. Müge

    2016-01-01

    Online language learning and teaching in multimodal contexts has been identified as one of the key research areas in computer-assisted language learning (CALL) (Lamy, 2013; White, 2014). This paper aims to explore meaning-making in online language learner interactions via desktop videoconferencing (DVC) and in doing so illustrate multimodal transcription and…

  6. The Role of Heritage Language in Social Interactions and Relationships: Reflections from a Language Minority Group.

    Science.gov (United States)

    Cho, Grace

    2000-01-01

    A study examining the role of heritage language (HL) competence in social relationships among second-generation language minorities surveyed 114 Korean Americans. HL speakers had a strong ethnic identity and a greater understanding and knowledge of cultural values, ethics, and manners than HL nonspeakers. HL competence also provided professional…

  7. How Do Raters Judge Spoken Vocabulary?

    Science.gov (United States)

    Li, Hui

    2016-01-01

    The aim of the study was to investigate how raters come to their decisions when judging spoken vocabulary. Segmental rating was introduced to quantify raters' decision-making process. It is hoped that this simulated study brings fresh insight to future methodological considerations with spoken data. Twenty trainee raters assessed five Chinese…

  8. Dialogue: Interactive Alignment and Its Implications for Language Learning and Language Change

    Science.gov (United States)

    Garrod, Simon; Pickering, Martin J.

    This chapter discusses language processing during conversation. In particular, it considers why taking part in a conversation is more straightforward than speaking or listening in isolation. We argue that conversation is easy because speakers and listeners automatically align with each other at different linguistic levels (e.g., sound, grammar, meaning) which leads to alignment at the level of interpretation. This alignment process is reflected in the repetitiveness of dialogue at different levels and occurs both on the basis of local mechanisms of priming and more global mechanisms of routinization. We argue that the latter process may tell us something about both acquisition of language and historical processes of language change.

  9. Development and Relationships Between Phonological Awareness, Morphological Awareness and Word Reading in Spoken and Standard Arabic

    Directory of Open Access Journals (Sweden)

    Rachel Schiff

    2018-04-01

    Full Text Available This study addressed the development of and the relationship between foundational metalinguistic skills and word reading skills in Arabic. It compared Arabic-speaking children’s phonological awareness (PA), morphological awareness, and voweled and unvoweled word reading skills in spoken and standard language varieties separately in children across five grade levels from childhood to adolescence. Second, it investigated whether skills developed in the spoken variety of Arabic predict reading in the standard variety. Results indicate that although individual differences between students in PA are eliminated toward the end of elementary school in both spoken and standard language varieties, gaps in morphological awareness and in reading skills persisted through the junior high and high school years. The results also show that the gap in reading accuracy and fluency between Spoken Arabic (SpA) and Standard Arabic (StA) was evident in both voweled and unvoweled words. Finally, regression analyses showed that morphological awareness in SpA contributed to reading fluency in StA; that is, children’s early morphological awareness in SpA explained variance in children’s gains in reading fluency in StA. These findings have important theoretical and practical implications for Arabic reading theory in general, and they extend previous work on the cross-linguistic relevance of foundational metalinguistic skills in the first acquired language to reading in a second language, as in societal bilingualism contexts, or in a second language variety, as in diglossic contexts.

  10. GAIML: A New Language for Verbal and Graphical Interaction in Chatbots

    Directory of Open Access Journals (Sweden)

    Roberto Pirrone

    2008-01-01

    Full Text Available Natural and intuitive interaction between users and complex systems is a crucial research topic in human-computer interaction. A major direction is the definition and implementation of systems with natural language understanding capabilities. Interaction in natural language is often performed by means of systems called chatbots. A chatbot is a conversational agent with a proper knowledge base that is able to interact with users. A chatbot's appearance can be very sophisticated, with 3D avatars and speech processing modules; however, the interaction between the system and the user is performed only through textual areas for inputs and replies. An interaction that supplements natural language with graphical widgets could be more effective. Conversely, a graphical interaction that also involves natural language can be more comfortable for the user than graphical widgets alone. In many applications, multimodal communication is preferable when the user and the system have a tight and complex interaction. Typical examples are cultural heritage applications (intelligent museum guides, picture browsing) or systems providing the user with integrated information taken from different and heterogeneous sources, as in the case of the iGoogle™ interface. We propose to mix the two modalities (verbal and graphical) to build systems with a reconfigurable interface, which is able to change with respect to the particular application context. The result of this proposal is the Graphical Artificial Intelligence Markup Language (GAIML), an extension of AIML that allows merging both interaction modalities. In this context a suitable chatbot system called Graphbot is presented to support this language. With this language it is possible to define personalized interface patterns that are best suited to the data types exchanged between the user and the system according to the context of the dialogue.
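    The idea of an AIML-style category extended with a graphical widget can be illustrated with a small sketch. The element names below (`gaiml`, `category`, `pattern`, `template`, `widget`) follow AIML conventions, but the fragment is a hypothetical approximation for illustration, not the published GAIML schema.

```python
# Hypothetical sketch of an AIML-style category extended with a graphical
# widget, in the spirit of GAIML (element names are illustrative only).
import xml.etree.ElementTree as ET

GAIML_FRAGMENT = """
<gaiml>
  <category>
    <pattern>SHOW ME PAINTINGS BY *</pattern>
    <template>
      <text>Here are paintings by the requested artist.</text>
      <widget type="image-gallery" source="museum-db"/>
    </template>
  </category>
</gaiml>
"""

def match_pattern(root, user_input):
    """Return the template element whose pattern matches the input.

    Only the trailing-wildcard case is handled, to keep the sketch small.
    """
    for category in root.iter("category"):
        pattern = category.findtext("pattern")
        prefix = pattern.rstrip("* ").upper()
        if user_input.upper().startswith(prefix):
            return category.find("template")
    return None

root = ET.fromstring(GAIML_FRAGMENT)
template = match_pattern(root, "show me paintings by Caravaggio")
print(template.findtext("text"))            # verbal reply
print(template.find("widget").get("type"))  # graphical widget to render
```

    A chatbot engine along these lines would render the `<text>` reply in the dialogue area and instantiate the `<widget>` as an interface component, reconfiguring the interface per dialogue context.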

  11. Language use in females with fragile X or Turner syndrome during brief initial social interactions.

    Science.gov (United States)

    Mazzocco, Michèle M M; Thompson, Laurie; Sudhalter, Vicki; Belser, Richard C; Lesniak-Karpiak, Katarzyna; Ross, Judith L

    2006-08-01

    Fragile X and Turner syndromes are associated with risk of atypical social function. We examined language use, including normal and atypical speech, during initial social interactions among participants engaged in a brief social role play with an unfamiliar adult. There were 27 participants with Turner syndrome, 20 with fragile X syndrome and 28 in an age-matched comparison group. Females with fragile X did not exhibit more abnormal language, but exhibited less of what is typical during initial interactions. Overall rates of dysfluencies did not differ, although females with fragile X made more phrase repetitions. Females with Turner syndrome had no language use abnormalities. Our findings suggest that language use may influence social function in females with fragile X syndrome and that such language characteristics may be observed in the context of brief encounters with an unfamiliar adult.

  12. Language discrimination by Java sparrows.

    Science.gov (United States)

    Watanabe, Shigeru; Yamamoto, Erico; Uozumi, Midori

    2006-07-01

    Java sparrows (Padda oryzivora) were trained to discriminate English from Chinese spoken by a bilingual speaker. They could learn discrimination and showed generalization to new sentences spoken by the same speaker and those spoken by a new speaker. Thus, the birds distinguished between English and Chinese. Although auditory cues for the discrimination were not specified, this is the first evidence that non-mammalian species can discriminate human languages.

  13. The Effect of Interactivity with a Music Video Game on Second Language Vocabulary Recall

    Directory of Open Access Journals (Sweden)

    Jonathan DeHaan

    2010-06-01

    Full Text Available Video games are potential sources of second language input; however, the medium’s fundamental characteristic, interactivity, has not been thoroughly examined in terms of its effect on learning outcomes. This experimental study investigated to what degree, if at all, video game interactivity would help or hinder the noticing and recall of second language vocabulary. Eighty randomly-selected Japanese university undergraduates were paired based on similar English language and game proficiencies. One subject played an English-language music video game for 20 minutes while the paired subject watched the game simultaneously on another monitor. Following gameplay, a vocabulary recall test, a cognitive load measure, an experience questionnaire, and a two-week delayed vocabulary recall test were administered. Results were analyzed using paired samples t-tests and various analyses of variance. Both the players and the watchers of the video game recalled vocabulary from the game, but the players recalled significantly less vocabulary than the watchers. This seems to be a result of the extraneous cognitive load induced by the interactivity of the game; the players perceived the game and its language to be significantly more difficult than the watchers did. Players also reported difficulty simultaneously attending to gameplay and vocabulary. Both players and watchers forgot significant amounts of vocabulary over the course of the study. We relate these findings to theories and studies of vocabulary acquisition and video game-based language learning, and then suggest implications for language teaching and learning with interactive multimedia.
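    The paired samples t-tests used to compare player and watcher recall can be sketched as follows. The recall scores below are invented for illustration; they are not data from the study.

```python
# Paired samples t-test on hypothetical player/watcher recall scores
# (invented numbers; the study's actual data are not reproduced here).
import math
import statistics

player_recall  = [4, 6, 5, 7, 3, 5, 6, 4]   # words recalled by each player
watcher_recall = [7, 8, 6, 9, 6, 7, 8, 6]   # words recalled by paired watcher

def paired_t(xs, ys):
    """Return the t statistic and degrees of freedom for paired samples."""
    diffs = [x - y for x, y in zip(xs, ys)]
    n = len(diffs)
    mean_d = statistics.mean(diffs)
    sd_d = statistics.stdev(diffs)          # sample standard deviation
    t = mean_d / (sd_d / math.sqrt(n))
    return t, n - 1

t_stat, df = paired_t(player_recall, watcher_recall)
print(f"t({df}) = {t_stat:.2f}")            # negative t: players recall less
```

    With these invented scores the players recall consistently less than their paired watchers, mirroring the direction of the reported effect.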

  14. LANGUAGE DEVELOPMENT IN STUDY ABROAD (SA) CONTEXT AND RELATIONSHIP WITH INPUT AND INTERACTION IN SLA

    Directory of Open Access Journals (Sweden)

    suryani suryani

    2015-05-01

    Full Text Available Language learning can occur anytime and anywhere (context). In terms of context, language learning can take place either at home or in a study abroad context. This article presents the necessary background from the existing literature and previous research on language development in various contexts, more specifically in a study abroad (SA) context. Studying abroad can foster language development from a number of perspectives. Research findings reveal that language development can take a variety of forms, including grammar, vocabulary, fluency, and communicative skill. These findings are reviewed in order to reach a clear understanding of this issue. The article then gives a brief explanation of the role of input and interaction in SLA, with some views on it.

  15. Interactions of Identity: Indochinese Refugee Youths, Language Use, and Schooling.

    Science.gov (United States)

    Kuwahara, Yuri

    A study examined the roles of language and school in the lives of a group of five Indochinese friends, aged 10-12, in the same sixth-grade class. Two were born in the United States; three were born in Thai refugee camps. The ways in which the subjects defined themselves in relation to other students, particularly other Asian students, and to each…

  16. Dynamic Adaptation in Child-Adult Language Interaction

    Science.gov (United States)

    van Dijk, Marijn; van Geert, Paul; Korecky-Kröll, Katharina; Maillochon, Isabelle; Laaha, Sabine; Dressler, Wolfgang U.; Bassano, Dominique

    2013-01-01

    When speaking to young children, adults adapt their language to that of the child. In this article, we suggest that this child-directed speech (CDS) is the result of a transactional process of dynamic adaptation between the child and the adult. The study compares developmental trajectories of three children to those of the CDS of their caregivers.…

  17. Language Impairment, Family Interaction and the Design of a Game

    Science.gov (United States)

    Noel, Guillermina

    2008-01-01

    This case study describes a user-centered design approach in the area of aphasia. Aphasia is a language impairment that can take many forms, so a particular case provides the foundation for this work. The particularities of the individual with this condition and his social context are key to developing and designing an intervention that supports…

  18. Foreign Language Anxiety Levels in Second Life Oral Interaction

    Science.gov (United States)

    Melchor-Couto, Sabela

    2017-01-01

    Virtual worlds have been described as low anxiety environments (Dickey, 2005), where students may feel "shielded" behind their avatars (Rosell-Aguilar, 2005: 432). The aim of this article is to analyse the evolution of the Foreign Language Anxiety (FLA) levels experienced by a group of participants who used the virtual world "Second…

  19. Parent-child interactions in normal and language-disordered children.

    Science.gov (United States)

    Lasky, E Z; Klopp, K

    1982-02-01

    Interactions between young children and their parents or guardians are critical factors in child language acquisition. The purpose of this study is to describe verbal and nonverbal communication patterns that occur in parent-to-child and child-to-parent interactions with normally developing children and children with language disorders. Thirty verbal and nonverbal behaviors were analyzed from videotapes of mother-child interactions. As a group, the mothers of normally developing children did not differ from the mothers of children with language disorders in the frequency of use of verbal or nonverbal interactions or in mean length of utterance. There were no significant differences between the groups of children in frequency of use of each interaction pattern. What differed was the number of significant relationships between measures of linguistic maturity in the normally developing children and their mothers' interaction patterns, relationships that were not apparent for the language-disordered children and their mothers. Mothers' frequency of expansions, exact and reduced imitations, questions, answers, acknowledgements, provision of information, total nonverbal behaviors, and nonverbal deixis were all related to some measures of the normally developing children's linguistic maturity. These relationships were infrequent in the language-disordered group.

  20. Technology-enhanced instruction in learning world languages: The Middlebury interactive learning program

    Directory of Open Access Journals (Sweden)

    Cynthia Lake

    2015-03-01

    Full Text Available Middlebury Interactive Language (MIL) programs are designed to teach world language courses using blended and online learning for students in kindergarten through grade 12. Middlebury Interactive courses start with fundamental building blocks in four key areas of world-language study: listening comprehension, speaking, reading, and writing. As students progress through the course levels, they deepen their understanding of the target language, continuing to focus on the three modes of communication: interpretive, interpersonal, and presentational. The extensive use of authentic materials (video, audio, images, or texts) is intended to provide a contextualized and interactive presentation of the vocabulary and the linguistic structures. In the present paper, we describe the MIL program and the results of a mixed-methods survey and case-study evaluation of its implementation in a broad sample of schools. Technology application is examined with regard to MIL instructional strategies, and the present evaluation approach is considered relative to those employed in the literature.

  1. Early human communication helps in understanding language evolution.

    Science.gov (United States)

    Lenti Boero, Daniela

    2014-12-01

    Building a theory on extant species, as Ackermann et al. do, is a useful contribution to the field of language evolution. Here, I add another living model that might be of interest: human language ontogeny in the first year of life. A better knowledge of this phase might help in understanding two more topics among the "several building blocks of a comprehensive theory of the evolution of spoken language" indicated in their conclusion by Ackermann et al., that is, the foundation of the co-evolution of linguistic motor skills with the auditory skills underlying speech perception, and the possible phylogenetic interactions of protospeech production with referential capabilities.

  2. Digital Language Death

    Science.gov (United States)

    Kornai, András

    2013-01-01

    Of the approximately 7,000 languages spoken today, some 2,500 are generally considered endangered. Here we argue that this consensus figure vastly underestimates the danger of digital language death, in that less than 5% of all languages can still ascend to the digital realm. We present evidence of a massive die-off caused by the digital divide. PMID:24167559

  3. Digital language death.

    Directory of Open Access Journals (Sweden)

    András Kornai

    Full Text Available Of the approximately 7,000 languages spoken today, some 2,500 are generally considered endangered. Here we argue that this consensus figure vastly underestimates the danger of digital language death, in that less than 5% of all languages can still ascend to the digital realm. We present evidence of a massive die-off caused by the digital divide.

  4. Multiclausal Utterances Aren't Just for Big Kids: A Framework for Analysis of Complex Syntax Production in Spoken Language of Preschool- and Early School-Age Children

    Science.gov (United States)

    Arndt, Karen Barako; Schuele, C. Melanie

    2013-01-01

    Complex syntax production emerges shortly after the emergence of two-word combinations in oral language and continues to develop through the school-age years. This article defines a framework for the analysis of complex syntax in the spontaneous language of preschool- and early school-age children. The purpose of this article is to provide…

  5. Language Development in Children with Language Disorders: An Introduction to Skinner's Verbal Behavior and the Techniques for Initial Language Acquisition

    Science.gov (United States)

    Casey, Laura Baylot; Bicard, David F.

    2009-01-01

    Language development in typically developing children has a very predictable pattern beginning with crying, cooing, babbling, and gestures along with the recognition of spoken words, comprehension of spoken words, and then one word utterances. This predictable pattern breaks down for children with language disorders. This article will discuss…

  6. Interacting domain-specific languages with biological problem solving environments

    Science.gov (United States)

    Cickovski, Trevor M.

    Iteratively developing a biological model and verifying results against lab observations has become standard practice in computational biology. This process is currently facilitated by biological Problem Solving Environments (PSEs), multi-tiered and modular software frameworks which traditionally consist of two layers: a computational layer written in a high-level language using design patterns, and a user interface layer which hides its details. Although PSEs have proven effective, they still impose communication overhead between biologists, who refine their models through repeated comparison with experimental observations in vitro or in vivo, and programmers, who actually implement model extensions and modifications within the computational layer. I illustrate the use of biological Domain-Specific Languages (DSLs) as a middle PSE tier to ameliorate this problem, providing experimentalists with the ability to iteratively test and develop their models with a higher degree of expressive power than a graphical interface allows, while removing the requirement of general-purpose programming knowledge. I develop two radically different biological DSLs: the XML-based BIOLOGO, which models biological morphogenesis using a cell-centered stochastic cellular automaton and translates into C++ modules for the object-oriented PSE COMPUCELL3D, and MDLab, which provides a set of high-level Python libraries for running molecular dynamics simulations using wrapped functionality from the C++ PSE PROTOMOL. I describe each language in detail, including its role within the larger PSE and its expressibility in terms of representable phenomena, and discuss observations from users of the languages. Moreover, I use these studies to draw general conclusions about biological DSL development, including dependencies upon the goals of the corresponding PSE, strategies, and tradeoffs.
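    The general idea of a DSL tier over a lower-level simulation engine can be sketched minimally: a rule written in high-level, domain-flavored terms is handed to an engine that applies it cell by cell. The decorator, rule, and `step` function below are illustrative inventions, not the actual BIOLOGO or MDLab syntax.

```python
# Minimal sketch of an embedded DSL for a 1-D cellular automaton step,
# illustrating a domain layer over a generic engine (this is NOT the
# actual BIOLOGO or MDLab syntax; all names here are hypothetical).

def rule(fn):
    """Decorator marking a function as a local update rule."""
    fn.is_rule = True
    return fn

@rule
def majority(left, center, right):
    """Cell adopts the majority state of its 3-cell neighborhood."""
    return 1 if left + center + right >= 2 else 0

def step(cells, update):
    """Apply a rule to every cell, with fixed 0 boundaries (the 'engine')."""
    padded = [0] + cells + [0]
    return [update(padded[i - 1], padded[i], padded[i + 1])
            for i in range(1, len(padded) - 1)]

print(step([0, 1, 1, 0, 1], majority))
```

    The experimentalist writes only the decorated rule; the engine, boundary handling, and iteration stay hidden, which is the division of labor the middle DSL tier is meant to provide.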

  7. Music and Language Syntax Interact in Broca's Area: An fMRI Study.

    Directory of Open Access Journals (Sweden)

    Richard Kunert

    Full Text Available Instrumental music and language are both syntactic systems, employing complex, hierarchically-structured sequences built using implicit structural norms. This organization allows listeners to understand the role of individual words or tones in the context of an unfolding sentence or melody. Previous studies suggest that the brain mechanisms of syntactic processing may be partly shared between music and language. However, functional neuroimaging evidence for anatomical overlap of brain activity involved in linguistic and musical syntactic processing has been lacking. In the present study we used functional magnetic resonance imaging (fMRI) in conjunction with an interference paradigm based on sung sentences. We show that the processing demands of musical syntax (harmony) and language syntax interact in Broca's area in the left inferior frontal gyrus (without leading to music and language main effects). A language main effect in Broca's area only emerged in the complex music harmony condition, suggesting that (with our stimuli and tasks) a language effect only becomes visible under conditions of increased demands on shared neural resources. In contrast to previous studies, our design allows us to rule out that the observed neural interaction is due to: (1) general attention mechanisms, as a psychoacoustic auditory anomaly behaved unlike the harmonic manipulation, (2) error processing, as the language and the music stimuli contained no structural errors. The current results thus suggest that two different cognitive domains-music and language-might draw on the same high level syntactic integration resources in Broca's area.

  8. Music and Language Syntax Interact in Broca's Area: An fMRI Study.

    Science.gov (United States)

    Kunert, Richard; Willems, Roel M; Casasanto, Daniel; Patel, Aniruddh D; Hagoort, Peter

    2015-01-01

    Instrumental music and language are both syntactic systems, employing complex, hierarchically-structured sequences built using implicit structural norms. This organization allows listeners to understand the role of individual words or tones in the context of an unfolding sentence or melody. Previous studies suggest that the brain mechanisms of syntactic processing may be partly shared between music and language. However, functional neuroimaging evidence for anatomical overlap of brain activity involved in linguistic and musical syntactic processing has been lacking. In the present study we used functional magnetic resonance imaging (fMRI) in conjunction with an interference paradigm based on sung sentences. We show that the processing demands of musical syntax (harmony) and language syntax interact in Broca's area in the left inferior frontal gyrus (without leading to music and language main effects). A language main effect in Broca's area only emerged in the complex music harmony condition, suggesting that (with our stimuli and tasks) a language effect only becomes visible under conditions of increased demands on shared neural resources. In contrast to previous studies, our design allows us to rule out that the observed neural interaction is due to: (1) general attention mechanisms, as a psychoacoustic auditory anomaly behaved unlike the harmonic manipulation, (2) error processing, as the language and the music stimuli contained no structural errors. The current results thus suggest that two different cognitive domains-music and language-might draw on the same high level syntactic integration resources in Broca's area.

  9. The Effect of Interactivity with a Music Video Game on Second Language Vocabulary Recall

    Science.gov (United States)

    deHaan, Jonathan; Reed, W. Michael; Kuwada, Katsuko

    2010-01-01

    Video games are potential sources of second language input; however, the medium's fundamental characteristic, interactivity, has not been thoroughly examined in terms of its effect on learning outcomes. This experimental study investigated to what degree, if at all, video game interactivity would help or hinder the noticing and recall of second…

  10. Player-Game Interaction: An Ecological Analysis of Foreign Language Gameplay Activities

    Science.gov (United States)

    Ibrahim, Karim

    2018-01-01

    This article describes how the literature on game-based foreign language (FL) learning has demonstrated that player-game interactions have a strong potential for FL learning. However, little is known about the fine-grained dynamics of these interactions, or how they could facilitate FL learning. To address this gap, the researcher conducted a…

  11. Developing the Second Language Writing Process through Social Media-Based Interaction Tasks

    Science.gov (United States)

    Gómez, Julian Esteban Zapata

    2015-01-01

    This paper depicts the results from a qualitative research study focused on finding out the effect of interaction through social media on the development of second language learners' written production from a private school in Medellín, Antioquia, Colombia. The study was framed within concepts such as "social interaction," "digital…

  12. Sexual Identity as Linguistic Failure: Trajectories of Interaction in the Heteronormative Language Classroom

    Science.gov (United States)

    Liddicoat, Anthony J.

    2009-01-01

    This article examines interactions from tertiary-level foreign languages classes in which students challenge the heteronormative construction of their sexual identity. These interactions are triggered by questions that potentially reference students' real-world identities but which attribute a heteronormative identity to the questions' recipients.…

  13. Language learning, recasts, and interaction involving AAC: background and potential for intervention.

    Science.gov (United States)

    Clarke, Michael T; Soto, Gloria; Nelson, Keith

    2017-03-01

    For children with typical development, language is learned through everyday discursive interaction. Adults mediate child participation in such interactions through the deployment of a range of co-constructive strategies, including repeating, questioning, prompting, expanding, and reformulating the child's utterances. Adult reformulations of child utterances, also known as recasts, have also been shown to relate to the acquisition of linguistic structures in children with language and learning disabilities and children and adults learning a foreign language. In this paper we discuss the theoretical basis and empirical evidence for the use of different types of recasts as a major language learning catalyst, and what may account for their facilitative effects. We consider the occurrence of different types of recasts in AAC-mediated interactions and their potential for language facilitation, within the typical operational and linguistic constraints of such interactions. We also consider the benefit of explicit and corrective forms of recasts for language facilitation in conversations with children who rely on AAC. We conclude by outlining future research directions.

  14. Between Syntax and Pragmatics: The Causal Conjunction Protože in Spoken and Written Czech

    Czech Academy of Sciences Publication Activity Database

    Čermáková, Anna; Komrsková, Zuzana; Kopřivová, Marie; Poukarová, Petra

    -, 25.04.2017 (2017), s. 393-414 ISSN 2509-9507 R&D Projects: GA ČR GA15-01116S Institutional support: RVO:68378092 Keywords : Causality * Discourse marker * Spoken language * Czech Subject RIV: AI - Linguistics OBOR OECD: Linguistics https://link.springer.com/content/pdf/10.1007%2Fs41701-017-0014-y.pdf

  15. Why Dose Frequency Affects Spoken Vocabulary in Preschoolers with Down Syndrome

    Science.gov (United States)

    Yoder, Paul J.; Woynaroski, Tiffany; Fey, Marc E.; Warren, Steven F.; Gardner, Elizabeth

    2015-01-01

    In an earlier randomized clinical trial, daily communication and language therapy resulted in more favorable spoken vocabulary outcomes than weekly therapy sessions in a subgroup of initially nonverbal preschoolers with intellectual disabilities that included only children with Down syndrome (DS). In this reanalysis of the dataset involving only…

  16. Webster's word power better English grammar improve your written and spoken English

    CERN Document Server

    Kirkpatrick, Betty

    2014-01-01

    With questions and answer sections throughout, this book helps you to improve your written and spoken English through understanding the structure of the English language. This is a thorough and useful book with all parts of speech and grammar explained. Used by ELT self-study students.

  17. Monitoring the Performance of Human and Automated Scores for Spoken Responses

    Science.gov (United States)

    Wang, Zhen; Zechner, Klaus; Sun, Yu

    2018-01-01

    As automated scoring systems for spoken responses are increasingly used in language assessments, testing organizations need to analyze their performance, as compared to human raters, across several dimensions, for example, on individual items or based on subgroups of test takers. In addition, there is a need in testing organizations to establish…

  18. Mood contagion of robot body language in human robot interaction

    NARCIS (Netherlands)

    Xu, J.; Broekens, J.; Hindriks, K.; Neerincx, M.A.

    2015-01-01

    The aim of our work is to design bodily mood expressions of humanoid robots for interactive settings that can be recognized by users and have (positive) effects on people who interact with the robots. To this end, we develop a parameterized behavior model for humanoid robots to express mood through…

  19. Interaction and common ground in dementia: Communication across linguistic and cultural diversity in a residential dementia care setting.

    Science.gov (United States)

    Strandroos, Lisa; Antelius, Eleonor

    2017-09-01

    Previous research concerning bilingual people with a dementia disease has mainly focused on the importance of sharing a spoken language with caregivers. While acknowledging this, this article addresses the multidimensional character of communication and interaction. As the dementia disease makes using spoken language difficult, this multidimensionality becomes particularly important. The article is based on a qualitative analysis of ethnographic fieldwork at a dementia care facility. It presents ethnographic examples of different communicative forms, with particular focus on bilingual interactions. Interaction is understood as a collective and collaborative activity. The article finds that a shared spoken language is advantageous, but it is neither the only source of, nor a guarantee for, creating common ground and understanding. Communicative resources other than spoken language include, for example, body language, embodiment, artefacts and time. Furthermore, forms of communication are not static but develop, change and are created over time. The ability to communicate is thus not something that one has or has not, but something that is situationally and collaboratively created. To facilitate this, time and familiarity are central resources, and the results indicate the importance of continuity in interpersonal relations.

  20. Language Brokering in Latino Families: Direct Observations of Brokering Patterns, Parent-Child Interactions, and Relationship Quality

    OpenAIRE

    Straits, Kee J. E.

    2010-01-01

    With the growing percentage of immigrant families in the USA, language transition is a common immigrant experience and can occur rapidly from generation to generation within a family. Child language brokering appears to occur within minority language families as one way of negotiating language and cultural differences; however, the phenomenon of children translating or mediating language interactions for parents has previously been hypothesized to contribute to negative outcomes for children,...

  1. Second language experience modulates functional brain network for the native language production in bimodal bilinguals.

    Science.gov (United States)

    Zou, Lijuan; Abutalebi, Jubin; Zinszer, Benjamin; Yan, Xin; Shu, Hua; Peng, Danling; Ding, Guosheng

    2012-09-01

    The functional brain network of a bilingual's first language (L1) plays a crucial role in shaping that of his or her second language (L2). However, it is less clear how L2 acquisition changes the functional network of L1 processing in bilinguals. In this study, we demonstrate that in bimodal (Chinese spoken-sign) bilinguals, the functional network supporting L1 production (spoken language) has been reorganized to accommodate the network underlying L2 production (sign language). Using functional magnetic resonance imaging (fMRI) and a picture naming task, we find greater recruitment of the right supramarginal gyrus (RSMG), the right superior temporal gyrus (RSTG), and the right superior occipital gyrus (RSOG) for bilingual speakers versus monolingual speakers during L1 production. In addition, our second experiment reveals that these regions reflect either automatic activation of L2 (RSOG) or extra cognitive coordination (RSMG and RSTG) between both languages during L1 production. The functional connectivity between these regions, as well as between other regions that are L1- or L2-specific, is enhanced during L1 production in bimodal bilinguals as compared to their monolingual peers. These findings suggest that L1 production in bimodal bilinguals involves an interaction between L1 and L2, supporting the claim that learning a second language does, in fact, change the functional brain network of the first language. Copyright © 2012 Elsevier Inc. All rights reserved.
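    Functional connectivity between two regions of interest is commonly quantified as the correlation between their fMRI time series. The sketch below illustrates that general idea with invented signals; it is not the study's data or analysis pipeline, and the ROI names are borrowed from the abstract purely as labels.

```python
# Pearson correlation between two hypothetical ROI time series, the usual
# basis of a functional connectivity estimate (invented numbers, not the
# study's data or preprocessing pipeline).
import math

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

rsmg = [0.2, 0.5, 0.1, 0.7, 0.4, 0.6]   # hypothetical RSMG signal
rstg = [0.3, 0.6, 0.2, 0.8, 0.3, 0.7]   # hypothetical RSTG signal
print(f"connectivity r = {pearson(rsmg, rstg):.2f}")
```

    A between-group comparison like the one reported would then test whether such r values (usually Fisher z-transformed) differ between bilingual and monolingual participants.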

  2. Grammar Is a System That Characterizes Talk in Interaction.

    Science.gov (United States)

    Ginzburg, Jonathan; Poesio, Massimo

    2016-01-01

Much of contemporary mainstream formal grammar theory is unable to provide analyses for language as it occurs in actual spoken interaction. Its analyses are developed for a cleaned-up version of language which omits the disfluencies, non-sentential utterances, gestures, and many other phenomena that are ubiquitous in spoken language. Using evidence from linguistics, conversation analysis, multimodal communication, psychology, language acquisition, and neuroscience, we show that these aspects of language use are rule-governed in much the same way as phenomena captured by conventional grammars. Furthermore, we argue that over the past few years some of the tools required to provide a precise characterization of such phenomena have begun to emerge in theoretical and computational linguistics; hence, there is no reason for treating them as "second class citizens" other than pre-theoretical assumptions about what should fall under the purview of grammar. Finally, we suggest that grammar formalisms covering such phenomena would provide a better foundation not just for linguistic analysis of face-to-face interaction, but also for sister disciplines, such as research on spoken dialogue systems and/or psychological work on language acquisition.

  3. Complex sentences in sign languages: Modality, typology, discourse

    NARCIS (Netherlands)

    Pfau, R.; Steinbach, M.; Pfau, R.; Steinbach, M.; Herrmann, A.

    2016-01-01

    Sign language grammars, just like spoken language grammars, generally provide various means to generate different kinds of complex syntactic structures including subordination of complement clauses, adverbial clauses, or relative clauses. Studies on various sign languages have revealed that sign

  4. Regional Languages on Wikipedia. Venetian Wikipedia’s user interaction over time

    OpenAIRE

    Zelenkauskaite, Asta; Massa, Paolo

    2012-01-01

Given that little is known about regional language user interaction practices on Wikipedia, this study analyzed the content creation process, user social interaction, and exchanged content over the course of the existence of the Venetian Wikipedia. Content of and user interactions over time on the Venetian Wikipedia exhibit practices shared within larger Wikipedia communities and display behaviors that are pertinent to this specific community. Shared practices with other Wikipedias (e.g. English Wikiped...

  5. ATTILA 2 S. A technical and interactive test language for architecture allowing simultaneity

    International Nuclear Information System (INIS)

    Batllo, M.

    1980-01-01

The name ATTILA 2 S is inspired by ATLAS, the test language adopted by the United States Department of Defense (D.O.D.), which could not be implemented on our installation. ATTILA 2 S is principally characterized by its technical vocabulary (P.O.L.), its interactivity, and its simultaneity with the main job (multiprogramming and multiprocessing allowed by a multiprocessor architecture). This language has been developed for the Paris C.R.T. system (photograph analysis system) on a Control Data Cyber 72 computer.

  6. Reflection of society and language interaction in Internet-discourse

    Directory of Open Access Journals (Sweden)

    Nefedov Igor Vladislavovich

    2015-09-01

The article attempts to show how extralinguistic factors condition the active use in online discourse of the lexeme maidan and of words related to it through word-building, as well as occasional paronomasia with emotionally evaluative meaning. The lexeme maidan has in recent years become one of the most important discursive phenomena within the new language situation. The events of late 2013 and early 2014 led to a new political confrontation in Ukraine and, as a consequence, to the activation of the word maidan. Analysis of linguistic resources represented in online discourse suggests that the semantic network of the lexeme has changed considerably: new, contextually conditioned lexical meanings have appeared, some of the old meanings have moved to the periphery, and others have acquired a very narrow scope of usage. In online discourse, the language picture of the world is represented by a large number of new words and by the intensified use of words long established in the lexical system. Many of these words have negative semantics and colloquial pejorative and derogatory overtones. This is due to extralinguistic factors: political events in the life of Ukrainian society at the present stage.

  7. Nuffield Early Language Intervention: Evaluation Report and Executive Summary

    Science.gov (United States)

    Sibieta, Luke; Kotecha, Mehul; Skipp, Amy

    2016-01-01

    The Nuffield Early Language Intervention is designed to improve the spoken language ability of children during the transition from nursery to primary school. It is targeted at children with relatively poor spoken language skills. Three sessions per week are delivered to groups of two to four children starting in the final term of nursery and…

  8. Music and Language Syntax Interact in Broca’s Area: An fMRI Study

    Science.gov (United States)

    Kunert, Richard; Willems, Roel M.; Casasanto, Daniel; Patel, Aniruddh D.; Hagoort, Peter

    2015-01-01

    Instrumental music and language are both syntactic systems, employing complex, hierarchically-structured sequences built using implicit structural norms. This organization allows listeners to understand the role of individual words or tones in the context of an unfolding sentence or melody. Previous studies suggest that the brain mechanisms of syntactic processing may be partly shared between music and language. However, functional neuroimaging evidence for anatomical overlap of brain activity involved in linguistic and musical syntactic processing has been lacking. In the present study we used functional magnetic resonance imaging (fMRI) in conjunction with an interference paradigm based on sung sentences. We show that the processing demands of musical syntax (harmony) and language syntax interact in Broca’s area in the left inferior frontal gyrus (without leading to music and language main effects). A language main effect in Broca’s area only emerged in the complex music harmony condition, suggesting that (with our stimuli and tasks) a language effect only becomes visible under conditions of increased demands on shared neural resources. In contrast to previous studies, our design allows us to rule out that the observed neural interaction is due to: (1) general attention mechanisms, as a psychoacoustic auditory anomaly behaved unlike the harmonic manipulation, (2) error processing, as the language and the music stimuli contained no structural errors. The current results thus suggest that two different cognitive domains—music and language—might draw on the same high level syntactic integration resources in Broca’s area. PMID:26536026

  9. The languages of the world

    National Research Council Canada - National Science Library

    Katzner, Kenneth

    2002-01-01

... on populations and the numbers of people speaking each language. Features include: nearly 600 languages identified as to where they are spoken and the family to which they belong; over 200 languages individually described, with sample passages and English translation; fascinating insights into the history and development of individual languages; a...

  10. Information Structure in Sign Languages

    NARCIS (Netherlands)

    Kimmelman, V.; Pfau, R.; Féry, C.; Ishihara, S.

    2016-01-01

    This chapter demonstrates that the Information Structure notions Topic and Focus are relevant for sign languages, just as they are for spoken languages. Data from various sign languages reveal that, across sign languages, Information Structure is encoded by syntactic and prosodic strategies, often

  11. Becoming "Spanish Learners": Identity and Interaction among Multilingual Children in a Spanish-English Dual Language Classroom

    Science.gov (United States)

    Martínez, Ramón Antonio; Durán, Leah; Hikida, Michiko

    2017-01-01

    This article explores the interactional co-construction of identities among two first-grade students learning Spanish as a third language in a Spanish-English dual language classroom. Drawing on ethnographic and interactional data, the article focuses on a single interaction between these two "Spanish learners" and two of their…

  12. Communicative Aspects of Definitions in Classroom Interaction: Learning to Define in Class for First and Second Language Learners

    Science.gov (United States)

    Temmerman, Martina

    2009-01-01

    This paper studies the interactive structure and the interactive meaning of definitions in primary school classroom interaction. The classes that were chosen are classes which consisted solely or for a large part of second language learners, as definitions might have a special importance for them in their second language acquisition. Three…

  13. Stability in Chinese and Malay heritage languages as a source of divergence

    NARCIS (Netherlands)

    Aalberse, S.; Moro, F.; Braunmüller, K.; Höder, S.; Kühl, K.

    2014-01-01

    This article discusses Malay and Chinese heritage languages as spoken in the Netherlands. Heritage speakers are dominant in another language and use their heritage language less. Moreover, they have qualitatively and quantitatively different input from monolinguals. Heritage languages are often

  14. Stability in Chinese and Malay heritage languages as a source of divergence

    NARCIS (Netherlands)

    Aalberse, S.; Moro, F.R.; Braunmüller, K.; Höder, S.; Kühl, K.

    2015-01-01

    This article discusses Malay and Chinese heritage languages as spoken in the Netherlands. Heritage speakers are dominant in another language and use their heritage language less. Moreover, they have qualitatively and quantitatively different input from monolinguals. Heritage languages are often

  15. Interactive Digital Kitchen: The Impact on Language Learning

    Science.gov (United States)

    Ishak, Nor Fadzlinda; Seedhouse, Paul

    2012-01-01

    This study aims to investigate the usability of a newly developed technology--the Digital Kitchen--as compared to a normal everyday kitchen to teach English vocabulary. This interactive kitchen which was first developed to help people with dementia is equipped with sensors and different wireless communication technologies which allows it to give…

  16. Interactive Media to Support Language Acquisition for Deaf Students

    Science.gov (United States)

    Parton, Becky Sue; Hancock, Robert; Crain-Dorough, Mindy; Oescher, Jeff

    2009-01-01

    Tangible computing combines digital feedback with physical interactions - an important link for young children. Through the use of Radio Frequency Identification (RFID) technology, a real-world object (i.e. a chair) or a symbolic toy (i.e. a stuffed bear) can be tagged so that students can activate multimedia learning modules automatically. The…

  17. Parent-Child Interaction Therapy (PCIT) in school-aged children with specific language impairment.

    Science.gov (United States)

    Allen, Jessica; Marshall, Chloë R

    2011-01-01

Parents play a critical role in their child's language development. Therefore, advising parents of a child with language difficulties on how to facilitate their child's language might benefit the child. Parent-Child Interaction Therapy (PCIT) has been developed specifically for this purpose. In PCIT, the speech-and-language therapist (SLT) works collaboratively with parents, altering interaction styles to make interaction more appropriate to their child's level of communicative needs. This study investigates the effectiveness of PCIT in 8-10-year-old children with specific language impairment (SLI) in the expressive domain. It aimed to identify whether PCIT had any significant impact on the following communication parameters of the child: verbal initiations, verbal and non-verbal responses, mean length of utterance (MLU), and proportion of child-to-parent utterances. Sixteen children with SLI and their parents were randomly assigned to two groups: treated or delayed treatment (control). The treated group took part in PCIT over a 4-week block, and then returned to the clinic for a final session after a 6-week consolidation period with no input from the therapist. The treated and control group were assessed in terms of the different communication parameters at three time points: pre-therapy, post-therapy (after the 4-week block) and at the final session (after the consolidation period), through video analysis. It was hypothesized that all communication parameters would significantly increase in the treated group over time and that no significant differences would be found in the control group. All the children in the treated group made language gains during spontaneous interactions with their parents. In comparison with the control group, PCIT had a positive effect on three of the five communication parameters: verbal initiations, MLU and the proportion of child-to-parent utterances. There was a marginal effect on verbal responses, and a trend towards such an effect

  18. Dynamical Integration of Language and Behavior in a Recurrent Neural Network for Human-Robot Interaction.

    Science.gov (United States)

    Yamada, Tatsuro; Murata, Shingo; Arie, Hiroaki; Ogata, Tetsuya

    2016-01-01

To work cooperatively with humans by using language, robots must not only acquire a mapping between language and their behavior but also autonomously utilize the mapping in appropriate contexts of interactive tasks online. To this end, we propose a novel learning method linking language to robot behavior by means of a recurrent neural network. In this method, the network learns from correct examples of the imposed task that are given not as explicitly separated sets of language and behavior but as sequential data constructed from the actual temporal flow of the task. By doing this, the internal dynamics of the network models both language-behavior relationships and the temporal patterns of interaction. Here, "internal dynamics" refers to the time development of the system defined on the fixed-dimensional space of the internal states of the context layer. Thus, in the execution phase, by constantly representing where in the interaction context it is as its current state, the network autonomously switches between recognition and generation phases without any explicit signs and utilizes the acquired mapping in appropriate contexts. To evaluate our method, we conducted an experiment in which a robot generates appropriate behavior responding to a human's linguistic instruction. After learning, the network actually formed an attractor structure representing both language-behavior relationships and the task's temporal pattern in its internal dynamics. In the dynamics, language-behavior mapping was achieved by a branching structure. Repetition of the human's instruction and the robot's behavioral response was represented as a cyclic structure, and waiting for a subsequent instruction was represented as a fixed-point attractor. Thanks to this structure, the robot was able to interact online with a human concerning the given task by autonomously switching phases.
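The abstract above describes a recurrent network that learns language-behavior mappings from one sequential stream reflecting the temporal flow of the task, rather than from separated language and behavior datasets. The following is a minimal sketch of that idea only: the toy vocabulary, network size, and next-token training scheme are my own illustrative assumptions, not the authors' actual architecture or data.

```python
import numpy as np

# Elman-style recurrent network trained on interleaved
# "instruction then behavior" token streams (toy illustration).
rng = np.random.default_rng(0)

VOCAB = ["<wait>", "left", "right", "act-left", "act-right"]
IDX = {w: i for i, w in enumerate(VOCAB)}
V, H = len(VOCAB), 16

# Training sequences follow the temporal flow of the task: language
# ("left"/"right") and behavior ("act-left"/"act-right") are not
# separated but form one stream that returns to a waiting state.
SEQS = [
    ["<wait>", "left", "act-left", "<wait>"],
    ["<wait>", "right", "act-right", "<wait>"],
]

Wxh = rng.normal(0, 0.1, (H, V))   # input -> hidden
Whh = rng.normal(0, 0.1, (H, H))   # context (previous hidden) -> hidden
Why = rng.normal(0, 0.1, (V, H))   # hidden -> output logits

def one_hot(i):
    x = np.zeros(V)
    x[i] = 1.0
    return x

def run_epoch(lr=0.1):
    """One pass of backprop-through-time over all sequences; returns total loss."""
    total = 0.0
    for seq in SEQS:
        ids = [IDX[w] for w in seq]
        hs, xs, ps = [np.zeros(H)], [], []
        # forward: at each step, predict the next token in the stream
        for t in range(len(ids) - 1):
            x = one_hot(ids[t])
            h = np.tanh(Wxh @ x + Whh @ hs[-1])
            logits = Why @ h
            p = np.exp(logits - logits.max())
            p /= p.sum()
            xs.append(x); hs.append(h); ps.append(p)
            total -= np.log(p[ids[t + 1]])  # cross-entropy on next token
        # backward through time
        dWxh = np.zeros_like(Wxh); dWhh = np.zeros_like(Whh)
        dWhy = np.zeros_like(Why); dh_next = np.zeros(H)
        for t in reversed(range(len(ids) - 1)):
            dy = ps[t].copy()
            dy[ids[t + 1]] -= 1.0
            dWhy += np.outer(dy, hs[t + 1])
            dh = Why.T @ dy + dh_next
            dz = (1 - hs[t + 1] ** 2) * dh      # tanh derivative
            dWxh += np.outer(dz, xs[t])
            dWhh += np.outer(dz, hs[t])
            dh_next = Whh.T @ dz
        for W, dW in ((Wxh, dWxh), (Whh, dWhh), (Why, dWhy)):
            W -= lr * dW
    return total

first = run_epoch()
for _ in range(300):
    last = run_epoch()
print(first > last)  # loss decreases as the network learns the task flow
```

Because instruction and behavior tokens share one stream that returns to a waiting state, the falling loss shows the network picking up both the language-behavior mapping and the cyclic task structure, a toy analogue of the branching, cyclic, and fixed-point dynamics the abstract reports.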

  19. Dynamical Integration of Language and Behavior in a Recurrent Neural Network for Human-Robot Interaction

    Directory of Open Access Journals (Sweden)

    Tatsuro Yamada

    2016-07-01

To work cooperatively with humans by using language, robots must not only acquire a mapping between language and their behavior but also autonomously utilize the mapping in appropriate contexts of interactive tasks online. To this end, we propose a novel learning method linking language to robot behavior by means of a recurrent neural network. In this method, the network learns from correct examples of the imposed task that are given not as explicitly separated sets of language and behavior but as sequential data constructed from the actual temporal flow of the task. By doing this, the internal dynamics of the network models both language-behavior relationships and the temporal patterns of interaction. Here, "internal dynamics" refers to the time development of the system defined on the fixed-dimensional space of the internal states of the context layer. Thus, in the execution phase, by constantly representing where in the interaction context it is as its current state, the network autonomously switches between recognition and generation phases without any explicit signs and utilizes the acquired mapping in appropriate contexts. To evaluate our method, we conducted an experiment in which a robot generates appropriate behavior responding to a human's linguistic instruction. After learning, the network actually formed an attractor structure representing both language-behavior relationships and the task's temporal pattern in its internal dynamics. In the dynamics, language-behavior mapping was achieved by a branching structure. Repetition of the human's instruction and the robot's behavioral response was represented as a cyclic structure, and waiting for a subsequent instruction was represented as a fixed-point attractor. Thanks to this structure, the robot was able to interact online with a human concerning the given task by autonomously switching phases.

  20. Language contact phenomena in the language use of speakers of German descent and the significance of their language attitudes

    Directory of Open Access Journals (Sweden)

    Ries, Veronika

    2014-03-01

Within the scope of my investigation of language use and language attitudes among People of German Descent from the USSR, I regularly find various language contact phenomena, such as viel bliny habn=wir gbackt (engl.: 'we cooked lots of pancakes') (cf. Ries 2011). The aim of the analysis is to examine both language use with regard to different forms of language contact and the language attitudes of the observed speakers. To be able to analyse both of these aspects and synthesize them, different types of data are required. The research is based on the following two data types: everyday conversations and interviews. In addition, the individual speakers' biographies are a key part of the analysis, because they allow one to draw conclusions about language attitudes and use. This qualitative research is based on morpho-syntactic and interactional linguistic analysis of authentic spoken data. The data come from a corpus compiled and edited by myself. Being a member of the examined group allowed me to build up an authentic corpus. The natural language use is analysed from the perspective of different language contact phenomena and the potential functions of language alternations. One central issue is: how do speakers use the languages available to them, German and Russian? Structural characteristics such as code switching and the discursive motives for these phenomena are discussed as results, together with the socio-cultural background of the individual speaker. Within the scope of this article, I present the data and results of one speaker as an example.

  1. Digital gaming and second language development: Japanese learners' interactions in an MMORPG

    Directory of Open Access Journals (Sweden)

    Mark Peterson

    2011-04-01

Massively multiplayer online role-playing games (MMORPGs) are identified as valuable arenas for language learning, as they provide access to contexts and types of interaction that are held to be beneficial in second language acquisition research. This paper will describe the development and key features of these games, and explore claims made regarding their value as environments for language learning. The discussion will then examine current research. This is followed by an analysis of the findings from an experimental qualitative study that investigates the interaction and attitudes of Japanese English as a foreign language learners who participated in MMORPG-based game play. The analysis draws attention to the challenging nature of the communication environment and the need for learner training. The findings indicate that system management issues, proficiency levels, the operation of affective factors, and prior gaming experiences appeared to influence participation. The data show that for the intermediate learners who were novice users, the interplay of these factors appeared to restrict opportunities to engage in beneficial forms of interaction. In a positive finding, it was found that the intermediate and advanced level participants effectively utilized both adaptive and transfer discourse management strategies. Analysis reveals they took the lead in managing their discourse, and actively engaged in collaborative social interaction involving dialog in the target language. Participant feedback suggests that the real-time, computer-based nature of the interaction provided benefits. These include access to an engaging social context, enjoyment, exposure to new vocabulary, reduced anxiety, and valuable opportunities to practice using a foreign language. This paper concludes by identifying areas of interest for future research.

  2. Effects of Auditory and Visual Priming on the Identification of Spoken Words.

    Science.gov (United States)

    Shigeno, Sumi

    2017-04-01

    This study examined the effects of preceding contextual stimuli, either auditory or visual, on the identification of spoken target words. Fifty-one participants (29% males, 71% females; mean age = 24.5 years, SD = 8.5) were divided into three groups: no context, auditory context, and visual context. All target stimuli were spoken words masked with white noise. The relationships between the context and target stimuli were as follows: identical word, similar word, and unrelated word. Participants presented with context experienced a sequence of six context stimuli in the form of either spoken words or photographs. Auditory and visual context conditions produced similar results, but the auditory context aided word identification more than the visual context in the similar word relationship. We discuss these results in the light of top-down processing, motor theory, and the phonological system of language.

  3. Language

    DEFF Research Database (Denmark)

    Sanden, Guro Refsum

    2016-01-01

    Purpose: – The purpose of this paper is to analyse the consequences of globalisation in the area of corporate communication, and investigate how language may be managed as a strategic resource. Design/methodology/approach: – A review of previous studies on the effects of globalisation on corporate...... communication and the implications of language management initiatives in international business. Findings: – Efficient language management can turn language into a strategic resource. Language needs analyses, i.e. linguistic auditing/language check-ups, can be used to determine the language situation...

  4. Using a Humanoid Robot to Develop a Dialogue-Based Interactive Learning Environment for Elementary Foreign Language Classrooms

    Science.gov (United States)

    Chang, Chih-Wei; Chen, Gwo-Dong

    2010-01-01

    Elementary school is the critical stage during which the development of listening comprehension and oral abilities in language acquisition occur, especially with a foreign language. However, the current foreign language instructors often adopt one-way teaching, and the learning environment lacks any interactive instructional media with which to…

  5. Language-Building Activities and Interaction Variations with Mixed-Ability ESL University Learners in a Content-Based Course

    Science.gov (United States)

    Serna Dimas, Héctor Manuel; Ruíz Castellanos, Erika

    2014-01-01

The preparation of both language-building activities and a variety of teacher/student interaction patterns increases both oral language participation and content learning in a course of manual therapy with mixed-language-ability students. In this article, the researchers describe their collaboration in a content-based course in English with English…

  6. The Practical Side of Working with Parent-Child Interaction Therapy with Preschool Children with Language Impairments

    Science.gov (United States)

    Klatte, Inge S.; Roulstone, Sue

    2016-01-01

    A common early intervention approach for preschool children with language problems is parent-child interaction therapy (PCIT). PCIT has positive effects for children with expressive language problems. It appears that speech and language therapists (SLTs) conduct this therapy in many different ways. This might be because of the variety of…

  7. INDIVIDUAL ACCOUNTABILITY IN COOPERATIVE LEARNING: MORE OPPORTUNITIES TO PRODUCE SPOKEN ENGLISH

    Directory of Open Access Journals (Sweden)

    Puji Astuti

    2017-05-01

The contribution of cooperative learning (CL) in promoting second and foreign language learning has been widely acknowledged. Little scholarly attention, however, has been given to revealing how this teaching method works and promotes learners' improved communicative competence. This qualitative case study explores the important role that individual accountability in CL plays in giving English as a Foreign Language (EFL) learners in Indonesia the opportunity to use the target language of English. While individual accountability is a principle of and one of the activities in CL, it is currently understudied; thus little is known about how it enhances EFL learning. This study aims to address this gap by conducting a constructivist grounded theory analysis of participant observation, in-depth interview, and document analysis data drawn from two secondary school EFL teachers, 77 students in the observed classrooms, and four focal students. The analysis shows that through individual accountability in CL, the EFL learners had opportunities to use the target language, which may have contributed to the attainment of communicative competence, the goal of the EFL instruction. More specifically, compared to the use of conventional group work in the observed classrooms, through the activities of individual accountability in CL, i.e., performances and peer interaction, the EFL learners had more opportunities to use spoken English. The present study recommends that teachers, especially those new to CL, follow the preset procedure of selected CL instructional strategies or structures in order to recognize the activities within individual accountability in CL and understand how these activities benefit students.

  8. Guest Comment: Universal Language Requirement.

    Science.gov (United States)

    Sherwood, Bruce Arne

    1979-01-01

Explains that reading English among scientists is almost universal; however, there are enormous problems with spoken English. Advocates the use of Esperanto as a viable alternative, and as a language requirement for graduate work. (GA)

  9. Using language for social interaction: Communication mechanisms promote recovery from chronic non-fluent aphasia.

    Science.gov (United States)

    Stahl, Benjamin; Mohr, Bettina; Dreyer, Felix R; Lucchese, Guglielmo; Pulvermüller, Friedemann

    2016-12-01

Clinical research highlights the importance of massed practice in the rehabilitation of chronic post-stroke aphasia. However, while necessary, massed practice may not be sufficient for ensuring progress in speech-language therapy. Motivated by recent advances in neuroscience, it has been claimed that using language as a tool for communication and social interaction leads to synergistic effects in left perisylvian eloquent areas. Here, we conducted a crossover randomized controlled trial to determine the influence of communicative language function on the outcome of intensive aphasia therapy. Eighteen individuals with left-hemisphere lesions and chronic non-fluent aphasia each received two types of training in counterbalanced order: (i) Intensive Language-Action Therapy (ILAT, an extended form of Constraint-Induced Aphasia Therapy) embedding verbal utterances in the context of communication and social interaction, and (ii) Naming Therapy focusing on speech production per se. Both types of training were delivered with the same high intensity (3.5 h per session) and duration (six consecutive working days), with therapy materials and number of utterances matched between treatment groups. A standardized aphasia test battery revealed significantly improved language performance with ILAT, independent of when this method was administered. In contrast, Naming Therapy tended to benefit language performance only when given at the onset of the treatment, but not when applied after previous intensive training. The current results challenge the notion that massed practice alone promotes recovery from chronic post-stroke aphasia. Instead, our results demonstrate that using language for communication and social interaction increases the efficacy of intensive aphasia therapy. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.

  10. UNDERSTANDING TENOR IN SPOKEN TEXTS IN YEAR XII ENGLISH TEXTBOOK TO IMPROVE THE APPROPRIACY OF THE TEXTS

    Directory of Open Access Journals (Sweden)

    Noeris Meristiani

    2011-07-01

The goal of English Language Teaching is communicative competence. To reach this goal, students should be supplied with good model texts, and these texts should reflect appropriate language use. By analyzing the context of situation, with a focus on tenor, the meanings constructed to build relationships among the interactants in spoken texts can be unfolded. This study aims at investigating the interpersonal relations (tenor) of the interactants in the conversation texts, as well as the appropriacy of their realization in the given contexts. The study was conducted as discourse analysis using a descriptive qualitative method. Eight conversation texts, which function as examples in five chapters of a textbook, were examined. The data were analyzed lexicogrammatically, described, and interpreted contextually. Then, the realization of the tenor of the texts was further analyzed in terms of appropriacy to suggest improvement. The results of the study show that the tenor indicates relationships between friend-friend, student-student, questioner-respondent, mother-son, and teacher-student; power is both equal and unequal; and the social distances show frequent, relatively frequent, and relatively low contact, high and low affective involvement, and the use of informal, relatively informal, relatively formal, and formal language. There are also indications of inappropriate tenor realization in all texts, which should be improved in the use of degree of formality and in the realization of societal roles, status, and affective involvement. Keywords: context of situation, tenor, appropriacy.

  11. Emotional and interactional prosody across animal communication systems: A comparative approach to the emergence of language

    Directory of Open Access Journals (Sweden)

    Piera Filippi

    2016-09-01

Across a wide range of animal taxa, prosodic modulation of the voice can express emotional information and is used to coordinate vocal interactions between multiple individuals. Within a comparative approach to animal communication systems, I hypothesize that the ability for emotional and interactional prosody (EIP) paved the way for the evolution of linguistic prosody (and perhaps also of music) and continues to play a vital role in the acquisition of language. In support of this hypothesis, I review three research fields: (i) empirical studies on the adaptive value of EIP in nonhuman primates, mammals, songbirds, anurans, and insects; (ii) the beneficial effects of EIP in scaffolding language learning and social development in human infants; and (iii) the cognitive relationship between linguistic prosody and the ability for music, which has often been identified as the evolutionary precursor of language.

  12. Language processes and the interaction in the fraction division resolution by high school students

    Directory of Open Access Journals (Sweden)

    María Helena PALMA DE OLIVEIRA

    2017-12-01

    This study describes and discusses the language processes and interactions that occurred in a group of high school students at a public school in the city of São Paulo during an activity on the resolution of fraction division. From a historical-cultural perspective, mediation through language and the social interaction that took place in the dialogues during the resolution are considered constitutive of mathematical reasoning. The language processes expressed in the dialogues between the participants revealed difficulties marked by repetitive, mechanical explanations and a poor mathematical background. Nevertheless, and most importantly, they created possibilities, at different levels for each participant, of movement and change in their comprehension of fraction division, which was required to solve powers with a rational base and a negative exponent.

  13. ESL students learning biology: The role of language and social interactions

    Science.gov (United States)

    Jaipal, Kamini

    This study explored three aspects related to ESL students in a mainstream grade 11 biology classroom: (1) the nature of students' participation in classroom activities, (2) the factors that enhanced or constrained ESL students' engagement in social interactions, and (3) the role of language in the learning of science. Ten ESL students were observed over an eight-month period in this biology classroom. Data were collected using qualitative research methods such as participant observation, audio-recordings of lessons, field notes, semi-structured interviews, short lesson recall interviews and students' written work. The study was framed within sociocultural perspectives, particularly the social constructivist perspectives of Vygotsky (1962, 1978) and Wertsch (1991). Data were analysed with respect to the three research aspects. Firstly, the findings showed that ESL students preferred and exhibited a variety of participation practices that ranged from personal-individual to socio-interactive in nature. Both personal-individual and socio-interactive practices appeared to support science and language learning. Secondly, the findings indicated that ESL students' engagement in classroom social interactions was most likely influenced by the complex interactions between a number of competing factors at the individual, interpersonal and community/cultural levels (Rogoff, Radziszewska, & Masiello, 1995). In this study, six factors that appeared to enhance or constrain ESL students' engagement in classroom social interactions were identified. These factors were socio-cultural factors, prior classroom practice, teaching practices, affective factors, English language proficiency, and participation in the research project. Thirdly, the findings indicated that language played a significant mediational role in ESL students' learning of science. The data revealed that the learning of science terms and concepts can be explained by a functional model of language that includes: (1

  14. The use of videoconferencing to support multimodal interaction in an online language classroom

    OpenAIRE

    Hampel, Regine; Stickler, Ursula

    2012-01-01

    The introduction of virtual learning environments has made new tools available that have the potential to support learner communication and interaction, thus aiding second language acquisition both from a psycholinguistic and a sociocultural point of view. This article focuses on the use of videoconferencing in the context of a larger exploratory study to find out how interaction was influenced by the affordances of the environment. Taking a mainly qualitative approach, the authors analysed t...

  15. Moving conceptualizations of language and literacy in SLA

    DEFF Research Database (Denmark)

    Laursen, Helle Pia

    Moving conceptualizations of language and literacy in SLA. In this colloquium, we aim to problematize the concepts of language and literacy in the field that is termed "second language" research and seek ways to critically connect the terms, considering current-day language use and conceptualizations of language and literacy in research on (second) language acquisition. When examining children's first language acquisition, spoken language has been the primary concern in scholarship: a child acquires oral language first and written language follows later, i.e. language precedes literacy. On the other hand, many second or foreign language learners learn mostly through written language or learn spoken and written language at the same time. Thus the connections between spoken and written (and visual) modalities, i.e. between language and literacy, are complex in research on language acquisition…

  16. Fostering Reflective Writing and Interactive Exchange through Blogging in an Advanced Language Course

    Science.gov (United States)

    Lee, Lina

    2010-01-01

    Blog technology is a potential medium for encouraging reflective writing through self-expression and interactive exchange through social networking. This paper reports on a study using blogs as out-of-class assignments for the development of learners' language competence. The study involved seventeen university students at advanced level who kept…

  17. Do maternal interaction and early language predict phonological awareness in 3- to 4-year-olds?

    NARCIS (Netherlands)

    Silvén, M.; Niemi, P.; Voeten, M.J.M.

    2002-01-01

    The present study reports longitudinal data on how phonological awareness is affected by mother-child interaction and the child's language development. Sixty-six Finnish children were videotaped at 12 and 24 months of age with their mother, during joint play episodes, to assess maternal sensitivity

  18. Developing Interactional Competence by Using TV Series in "English as an Additional Language" Classrooms

    Science.gov (United States)

    Sert, Olcay

    2009-01-01

    This paper uses a combined methodology to analyse the conversations in supplementary audio-visual materials to be implemented in language teaching classrooms in order to enhance the Interactional Competence (IC) of the learners. Based on a corpus of 90,000 words (the Coupling Corpus), the author tries to reveal the potential of using TV series in …

  19. Enhancing Children's Language Learning and Cognition Experience through Interactive Kinetic Typography

    Science.gov (United States)

    Lau, Newman M. L.; Chu, Veni H. T.

    2015-01-01

    This research aimed at investigating the method of using kinetic typography and interactive approach to conduct a design experiment for children to learn vocabularies. Typography is the unique art and technique of arranging type in order to make language visible. By adding animated movement to characters, kinetic typography expresses language…

  20. Music and Sign Language to Promote Infant and Toddler Communication and Enhance Parent-Child Interaction

    Science.gov (United States)

    Colwell, Cynthia; Memmott, Jenny; Meeker-Miller, Anne

    2014-01-01

    The purpose of this study was to determine the efficacy of using music and/or sign language to promote early communication in infants and toddlers (6-20 months) and to enhance parent-child interactions. Three groups used for this study were pairs of participants (care-giver(s) and child) assigned to each group: 1) Music Alone 2) Sign Language…

  1. Language Learner/Native Speaker Interactions: Exploring Adaptability in Intercultural Encounters

    Science.gov (United States)

    Chamberlin-Quinlisk, Carla

    2010-01-01

    Diversity and intercultural awareness initiatives are increasingly common at institutions of higher education in the USA. Although students recognize and appreciate the diversity of their surroundings, studies show that intercultural interactions at the social level are lacking. This study focuses on how English language learners, multilingual…

  2. Language of the Legal Process: An Analysis of Interactions in the "Syariah" Court

    Science.gov (United States)

    Hashim, Azirah; Hassan, Norizah

    2011-01-01

    This study examines interactions from trials in the Syariah court in Malaysia. It focuses on the types of questioning, the choice of language and the linguistic resources employed in this particular context. In the discourse of law, questioning has been a prominent concern particularly in cross-examination and can be considered one of the key…

  3. Instructional Interaction Development and Its Effects in Online Foreign Language Learning

    Science.gov (United States)

    Zhao, Rong

    2014-01-01

    This paper introduced the features of scaffolding to the development of instructional interaction in online foreign language learning, and testified their effects on learners' perceived usefulness, perceived ease of use, sense of community, and continuance intention by the integration of the Technology-Acceptance Model and the Organizational…

  4. Mutually Beneficial Foreign Language Learning: Creating Meaningful Interactions through Video-Synchronous Computer-Mediated Communication

    Science.gov (United States)

    Kato, Fumie; Spring, Ryan; Mori, Chikako

    2016-01-01

    Providing learners of a foreign language with meaningful opportunities for interactions, specifically with native speakers, is especially challenging for instructors. One way to overcome this obstacle is through video-synchronous computer-mediated communication tools such as Skype software. This study reports quantitative and qualitative data from…

  5. Virtual Interaction through Video-Web Communication: A Step towards Enriching and Internationalizing Language Learning Programs

    Science.gov (United States)

    Jauregi, Kristi; Banados, Emerita

    2008-01-01

    This paper describes an intercontinental project with the use of interactive tools, both synchronous and asynchronous, which was set up to internationalize academic learning of Spanish language and culture. The objective of this case study was to investigate whether video-web communication tools can contribute to enriching the quality of foreign…

  6. Facilitating Storybook Interactions between Mothers and Their Preschoolers with Language Impairment.

    Science.gov (United States)

    Crowe, Linda K.; Norris, Janet A.; Hoffman, Paul R.

    2000-01-01

    Three children with language impairment (ages 38 to 41 months) and their mothers participated in a study evaluating a storybook reading process for facilitating mother-child interactions. The complete reading cycle (CRC) involved: (1) attentional vocative, (2) query, (3) response, and (4) feedback. Results indicated changes in mothers' storybook…

  7. Interference of the spoken language on children's writing: cancellation processes of the dental occlusive /d/ and the final vibrant /r/

    Directory of Open Access Journals (Sweden)

    Socorro Cláudia Tavares de Sousa

    2009-01-01

    The present study aims to investigate the influence of spoken language on children's writing in relation to two phenomena: the cancellation of the dental /d/ and of the final vibrant /r/. We designed and applied a research instrument with primary school students in Fortaleza, and analyzed the resulting data with the SPSS software. The results showed that male sex and words of three or more syllables are factors that partially influence the realization of the dependent variable /no/, and that verbs and level of schooling are conditioning factors for the cancellation of the final vibrant /r/.

  8. Speech perception and reading: two parallel modes of understanding language and implications for acquiring literacy naturally.

    Science.gov (United States)

    Massaro, Dominic W

    2012-01-01

    I review 2 seminal research reports published in this journal during its second decade more than a century ago. Given psychology's subdisciplines, they would not normally be reviewed together because one involves reading and the other speech perception. The small amount of interaction between these domains might have limited research and theoretical progress. In fact, the 2 early research reports revealed common processes involved in these 2 forms of language processing. Their illustration of the role of Wundt's apperceptive process in reading and speech perception anticipated descriptions of contemporary theories of pattern recognition, such as the fuzzy logical model of perception. Based on the commonalities between reading and listening, one can question why they have been viewed so differently. It is commonly believed that learning to read requires formal instruction and schooling, whereas spoken language is acquired from birth onward through natural interactions with people who talk. Most researchers and educators believe that spoken language is acquired naturally from birth onward and even prenatally. Learning to read, on the other hand, is not possible until the child has acquired spoken language, reaches school age, and receives formal instruction. If an appropriate form of written text is made available early in a child's life, however, the current hypothesis is that reading will also be learned inductively and emerge naturally, with no significant negative consequences. If this proposal is true, it should soon be possible to create an interactive system, Technology Assisted Reading Acquisition, to allow children to acquire literacy naturally.

  9. Book review. Neurolinguistics. An Introduction to Spoken Language Processing and its Disorders, John Ingram. Cambridge University Press, Cambridge (Cambridge Textbooks in Linguistics) (2007). xxi + 420 pp., ISBN 978-0-521-79640-8 (pb)

    OpenAIRE

    Schiller, N.O.

    2009-01-01

    The present textbook is one of the few recent textbooks in the area of neurolinguistics and will be welcomed by teachers of neurolinguistic courses as well as researchers interested in the topic. Neurolinguistics is a huge area, and the boundaries between psycho- and neurolinguistics are not sharp. Often the term neurolinguistics is used to refer to research involving neuropsychological patients suffering from some sort of language disorder or impairment. Also, the term neuro- rather than psy...

  10. Quality of caregiver-child play interactions with toddlers born preterm and full term: Antecedents and language outcome.

    Science.gov (United States)

    Loi, Elizabeth C; Vaca, Kelsey E C; Ashland, Melanie D; Marchman, Virginia A; Fernald, Anne; Feldman, Heidi M

    2017-12-01

    Preterm birth may leave long-term effects on the interactions between caregivers and children. Language skills are sensitive to the quality of caregiver-child interactions. Compare the quality of caregiver-child play interactions in toddlers born preterm (PT) and full term (FT) at age 22 months (corrected for degree of prematurity) and evaluate the degree of association between caregiver-child interactions, antecedent demographic and language factors, and subsequent language skill. A longitudinal descriptive cohort study. 39 PT and 39 FT toddlers individually matched on sex and socioeconomic status (SES). The outcome measures were dimensions of caregiver-child interactions, rated from a videotaped play session at age 22 months in relation to receptive language assessments at ages 18 and 36 months. Caregiver intrusiveness was greater in the PT than FT group. A composite score of child interactional behaviors was associated with a composite score of caregiver interactional behaviors. The caregiver composite measure was associated with later receptive vocabulary at 36 months. PT-FT group membership did not moderate the association between caregiver interactional behavior and later receptive vocabulary. The quality of caregiver interactional behavior had similar associations with concurrent child interactional behavior and subsequent language outcome in the PT and FT groups. Greater caregiver sensitivity/responsiveness, verbal elaboration, and less intrusiveness support receptive language development in typically developing toddlers and toddlers at risk for language difficulty. Copyright © 2017 Elsevier B.V. All rights reserved.

  11. Remote Data Exploration with the Interactive Data Language (IDL)

    Science.gov (United States)

    Galloy, Michael

    2013-01-01

    A difficulty for many NASA researchers is that the data to analyze is often located remotely from the scientist and is too large to transfer for local analysis. Researchers have developed the Data Access Protocol (DAP) for accessing remote data. Presently one can use DAP from within IDL, but the IDL-DAP interface is both limited and cumbersome. A more powerful and user-friendly interface to DAP for IDL has been developed. Users are able to browse remote data sets graphically, select partial data to retrieve, import that data and make customized plots, and have an interactive IDL command line session simultaneous with the remote visualization. All of these IDL-DAP tools are easily and seamlessly usable by any IDL user. IDL and DAP are both widely used in science, but were not easily used together. The IDL DAP bindings were incomplete and had numerous bugs that prevented their serious use. For example, the existing bindings did not read DAP Grid data, which is the organization of nearly all NASA datasets currently served via DAP. This project uniquely provides a fully featured, user-friendly interface to DAP from IDL, both from the command line and a GUI application. The DAP Explorer GUI application makes browsing a dataset more user-friendly, while also providing the capability to run user-defined functions on specified data. Methods for running remote functions on the DAP server were investigated, and a technique for accomplishing this task was decided upon.
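    The "partial data" retrieval described above is what DAP's constraint expressions provide: array subsets are requested directly in the dataset URL as index ranges, so only the selected hyperslab crosses the network. A minimal sketch in Python of how such a DAP2 subset URL is assembled (the server URL and the "sst" variable name are hypothetical, and this illustrates general DAP2 URL syntax rather than the specific IDL tools the abstract describes):

    ```python
    def dap_subset_url(base_url, variable, *ranges):
        """Build a DAP2 URL requesting a hyperslab subset of one array variable.

        Each range is a (start, stride, stop) tuple, encoded with DAP2's
        [start:stride:stop] constraint-expression syntax appended after ".dods?".
        """
        constraint = variable + "".join(
            "[{}:{}:{}]".format(start, stride, stop) for start, stride, stop in ranges
        )
        return "{}.dods?{}".format(base_url, constraint)

    # Request rows 0-9 and columns 0-19 of a hypothetical "sst" grid.
    url = dap_subset_url("http://example.org/dap/sample", "sst", (0, 1, 9), (0, 1, 19))
    print(url)  # http://example.org/dap/sample.dods?sst[0:1:9][0:1:19]
    ```

    A client then fetches only that URL; the server slices the array before transmission, which is what makes remote exploration of large datasets practical.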

  12. Concurrent word generation and motor performance: further evidence for language-motor interaction.

    Directory of Open Access Journals (Sweden)

    Amy D Rodriguez

    Embodied/modality-specific theories of semantic memory propose that sensorimotor representations play an important role in perception and action. A large body of evidence supports the notion that concepts involving human motor action (i.e., semantic-motor representations) are processed in both language and motor regions of the brain. However, most studies have focused on perceptual tasks, leaving unanswered questions about language-motor interaction during production tasks. Thus, we investigated the effects of shared semantic-motor representations on concurrent language and motor production tasks in healthy young adults, manipulating the semantic task (motor-related vs. nonmotor-related words) and the motor task (i.e., standing still and finger-tapping). In Experiment 1 (n = 20), we demonstrated that motor-related word generation was sufficient to affect postural control. In Experiment 2 (n = 40), we demonstrated that motor-related word generation was sufficient to facilitate word generation and finger tapping. We conclude that engaging semantic-motor representations can have a reciprocal influence on motor and language production. Our study provides additional support for functional language-motor interaction, as well as embodied/modality-specific theories.

  13. Interactions between Bilingual Effects and Language Impairment: Exploring Grammatical Markers in Spanish-Speaking Bilingual Children

    Science.gov (United States)

    Castilla-Earls, Anny P.; Restrepo, María Adelaida; Perez-Leroux, Ana Teresa; Gray, Shelley; Holmes, Paul; Gail, Daniel; Chen, Ziqiang

    2015-01-01

    This study examines the interaction between language impairment and different levels of bilingual proficiency. Specifically, we explore the potential of articles and direct object pronouns as clinical markers of primary language impairment (PLI) in bilingual Spanish-speaking children. The study compared children with PLI and typically developing children (TD) matched on age, English language proficiency, and mother’s education level. Two types of bilinguals were targeted: Spanish-dominant children with intermediate English proficiency (asymmetrical bilinguals, AsyB), and near-balanced bilinguals (BIL). We measured children’s accuracy in the use of direct object pronouns and articles with an elicited language task. Results from this preliminary study suggest language proficiency affects the patterns of use of direct object pronouns and articles. Across language proficiency groups, we find marked differences between TD and PLI, in the use of both direct object pronouns and articles. However, the magnitude of the difference diminishes in balanced bilinguals. Articles appear more stable in these bilinguals and therefore, seem to have a greater potential to discriminate between TD bilinguals from those with PLI. Future studies using discriminant analyses are needed to assess the clinical impact of these findings. PMID:27570320

  14. Conceptual Framework: Development of Interactive Reading Malay Language Learning System (I-ReaMaLLS)

    Directory of Open Access Journals (Sweden)

    Ismail Nurulisma

    2018-01-01

    Reading is very important for accessing knowledge, and reading skills start to develop at the preschool level regardless of the language. At present, many preschool children are still unable to recognize letters or even words, which leads to difficulties in reading. There is therefore a need for intervention in reading to overcome such problems, and technologies have been adopted to enhance learning skills, especially learning to read among preschool children. Phonology is one of the factors to consider in ensuring a smooth transition into reading: a phonological approach enables first-time learners to learn to read a language such as Malay more easily. Learning to read Malay can be supported by multimedia technology to enhance preschool children's learning. Thus, an interactive system is proposed through the development of an interactive reading Malay language learning system, called I-ReaMaLLS. As part of the development of I-ReaMaLLS, this paper focuses on the conceptual framework for developing the system. I-ReaMaLLS is a voice-based system that facilitates preschool learners in learning to read Malay. The conceptual framework for developing I-ReaMaLLS was conceptualized based on an initial study conducted through literature review and observation of preschool children aged 5-6 years. As a result of the initial study, the research objectives were affirmed, which finally contributed to the design of the conceptual framework for the development of I-ReaMaLLS.

  15. FORMATION OF STUDENTS’ FOREIGN LANGUAGE COMPETENCE IN THE INFORMATIONAL FIELD OF CROSS CULTURAL INTERACTION

    Directory of Open Access Journals (Sweden)

    Vitaly Vyacheslavovich Tomin

    2015-09-01

    Knowledge of foreign languages is becoming an integral feature of a competitive personality: the ability to engage in cross-cultural communication and productive cross-cultural interaction, characterized by an adequate degree of tolerance and multi-ethnic competence, and the capacity for cross-cultural adaptation, critical thinking and creativity. However, the concept of foreign language competence so far has no clear, unambiguous definition, which indicates the complexity and diversity of the phenomenon: an integrative, practice-oriented outcome of the wish and ability for intercultural communication. A variety of requirements, conditions, principles, objectives, means and forms of forming foreign language competence are discussed, among which special attention is paid to non-traditional forms of practical training and to the information field in cross-cultural interaction. The feasibility of their application is substantiated, allowing a complex series of educational and teaching tasks to be solved more efficiently. The term «information field» in cross-cultural interaction is clarified: a cross-section of the «sections» of knowledge, skills and experience internally inherent in every individual, arising within a given educational framework and forming a communication channel. Resultative indicators of the formation of foreign language competence and ways to improve its effectiveness are presented.

  16. Classifiers and Plurality: evidence from a deictic classifier language

    Directory of Open Access Journals (Sweden)

    Filomena Sandalo

    2016-12-01

    This paper investigates the semantic contribution of plural morphology and its interaction with classifiers in Kadiwéu. We show that Kadiwéu, a Waikurúan language spoken in South America, is a classifier language similar to Chinese, but classifiers are an obligatory ingredient of all determiner-like elements, such as quantifiers, numerals, and wh-words for arguments. What all elements with classifiers have in common is that they contribute an atomized/individualized interpretation of the NP. Furthermore, this paper revisits the relationship between classifiers and number marking and challenges the common assumption that classifiers and plurals are mutually exclusive.

  17. Syllable Frequency and Spoken Word Recognition: An Inhibitory Effect.

    Science.gov (United States)

    González-Alvarez, Julio; Palomar-García, María-Angeles

    2016-08-01

    Research has shown that syllables play a relevant role in lexical access in Spanish, a shallow language with a transparent syllabic structure. Syllable frequency has been shown to have an inhibitory effect on visual word recognition in Spanish. However, no study has examined the syllable frequency effect on spoken word recognition. The present study tested the effect of the frequency of the first syllable on recognition of spoken Spanish words. A sample of 45 young adults (33 women, 12 men; M = 20.4, SD = 2.8; college students) performed an auditory lexical decision on 128 Spanish disyllabic words and 128 disyllabic nonwords. Words were selected so that lexical and first syllable frequency were manipulated in a within-subject 2 × 2 design, and six additional independent variables were controlled: token positional frequency of the second syllable, number of phonemes, position of lexical stress, number of phonological neighbors, number of phonological neighbors that have higher frequencies than the word, and acoustical durations measured in milliseconds. Decision latencies and error rates were submitted to linear mixed models analysis. Results showed a typical facilitatory effect of the lexical frequency and, importantly, an inhibitory effect of the first syllable frequency on reaction times and error rates. © The Author(s) 2016.

  18. Analyzing the Influence of Language Proficiency on Interactive Book Search Behavior

    DEFF Research Database (Denmark)

    Bogers, Toine; Gäde, Maria; Hall, Mark M.

    2016-01-01

    English content still dominates in many online domains and information systems, despite native English speakers being a minority of their users. However, we know little about how language proficiency influences search behavior in these systems. In this paper, we describe preliminary results from an interactive IR experiment on book search behavior and examine how language skills affect this behavior. A total of 97 users from 21 different countries participated in this experiment, resulting in a rich data set including usage data as well as questionnaire feedback. Although participants reported feeling…

  19. Early relations between language development and the quality of mother-child interaction in very-low-birth-weight children.

    Science.gov (United States)

    Stolt, S; Korja, R; Matomäki, J; Lapinleimu, H; Haataja, L; Lehtonen, L

    2014-05-01

    It is not clearly understood how the quality of early mother-child interaction influences language development in very-low-birth-weight (VLBW) children. We aim to analyze associations between early language and the quality of mother-child interaction, and the predictive value of the features of early mother-child interaction for language development at 24 months of corrected age in VLBW children. A longitudinal prospective follow-up study design was used. The participants were 28 VLBW children and 34 full-term controls. Language development was measured using different methods at 6, 12 and 24 months of age. The quality of mother-child interaction was assessed using the PC-ERA method at 6 and 12 months of age. Associations between the features of early interaction and language development differed between the VLBW and full-term groups. There were no significant correlations between the features of mother-child interaction and language skills when measured at the same age in the VLBW group. Significant longitudinal correlations were detected in the VLBW group, especially when the quality of early interactions was measured at six months and language skills at two years of age. However, when the predictive value of the features of early interactions for later poor language performance was analyzed separately, the features of early interaction predicted language skills in the VLBW group only weakly. Biological factors may influence language development more in VLBW children than in full-term children. The results also underline the role of maternal and dyadic factors in early interactions. Copyright © 2014 Elsevier Ltd. All rights reserved.

  20. Terminology for the body in social interaction, as appearing in papers published in the journal 'Research on Language and Social Interaction', 1987-2013

    DEFF Research Database (Denmark)

    Nevile, Maurice Richard

    2016-01-01

    This is a list of terms referring generally to the body in descriptions and analyses of social interaction, as used by authors in papers published in ROLSI. The list includes over 200 items, grouped according to common phrasing and in alphabetical order. The list was compiled in preparation for the review paper: Nevile, M. (2015). The embodied turn in research on language and social interaction. Research on Language and Social Interaction, 48(2): 121-151.

  1. English language learners with learning disabilities interacting in a science class within an inclusion setting

    Science.gov (United States)

    Ayala, Vivian Luz

    In today's schools there are by far more students identified with learning disabilities (LD) than with any other disability. The U.S. Department of Education reported that in the 1997-98 school year, 38.13% of the students in our nation's schools were students with LD (Smith, Polloway, Patton, & Dowdy, 2001; U.S. Department of Education, 1999). Of those, 1,198,200 are considered ELLs with LD (Baca & Cervantes, 1998). These figures, which represent an increase, evidence the need to provide these students with educational experiences geared to address both their academic and language needs (Ortiz, 1997; Ortiz & Garcia, 1995). English language learners with LD must be provided with experiences in the least restrictive environment (LRE) and must be able to share the same kinds of social and academic experiences as students from the general population (Etscheidt & Bartlett, 1999; Lloyd, Kameenui, & Chard, 1997). The purpose of this research was to conduct a detailed qualitative study of classroom interactions intended to enhance understanding of the science curriculum, in order to foster the understanding of content and facilitate the acquisition of English as a second language (Cummins, 2000; Echevarria, Vogt, & Short, 2000). This study was grounded in the theories of socioconstructivism, second language acquisition, comprehensible input, and classroom interactions. The participants of the study were fourth- and fifth-grade ELLs with LD in a bilingual inclusive elementary school science setting. Data were collected through observations, semi-structured interviews (students and teacher), video and audio taping, field notes, document analysis, and the Classroom Observation Schedule (COS). The transcriptions of the video and audio tapes were coded to highlight emergent patterns in the types of interactions and language used by the participants. The findings of the study intend to provide information for teachers of ELLs with LD about the implications of using classroom interactions…

  2. Inuit Sign Language: a contribution to sign language typology

    NARCIS (Netherlands)

    Schuit, J.; Baker, A.; Pfau, R.

    2011-01-01

    Sign language typology is a fairly new research field and typological classifications have yet to be established. For spoken languages, these classifications are generally based on typological parameters; it would thus be desirable to establish these for sign languages. In this paper, different

  3. Implications of Hegel's Theories of Language on Second Language Teaching

    Science.gov (United States)

    Wu, Manfred

    2016-01-01

    This article explores the implications of Hegel's theories of language on second language (L2) teaching. Three among the various concepts in Hegel's theories of language are selected. They are the crucial role of intersubjectivity; the primacy of the spoken over the written form; and the importance of the training of form or grammar. Applying…

  4. Linguistic Landscape and Minority Languages

    Science.gov (United States)

    Cenoz, Jasone; Gorter, Durk

    2006-01-01

    This paper focuses on the linguistic landscape of two streets in two multilingual cities in Friesland (Netherlands) and the Basque Country (Spain) where a minority language is spoken, Basque or Frisian. The paper analyses the use of the minority language (Basque or Frisian), the state language (Spanish or Dutch) and English as an international…

  5. The Design of Hand Gestures for Human-Computer Interaction: Lessons from Sign Language Interpreters.

    Science.gov (United States)

    Rempel, David; Camilleri, Matt J; Lee, David L

    2015-10-01

    The design and selection of 3D modeled hand gestures for human-computer interaction should follow principles of natural language combined with the need to optimize gesture contrast and recognition. The selection should also consider the discomfort and fatigue associated with distinct hand postures and motions, especially for common commands. Sign language interpreters have extensive and unique experience forming hand gestures and many suffer from hand pain while gesturing. Professional sign language interpreters (N=24) rated discomfort for hand gestures associated with 47 characters and words and 33 hand postures. Clear associations of discomfort with hand postures were identified. In a nominal logistic regression model, high discomfort was associated with gestures requiring a flexed wrist, discordant adjacent fingers, or extended fingers. These and other findings should be considered in the design of hand gestures to optimize the relationship between human cognitive and physical processes and computer gesture recognition systems for human-computer input.
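The study's statistical approach, a logistic regression relating hand-posture features to rated discomfort, can be sketched with toy data. Everything below (features, labels, the fitted weights) is invented for illustration and is not the study's data or model; a plain-Python binary logistic fit stands in for the nominal model the authors used.

```python
import math

def fit_logistic(X, y, lr=0.5, epochs=3000):
    """Plain-Python binary logistic regression fitted by batch gradient descent."""
    n, d = len(X), len(X[0])
    w, b = [0.0] * d, 0.0
    for _ in range(epochs):
        gw, gb = [0.0] * d, 0.0
        for xi, yi in zip(X, y):
            z = b + sum(wj * xj for wj, xj in zip(w, xi))
            p = 1.0 / (1.0 + math.exp(-z))   # predicted P(high discomfort)
            for j in range(d):
                gw[j] += (p - yi) * xi[j]
            gb += p - yi
        w = [wj - lr * gj / n for wj, gj in zip(w, gw)]
        b -= lr * gb / n
    return w, b

# Hypothetical ratings: features = [flexed_wrist, discordant_fingers, extended_fingers],
# label = 1 if the gesture was rated high-discomfort (toy data, not the study's).
X = [[1, 0, 0], [1, 1, 0], [1, 0, 1], [1, 1, 1],
     [0, 0, 0], [0, 1, 0], [0, 0, 1], [0, 0, 0]]
y = [1, 1, 1, 1, 0, 1, 0, 0]
w, b = fit_logistic(X, y)
print(w[0] > 0)   # flexed wrist carries a positive discomfort weight
```

On this toy data the positive weight on the flexed-wrist feature mirrors the kind of association the study reports; the actual analysis used interpreters' ratings across 47 characters and words and 33 postures.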

  6. Revising the worksheet with L3: a language and environment for user-script interaction

    Energy Technology Data Exchange (ETDEWEB)

    Hohn, Michael H.

    2008-01-22

    This paper describes a novel approach to the parameter and data handling issues commonly found in experimental scientific computing and scripting in general. The approach is based on the familiar combination of scripting language and user interface, but using a language expressly designed for user interaction and convenience. The L3 language combines programming facilities of procedural and functional languages with the persistence and need-based evaluation of data flow languages. It is implemented in Python, has access to all Python libraries, and retains almost complete source code compatibility to allow simple movement of code between the languages. The worksheet interface uses metadata produced by L3 to provide selection of values through the script itself and allow users to dynamically evolve scripts without re-running the prior versions. Scripts can be edited via text editors or manipulated as structures on a drawing canvas. Computed values are valid scripts and can be used further in other scripts via simple copy-and-paste operations. The implementation is freely available under an open-source license.
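The "persistence and need-based evaluation of data flow languages" that the abstract attributes to L3 can be illustrated with a minimal dependency cell: a value is recomputed only when it is requested and has been invalidated. This is a hypothetical sketch in plain Python, not L3's actual implementation.

```python
class Cell:
    """A computed value in a tiny need-based (lazy) data-flow graph.
    Illustrative only; L3's real machinery is richer than this."""

    def __init__(self, fn, *deps):
        self.fn = fn
        self.deps = deps
        self._value = None
        self._stale = True      # recompute only when the value is needed

    def invalidate(self):
        self._stale = True

    def value(self):
        if self._stale:
            # Pull values from dependencies on demand, then cache the result.
            self._value = self.fn(*(d.value() for d in self.deps))
            self._stale = False
        return self._value


class Const(Cell):
    """A leaf cell holding a plain value."""
    def __init__(self, v):
        super().__init__(lambda: v)


a, b = Const(3), Const(4)
total = Cell(lambda x, y: x + y, a, b)
print(total.value())  # 7
```

Repeated calls to `total.value()` return the cached result until `invalidate()` marks it stale, which is the evaluation style that lets a worksheet evolve a script without re-running prior versions.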

  7. The Design of Hand Gestures for Human-Computer Interaction: Lessons from Sign Language Interpreters

    Science.gov (United States)

    Rempel, David; Camilleri, Matt J.; Lee, David L.

    2015-01-01

    The design and selection of 3D modeled hand gestures for human-computer interaction should follow principles of natural language combined with the need to optimize gesture contrast and recognition. The selection should also consider the discomfort and fatigue associated with distinct hand postures and motions, especially for common commands. Sign language interpreters have extensive and unique experience forming hand gestures and many suffer from hand pain while gesturing. Professional sign language interpreters (N=24) rated discomfort for hand gestures associated with 47 characters and words and 33 hand postures. Clear associations of discomfort with hand postures were identified. In a nominal logistic regression model, high discomfort was associated with gestures requiring a flexed wrist, discordant adjacent fingers, or extended fingers. These and other findings should be considered in the design of hand gestures to optimize the relationship between human cognitive and physical processes and computer gesture recognition systems for human-computer input. PMID:26028955

  8. Do Verbal Interactions with Infants During Electronic Media Exposure Mitigate Adverse Impacts on their Language Development as Toddlers?

    Science.gov (United States)

    Mendelsohn, Alan L; Brockmeyer, Carolyn A; Dreyer, Benard P; Fierman, Arthur H; Berkule-Silberman, Samantha B; Tomopoulos, Suzy

    2010-11-01

    The goal of this study was to determine whether verbal interactions between mothers and their 6-month-old infants during media exposure ('media verbal interactions') might have direct positive impacts, or mitigate any potential adverse impacts of media exposure, on language development at 14 months. For 253 low-income mother-infant dyads participating in a longitudinal study, media exposure and media verbal interactions were assessed using 24-hour recall diaries. Additionally, general level of cognitive stimulation in the home [StimQ] was assessed at 6 months and language development [Preschool Language Scale-4] was assessed at 14 months. Results suggest that media verbal interactions play a role in the language development of infants from low-income, immigrant families. Evidence showed that media verbal interactions moderated the adverse impacts of media exposure found on 14-month language development, with adverse associations found only in the absence of these interactions. Findings also suggest that media verbal interactions may have some direct positive impacts on language development, in that media verbal interactions during the co-viewing of media with educational content (but not other content) were predictive of 14-month language development independently of the overall level of cognitive stimulation in the home.

  9. Social Interaction in Infants’ Learning of Second-Language Phonetics: An Exploration of Brain-Behavior Relations

    Science.gov (United States)

    Conboy, Barbara T.; Brooks, Rechele; Meltzoff, Andrew N.; Kuhl, Patricia K.

    2015-01-01

    Infants learn phonetic information from a second language with live-person presentations, but not television or audio-only recordings. To understand the role of social interaction in learning a second language, we examined infants’ joint attention with live, Spanish-speaking tutors and used a neural measure of phonetic learning. Infants’ eye-gaze behaviors during Spanish sessions at 9.5 – 10.5 months of age predicted second-language phonetic learning, assessed by an event-related potential (ERP) measure of Spanish phoneme discrimination at 11 months. These data suggest a powerful role for social interaction at the earliest stages of learning a new language. PMID:26179488

  10. Social Interaction in Infants' Learning of Second-Language Phonetics: An Exploration of Brain-Behavior Relations.

    Science.gov (United States)

    Conboy, Barbara T; Brooks, Rechele; Meltzoff, Andrew N; Kuhl, Patricia K

    2015-01-01

    Infants learn phonetic information from a second language with live-person presentations, but not television or audio-only recordings. To understand the role of social interaction in learning a second language, we examined infants' joint attention with live, Spanish-speaking tutors and used a neural measure of phonetic learning. Infants' eye-gaze behaviors during Spanish sessions at 9.5-10.5 months of age predicted second-language phonetic learning, assessed by an event-related potential measure of Spanish phoneme discrimination at 11 months. These data suggest a powerful role for social interaction at the earliest stages of learning a new language.

  11. Language Use of Frisian Bilingual Teenagers on Social Media

    NARCIS (Netherlands)

    Jongbloed-Faber, L.; Van de Velde, H.; van der Meer, C.; Klinkenberg, E.L.

    2016-01-01

    This paper explores the use of Frisian, a minority language spoken in the Dutch province of Fryslân, on social media by Frisian teenagers. Frisian is the mother tongue of 54% of the 650,000 inhabitants and is predominantly a spoken language: 64% of the Frisian population can speak it well, while

  12. Interactive Technologies of Foreign Language Teaching in Future Marine Specialists’ Training: from Experience of the Danube River Basin Universities

    Directory of Open Access Journals (Sweden)

    Olga Demchenko

    2015-08-01

    Full Text Available The article deals with the investigation of the interactive technologies of foreign language teaching in future marine specialists’ training in the Danube river basin universities. The author gives definitions of the most popular interactive technologies aimed at forming communicative competence as a significant component of future mariners’ key competencies. A typology and analysis of some interactive technologies of foreign language teaching in future marine specialists’ training are provided.

  13. Predicting user mental states in spoken dialogue systems

    Science.gov (United States)

    Callejas, Zoraida; Griol, David; López-Cózar, Ramón

    2011-12-01

    In this paper we propose a method for predicting the user mental state for the development of more efficient and usable spoken dialogue systems. This prediction, carried out for each user turn in the dialogue, makes it possible to adapt the system dynamically to the user needs. The mental state is built on the basis of the emotional state of the user and their intention, and is recognized by means of a module conceived as an intermediate phase between natural language understanding and the dialogue management in the architecture of the systems. We have implemented the method in the UAH system, for which the evaluation results with both simulated and real users show that taking into account the user's mental state improves system performance as well as its perceived quality.
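The intermediate module the authors describe, sitting between natural language understanding and dialogue management, can be caricatured as a function from NLU output plus acoustic cues to an (emotion, intention) pair that the dialogue manager consults per user turn. All names and thresholds below are invented for illustration; the UAH system's actual recognizer is not specified here.

```python
from dataclasses import dataclass

@dataclass
class MentalState:
    emotion: str    # e.g. "neutral" or "angry" (hypothetical labels)
    intention: str  # the user's current goal, taken from the NLU frame

def predict_mental_state(nlu_frame, prosody):
    """Toy stand-in for the intermediate module between NLU and
    dialogue management; the 0.8 threshold is invented."""
    emotion = "angry" if prosody.get("pitch_variance", 0.0) > 0.8 else "neutral"
    return MentalState(emotion, nlu_frame.get("intent", "unknown"))

def select_system_action(state):
    # The dialogue manager adapts its strategy to the predicted state.
    if state.emotion == "angry":
        return "apologize_then_confirm"
    return "proceed_with_" + state.intention

state = predict_mental_state({"intent": "find_restaurant"}, {"pitch_variance": 0.9})
print(select_system_action(state))  # apologize_then_confirm
```

The point of the architecture is the extra hop: the dialogue manager never sees raw NLU output alone, so the same intent can yield different system behaviour depending on the inferred emotional state.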

  14. Predicting user mental states in spoken dialogue systems

    Directory of Open Access Journals (Sweden)

    Griol David

    2011-01-01

    Full Text Available Abstract In this paper we propose a method for predicting the user mental state for the development of more efficient and usable spoken dialogue systems. This prediction, carried out for each user turn in the dialogue, makes it possible to adapt the system dynamically to the user needs. The mental state is built on the basis of the emotional state of the user and their intention, and is recognized by means of a module conceived as an intermediate phase between natural language understanding and the dialogue management in the architecture of the systems. We have implemented the method in the UAH system, for which the evaluation results with both simulated and real users show that taking into account the user's mental state improves system performance as well as its perceived quality.

  15. Artfulness in Young Children's Spoken Narratives

    Science.gov (United States)

    Glenn-Applegate, Katherine; Breit-Smith, Allison; Justice, Laura M.; Piasta, Shayne B.

    2010-01-01

    Research Findings: Artfulness is rarely considered as an indicator of quality in young children's spoken narratives. Although some studies have examined artfulness in the narratives of children 5 and older, no studies to date have focused on the artfulness of preschoolers' oral narratives. This study examined the artfulness of fictional spoken…

  16. Vowel and Consonant Replacements in the Spoken French of Ijebu Undergraduate French Learners in Selected Universities in South West of Nigeria

    Directory of Open Access Journals (Sweden)

    Iyiola Amos Damilare

    2015-04-01

    Full Text Available Substitution is a phonological process in language. Existing studies have examined deletion in several languages and dialects, with less attention paid to the spoken French of Ijebu undergraduates. This article therefore examined substitution as a dominant phenomenon in the spoken French of thirty-four Ijebu Undergraduate French Learners (IUFLs) in selected universities in South West Nigeria, with a view to establishing the dominance of substitution in the spoken French of IUFLs. Data were collected by tape-recording participants’ production of 30 sentences containing both French vowel and consonant sounds. The results revealed inappropriate replacements of vowels and consonants in medial and final positions in the spoken French of IUFLs.

  17. Semiotic diversity in utterance production and the concept of 'language'.

    Science.gov (United States)

    Kendon, Adam

    2014-09-19

    Sign language descriptions that use an analytic model borrowed from spoken language structural linguistics have proved to be not fully appropriate. Pictorial and action-like modes of expression are integral to how signed utterances are constructed and to how they work. However, observation shows that speakers likewise use kinesic and vocal expressions that are not accommodated by spoken language structural linguistic models, including pictorial and action-like modes of expression. These, also, are integral to how speaker utterances in face-to-face interaction are constructed and to how they work. Accordingly, the object of linguistic inquiry should be revised, so that it comprises not only an account of the formal abstract systems that utterances make use of, but also an account of how the semiotically diverse resources that all languaging individuals use are organized in relation to one another. Both language as an abstract system and languaging should be the concern of linguistics. © 2014 The Author(s) Published by the Royal Society. All rights reserved.

  18. Teaching Spoken Discourse Markers Explicitly: A Comparison of III and PPP

    Science.gov (United States)

    Jones, Christian; Carter, Ronald

    2014-01-01

    This article reports on mixed methods classroom research carried out at a British university. The study investigates the effectiveness of two different explicit teaching frameworks, Illustration--Interaction--Induction (III) and Present--Practice--Produce (PPP) used to teach the same spoken discourse markers (DMs) to two different groups of…

  19. Evaluating spoken dialogue systems according to de-facto standards: A case study

    NARCIS (Netherlands)

    Möller, S.; Smeele, P.; Boland, H.; Krebber, J.

    2007-01-01

    In the present paper, we investigate the validity and reliability of de-facto evaluation standards, defined for measuring or predicting the quality of the interaction with spoken dialogue systems. Two experiments have been carried out with a dialogue system for controlling domestic devices. During

  20. Interaction before conflict and conflict resolution in pre-school boys with language impairment.

    Science.gov (United States)

    Horowitz, Laura; Jansson, Liselotte; Ljungberg, Tomas; Hedenbro, Monica

    2006-01-01

    Children with language impairment (LI) experience social difficulties, including difficulties with conflict management. The factors involved in peer-conflict progression in pre-school children with LI, and which of these processes may differ from pre-school children with typical language development (TL), are therefore examined. The aim is to describe the relationship between opponents interacting before conflict, aberrant conflict causes, the conflict-resolution strategy of reconciliation (i.e. friendly contact between former opponents shortly following conflict termination), and conflict outcome in the form of social interaction after a conflict has run its course. It is hypothesized that without social interaction before conflict, children with LI will experience increased difficulty attaining reconciliation. Unstructured play of 11 boys with LI (4-7 years old), at a specialized language pre-school, and 20 boys with TL (4-6 years old), at mainstream pre-schools, was video filmed. Conflicts were identified and recorded according to a validated coding system. Recorded conflict details include social interaction between opponents in the pre-conflict period, behavioural sequences constituting conflict cause (conflict period), reconciliatory behaviours in the post-conflict period, and social interaction between former opponents in the succeeding non-conflict period. Each group's mean proportion of individual children's conflicts in which specific behavioural sequences occurred was calculated and compared between and within the groups. When conflicts with and without pre-conflict social interaction were analysed separately, conflicts with aberrant causes occurred more often in LI group conflicts than in TL group conflicts. However, in conflicts without social interaction in the pre-conflict period, boys with LI exhibit reconciliatory behaviours in, and reconcile, a comparatively smaller proportion of conflicts. Social interaction in the succeeding non-conflict period was proportionately less for boys…

  1. A Study on Motivation and Strategy Use of Bangladeshi University Students to Learn Spoken English

    OpenAIRE

    Mst. Moriam, Quadir

    2008-01-01

    This study discusses motivation and strategy use of university students to learn spoken English in Bangladesh. A group of 355 (187 males and 168 females) university students participated in this investigation. To measure learners' degree of motivation, a modified version of the questionnaire used by Schmidt et al. (1996) was administered. Participants reported their strategy use on a modified version of the SILL, the Strategy Inventory for Language Learning, version 7.0 (Oxford, 1990). In order to fin...

  2. Parental mode of communication is essential for speech and language outcomes in cochlear implanted children

    DEFF Research Database (Denmark)

    Percy-Smith, Lone; Cayé-Thomasen, Per; Breinegaard, Nina

    2010-01-01

    The present study demonstrates a very strong effect of the parental communication mode on the auditory capabilities and speech/language outcome for cochlear implanted children. The children exposed to spoken language had higher odds of scoring high in all tests applied and the findings suggest...... a very clear benefit of spoken language communication with a cochlear implanted child....

  3. The grammaticalization of gestures in sign languages

    NARCIS (Netherlands)

    van Loon, E.; Pfau, R.; Steinbach, M.; Müller, C.; Cienki, A.; Fricke, E.; Ladewig, S.H.; McNeill, D.; Bressem, J.

    2014-01-01

    Recent studies on grammaticalization in sign languages have shown that, for the most part, the grammaticalization paths identified in sign languages parallel those previously described for spoken languages. Hence, the general principles of grammaticalization do not depend on the modality of language

  4. Spoken sentence production in college students with dyslexia: working memory and vocabulary effects.

    Science.gov (United States)

    Wiseheart, Rebecca; Altmann, Lori J P

    2017-11-21

    Individuals with dyslexia demonstrate syntactic difficulties on tasks of language comprehension, yet little is known about spoken language production in this population. To investigate whether spoken sentence production in college students with dyslexia is less proficient than in typical readers, and to determine whether group differences can be attributable to cognitive differences between groups. Fifty-one college students with and without dyslexia were asked to produce sentences from stimuli comprising a verb and two nouns. Verb types varied in argument structure and morphological form and nouns varied in animacy. Outcome measures were precision (measured by fluency, grammaticality and completeness) and efficiency (measured by response times). Vocabulary and working memory tests were also administered and used as predictors of sentence production performance. Relative to non-dyslexic peers, students with dyslexia responded significantly slower and produced sentences that were significantly less precise in terms of fluency, grammaticality and completeness. The primary predictors of precision and efficiency were working memory, which differed between groups, and vocabulary, which did not. College students with dyslexia were significantly less facile and flexible on this spoken sentence-production task than typical readers, which is consistent with previous studies of school-age children with dyslexia. Group differences in performance were traced primarily to limited working memory, and were somewhat mitigated by strong vocabulary. © 2017 Royal College of Speech and Language Therapists.

  5. Narrative skills in deaf children who use spoken English: Dissociations between macro and microstructural devices.

    Science.gov (United States)

    Jones, A C; Toscano, E; Botting, N; Marshall, C R; Atkinson, J R; Denmark, T; Herman, R; Morgan, G

    2016-12-01

    Previous research has highlighted that deaf children acquiring spoken English have difficulties in narrative development relative to their hearing peers, both in terms of macro-structure and with micro-structural devices. The majority of previous research focused on narrative tasks designed for hearing children that depend on good receptive language skills. The current study compared narratives of 6- to 11-year-old deaf children who use spoken English (N=59) with those of hearing peers matched for age and non-verbal intelligence. To examine the role of general language abilities, single-word vocabulary was also assessed. Narratives were elicited by the retelling of a story presented non-verbally in video format. Results showed that deaf and hearing children had equivalent macro-structure skills, but the deaf group showed poorer performance on micro-structural components. Furthermore, the deaf group gave less detailed responses to inferencing probe questions, indicating poorer understanding of the story's underlying message. For deaf children, micro-level devices correlated most strongly with the vocabulary measure. These findings suggest that deaf children, despite spoken language delays, are able to convey the main elements of content and structure in narrative but have greater difficulty in using grammatical devices more dependent on finer linguistic and pragmatic skills. Crown Copyright © 2016. Published by Elsevier Ltd. All rights reserved.

  6. A descriptive study of the interaction behaviors in a language video program and in live elementary language classes using that video program

    OpenAIRE

    Lopes, Solange Aparecida

    1996-01-01

    The primary purpose of this study was to describe: 1) the predominant types of interaction behaviors encountered in a foreign language video program; and 2) the types of teacher-student interaction features that resulted from use of the instructional video in elementary school classrooms. Based on the findings, the second purpose of the study was to examine how these interaction behaviors shaped amount of teacher and student talk in the two sources of data. The researcher exami...

  7. Using Language Games as a Way to Investigate Interactional Engagement in Human-Robot Interaction

    DEFF Research Database (Denmark)

    Jensen, L. C.

    2016-01-01

    Social robots are employed in many classrooms and have been shown to aid learning. However, studies show that while schools intend for these robots to be social actors, they are not treated as such by the students. As the social factor is crucial for interactional engagement, this paper discusses...

  8. The pragmatic language abilities of children with ADHD following a play-based intervention involving peer-to-peer interactions.

    Science.gov (United States)

    Cordier, Reinie; Munro, Natalie; Wilkes-Gillan, Sarah; Docking, Kimberley

    2013-08-01

    Children with Attention Deficit Hyperactivity Disorder (ADHD) commonly experience significant pragmatic language deficits which put them at risk of developing emotional and social difficulties. This study aimed to examine the pragmatic language exhibited in a peer-to-peer interaction between the children with ADHD and their playmates following a pilot play-based intervention. Participants were children (aged 5-11 years) diagnosed as having ADHD (n = 14) and their self-selected typically-developing playmate. Pragmatic language was measured using the Pragmatic Protocol (PP) and the Structured Multidimensional Assessment Profiles (S-MAPs). Children's structural language was also screened and compared against their pragmatic language skills pre-post play-based intervention. The pragmatic language of children with ADHD improved significantly from pre-post intervention as measured by both the PP and S-MAPs. Both children with and without structural language difficulties improved significantly from pre- to post-intervention using S-MAPs; only children with structural language difficulties improved significantly using PP. The findings support the notion that pragmatic skills may improve following a play-based intervention that is characterized by didactic social interaction. As pragmatic language is a complex construct, it is proposed that clinicians and researchers reconsider the working definition of pragmatic language and the operationalization thereof in assessments.

  9. Turkish Mothers' Verbal Interaction Practices and Self-Efficacy Beliefs regarding Their Children with Expressive Language Delay

    Science.gov (United States)

    Diken, Ibrahim H.; Diken, Ozlem

    2008-01-01

    The purpose of this study was to explore Turkish mothers' verbal interaction practices and their maternal self-efficacy beliefs regarding their children with expressive language delay. Participants included 33 Turkish mothers of children with expressive language delay. Results indicated that mothers in general use child directed talk strategies or…

  10. Teletandem language learning in a technological context of education: interactions between Brazilian and German students

    Directory of Open Access Journals (Sweden)

    Suelene Vaz da SILVA

    2015-12-01

    Full Text Available ABSTRACT This paper presents data from a computer-mediated communication study conducted between a group of Brazilian university students - from Instituto Federal de Educação, Ciência e Tecnologia do Estado de Goiás, Campus Goiânia, Goiás, Brazil - who wanted to learn English, and a group of German university students - from the University of Worms, in Germany - who wanted to learn Portuguese. The cross-cultural bilingual communication was conducted in the second semester of 2010 and involved discussions on environmental issues. The analysis adopted a qualitative perspective; the data were derived from conversation sessions held through web-conferencing software known as Openmeetings, from e-mails, and from written activities developed by the students, all analyzed by means of sociocultural theory. Among the conclusions we reached, we observed that the participants used the software features to help them in their language learning process and discussed issues related to environmental science as well as topics related to their personal and academic life. Regarding the languages used, the participants used English during the teletandem sessions as an anchoring language to assist their partners in learning English itself and Portuguese, and also introduced the German language into the interaction sessions.

  11. Child-Robot Interactions for Second Language Tutoring to Preschool Children

    Directory of Open Access Journals (Sweden)

    Paul Vogt

    2017-03-01

    Full Text Available In this digital age social robots will increasingly be used for educational purposes, such as second language tutoring. In this perspective article, we propose a number of design features to develop a child-friendly social robot that can effectively support children in second language learning, and we discuss some technical challenges for developing these. The features we propose include choices to develop the robot such that it can act as a peer to motivate the child during second language learning and build trust at the same time, while still being more knowledgeable than the child and scaffolding that knowledge in adult-like manner. We also believe that the first impressions children have about robots are crucial for them to build trust and common ground, which would support child-robot interactions in the long term. We therefore propose a strategy to introduce the robot in a safe way to toddlers. Other features relate to the ability to adapt to individual children’s language proficiency, respond contingently, both temporally and semantically, establish joint attention, use meaningful gestures, provide effective feedback and monitor children’s learning progress. Technical challenges we observe include automatic speech recognition (ASR for children, reliable object recognition to facilitate semantic contingency and establishing joint attention, and developing human-like gestures with a robot that does not have the same morphology humans have. We briefly discuss an experiment in which we investigate how children respond to different forms of feedback the robot can give.

  12. Spoken commands control robot that handles radioactive materials

    International Nuclear Information System (INIS)

    Phelan, P.F.; Keddy, C.; Beugelsdojk, T.J.

    1989-01-01

    Several robotic systems have been developed by Los Alamos National Laboratory to handle radioactive material. Because of safety considerations, the robotic system must be under direct human supervision and interactive control continuously. In this paper, we describe the implementation of a voice-recognition system that permits this control, yet allows the robot to perform complex preprogrammed manipulations without the operator's intervention. To provide better interactive control, we connected a speech synthesis unit to the robot's control computer, which provides audible feedback to the operator. Thus, upon completion of a task or if an emergency arises, an appropriate spoken message can be reported by the control computer. The training, programming, and operation of this commercially available system are discussed, as are the practical problems encountered during operations.
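The supervisory loop described, spoken command in, preprogrammed manipulation out, audible confirmation back, can be sketched as follows. All command names, step strings, and callbacks are hypothetical stand-ins, not the Los Alamos implementation.

```python
# Map each recognizable phrase to its preprogrammed manipulation steps
# (illustrative names only).
PREPROGRAMMED = {
    "open gripper": ["gripper.open"],
    "store sample": ["arm.move_to_shelf", "gripper.open", "arm.home"],
}

def handle_command(phrase, execute, speak):
    """Run a recognized phrase as its preprogrammed steps; report by voice."""
    steps = PREPROGRAMMED.get(phrase)
    if steps is None:
        speak("Command not recognized. Please repeat.")  # audible feedback
        return False
    for step in steps:
        execute(step)            # controller runs each preprogrammed motion
    speak("Task complete: " + phrase)  # spoken confirmation to the operator
    return True

# Stand-in controller and synthesizer: record what would be executed/spoken.
executed, spoken = [], []
handle_command("store sample", executed.append, spoken.append)
print(spoken[0])  # Task complete: store sample
```

The design point is that recognition only selects among vetted, preprogrammed sequences, keeping the human in supervisory control while the robot performs the manipulation autonomously.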

  13. Effects of speech- and text-based interaction modes in natural language human-computer dialogue.

    Science.gov (United States)

    Le Bigot, Ludovic; Rouet, Jean-François; Jamet, Eric

    2007-12-01

    This study examined the effects of user production (speaking and typing) and user reception (listening and reading) modes on natural language human-computer dialogue. Text-based dialogue is often more efficient than speech-based dialogue, but the latter is more dynamic and more suitable for mobile environments and hands-busy situations. The respective contributions of user production and reception modes have not previously been assessed. Eighteen participants performed several information search tasks using a natural language information system in four experimental conditions: phone (speaking and listening), Web (typing and reading), and mixed (speaking and reading or typing and listening). Mental workload was greater and participants' repetitions of commands were more frequent when speech (speaking or listening) was used for both the user production and reception modes rather than text (typing or reading). Completion times were longer for listening than for reading. Satisfaction was lower, utterances were longer, and the interaction error rate was higher for speaking than typing. The production and reception modes both contribute to dialogue and mental workload. They have distinct contributions to performance, satisfaction, and the form of the discourse. The most efficient configuration for interacting in natural language would appear to be speech for production and system prompts in text, as this combination decreases the time on task while improving dialogue involvement.

  14. Dust, a spoken word poem by Guante

    Directory of Open Access Journals (Sweden)

    Kyle Tran Myhre

    2017-06-01

    Full Text Available In "Dust," spoken word poet Kyle "Guante" Tran Myhre crafts a multi-vocal exploration of the connections between the internment of Japanese Americans during World War II and the current struggles against xenophobia in general and Islamophobia specifically. Weaving together personal narrative, quotes from multiple voices, and "verse journalism" (a term coined by Gwendolyn Brooks), the poem seeks to bridge past and present in order to inform a more just future.

  15. Situated dialog in speech-based human-computer interaction

    CERN Document Server

    Raux, Antoine; Lane, Ian; Misu, Teruhisa

    2016-01-01

    This book provides a survey of the state-of-the-art in the practical implementation of Spoken Dialog Systems for applications in everyday settings. It includes contributions on key topics in situated dialog interaction from a number of leading researchers and offers a broad spectrum of perspectives on research and development in the area. In particular, it presents applications in robotics, knowledge access and communication and covers the following topics: dialog for interacting with robots; language understanding and generation; dialog architectures and modeling; core technologies; and the analysis of human discourse and interaction. The chapters are adapted and expanded versions of contributions presented at the 2014 International Workshop on Spoken Dialog Systems (IWSDS 2014), where researchers and developers from industry and academia alike met to discuss and compare their implementation experiences, analyses and empirical findings.

  16. Learn English or die: The effects of digital games on interaction and willingness to communicate in a foreign language

    Directory of Open Access Journals (Sweden)

    Hayo Reinders

    2011-04-01

    Full Text Available In recent years there has been a lot of interest in the potential role of digital games in language education. Playing digital games is said to be motivating to students and to benefit the development of social skills, such as collaboration, and metacognitive skills, such as planning and organisation. An important potential benefit is also that digital games encourage the use of the target language in a non-threatening environment. Willingness to communicate has been shown to affect second language acquisition in a number of ways, and it is therefore important to investigate whether there is a connection between playing games and learners' interaction in the target language. In this article we report on the results of a pilot study that investigated the effects of playing an online multiplayer game on the quantity and quality of second language interaction in the game and on participants' willingness to communicate in the target language. We will show that digital games can indeed affect second language interaction patterns and contribute to second language acquisition, but that this depends, as in all other teaching and learning environments, on careful pedagogic planning of the activity.

  17. Foreign-language experience in infancy: effects of short-term exposure and social interaction on phonetic learning.

    Science.gov (United States)

    Kuhl, Patricia K; Tsao, Feng-Ming; Liu, Huei-Mei

    2003-07-22

    Infants acquire language with remarkable speed, although little is known about the mechanisms that underlie the acquisition process. Studies of the phonetic units of language have shown that early in life, infants are capable of discerning differences among the phonetic units of all languages, including native- and foreign-language sounds. Between 6 and 12 mo of age, the ability to discriminate foreign-language phonetic units sharply declines. In two studies, we investigate the necessary and sufficient conditions for reversing this decline in foreign-language phonetic perception. In Experiment 1, 9-mo-old American infants were exposed to native Mandarin Chinese speakers in 12 laboratory sessions. A control group also participated in 12 language sessions but heard only English. Subsequent tests of Mandarin speech perception demonstrated that exposure to Mandarin reversed the decline seen in the English control group. In Experiment 2, infants were exposed to the same foreign-language speakers and materials via audiovisual or audio-only recordings. The results demonstrated that exposure to recorded Mandarin, without interpersonal interaction, had no effect. Between 9 and 10 mo of age, infants show phonetic learning from live, but not prerecorded, exposure to a foreign language, suggesting a learning process that does not require long-term listening and is enhanced by social interaction.

  18. Sentence Repetition in Deaf Children with Specific Language Impairment in British Sign Language

    Science.gov (United States)

    Marshall, Chloë; Mason, Kathryn; Rowley, Katherine; Herman, Rosalind; Atkinson, Joanna; Woll, Bencie; Morgan, Gary

    2015-01-01

    Children with specific language impairment (SLI) perform poorly on sentence repetition tasks across different spoken languages, but until now, this methodology has not been investigated in children who have SLI in a signed language. Users of a natural sign language encode different sentence meanings through their choice of signs and by altering…

  19. Family Language Policy and School Language Choice: Pathways to Bilingualism and Multilingualism in a Canadian Context

    Science.gov (United States)

    Slavkov, Nikolay

    2017-01-01

    This article reports on a survey with 170 school-age children growing up with two or more languages in the Canadian province of Ontario where English is the majority language, French is a minority language, and numerous other minority languages may be spoken by immigrant or Indigenous residents. Within this context the study focuses on minority…

  20. Micro Language Planning and Cultural Renaissance in Botswana

    Science.gov (United States)

    Alimi, Modupe M.

    2016-01-01

    Many African countries exhibit complex patterns of language use because of linguistic pluralism. The situation is often compounded by the presence of at least one foreign language that is either the official or second language. The language situation in Botswana depicts this complex pattern. Out of the 26 languages spoken in the country, including…

  1. How mother tongue and the second language interact with acquisition of a foreign language for year six students

    DEFF Research Database (Denmark)

    Slåttvik, Anja; Nielsen, Henrik Balle

    This is a presentation of a current study of how the teaching of fiction is carried out in the subject English as a foreign language in year six in two Danish schools. There is a particular focus on 6 multilingual students and their third language acquisition perspective. The aim is to establish knowledge ... on multilingual students' understanding of material and content in the EFL classroom and, on a long-term basis, to focus foreign language teachers' attention on circumstances that challenge students learning a foreign language in a multilingual environment ...

  2. Computer Assisted Testing of Spoken English: A Study of the SFLEP College English Oral Test System in China

    Directory of Open Access Journals (Sweden)

    John Lowe

    2009-06-01

    Full Text Available This paper reports on the ongoing evaluation of a computer-assisted system (CEOTS) for assessing spoken English skills among Chinese university students. This system is being developed to deal with the negative backwash effects of the present system of assessment of speaking skills, which is only available to a tiny minority. We present data from a survey of students at the developing institution (USTC), with follow-up interviews and further interviews with English language teachers, to gauge the reactions to the test and its impact on language learning. We identify the key issue as being one of validity, with a tension existing between the construct and consequential validities of the existing system and of CEOTS. We argue that a computer-based system seems to offer the only solution to the negative backwash problem, but the development of the technology required to meet current construct validity demands makes this a very long-term prospect. We suggest that a compromise between the competing forms of validity must therefore be accepted, probably well before a computer-based system can deliver the level of interaction with the examinees that would emulate the present face-to-face mode.

  3. The Lightening Veil: Language Revitalization in Wales

    Science.gov (United States)

    Williams, Colin H.

    2014-01-01

    The Welsh language, which is indigenous to Wales, is one of six Celtic languages. It is spoken by 562,000 speakers, 19% of the population of Wales, according to the 2011 U.K. Census, and it is estimated that it is spoken by a further 200,000 residents elsewhere in the United Kingdom. No exact figures exist for the undoubted thousands of other…

  4. Effects of speech clarity on recognition memory for spoken sentences.

    Science.gov (United States)

    Van Engen, Kristin J; Chandrasekaran, Bharath; Smiljanic, Rajka

    2012-01-01

    Extensive research shows that inter-talker variability (i.e., changing the talker) affects recognition memory for speech signals. However, relatively little is known about the consequences of intra-talker variability (i.e. changes in speaking style within a talker) on the encoding of speech signals in memory. It is well established that speakers can modulate the characteristics of their own speech and produce a listener-oriented, intelligibility-enhancing speaking style in response to communication demands (e.g., when speaking to listeners with hearing impairment or non-native speakers of the language). Here we conducted two experiments to examine the role of speaking style variation in spoken language processing. First, we examined the extent to which clear speech provided benefits in challenging listening environments (i.e. speech-in-noise). Second, we compared recognition memory for sentences produced in conversational and clear speaking styles. In both experiments, semantically normal and anomalous sentences were included to investigate the role of higher-level linguistic information in the processing of speaking style variability. The results show that acoustic-phonetic modifications implemented in listener-oriented speech lead to improved speech recognition in challenging listening conditions and, crucially, to a substantial enhancement in recognition memory for sentences.

  5. Language Use in Real-time Interactions during Early Elementary Science Lessons: The Bidirectional Dynamics of the Language Complexity of Teachers and Students

    Science.gov (United States)

    Menninga, Astrid; van Dijk, Marijn; Steenbeek, Henderien; van Geert, Paul

    2017-01-01

    This study used a dynamic approach to explore bidirectional sequential relations between the real-time language use of teachers and students in naturalistic early elementary science lessons. It also compared experienced teachers (n = 22) with novice teachers (n = 8) with respect to such relations. Verbal interactions were transcribed and coded at…

  6. Patterns of Negotiation of Meaning in English as Second Language Learners’ Interactions

    Directory of Open Access Journals (Sweden)

    Ebrahim Samani

    2015-02-01

    Full Text Available Problem Statement: The Internet, as a tool that presents many challenges, has drawn the attention of researchers in the field of education and especially foreign language teaching. However, there has been a lack of information about the true nature of these environments. In recent years, determining the patterns of negotiation of meaning as a way to delve into these environments has grown in popularity. Purpose of the Study: The current study was an effort to determine the types and frequencies of negotiation of meaning in the interactions of Malaysian students as English as a second language learners and, furthermore, to compare the findings of this study with those of corresponding previous studies. To this end, two research questions were posed: (a) What types of negotiation of meaning emerge in text-based synchronous CMC environments? and (b) Are there any differences between the findings of this study and previous studies in terms of negotiation of meaning functions in this environment? Method: The participants of this study were fourteen English as second language learners at Universiti Putra Malaysia (UPM). They were involved in a series of discussions over selected short stories. Analysis of the students' chat logs was carried out through computer-mediated discourse analysis (CMDA). Findings and Results: This study yielded 10 types of functions in negotiation of meaning: clarification request, confirmation, confirmation check, correction or self-correction, elaboration, elaboration request, reply clarification or definition, reply confirmation, reply elaboration, and vocabulary check. Furthermore, the findings indicated that students negotiated at an average rate of 2.10 per 100 words. The most frequently used functions were confirmation, elaboration, and elaboration request, and the least frequently used were vocabulary check, reply confirmation, and reply clarification.
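    A normalized rate like the reported 2.10 negotiation moves per 100 words can be tallied from a coded chat log in a few lines. The sketch below is illustrative only: the utterances and function codes are invented, not taken from the study's data.

```python
# Rough sketch of computing a negotiation-of-meaning rate per 100 words
# from a coded chat log. The utterances and function codes below are
# invented for illustration; they are not the study's data.

coded_log = [
    ("what do you mean by irony here", ["clarification request"]),
    ("i mean the ending is the opposite of what we expect", ["reply clarification"]),
    ("oh i see so the author planned it", ["confirmation"]),
    ("yes exactly", []),
]

total_words = sum(len(text.split()) for text, _ in coded_log)
total_moves = sum(len(codes) for _, codes in coded_log)
per_100_words = 100.0 * total_moves / total_words

print(round(per_100_words, 2))
```

    The same tally, run per function type instead of in aggregate, would reproduce the most/least-frequent function comparison reported above.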

  7. Computational Interpersonal Communication: Communication Studies and Spoken Dialogue Systems

    Directory of Open Access Journals (Sweden)

    David J. Gunkel

    2016-09-01

    Full Text Available With the advent of spoken dialogue systems (SDS), communication can no longer be considered a human-to-human transaction. It now involves machines. These mechanisms are not just a medium through which human messages pass, but now occupy the position of the other in social interactions. But the development of robust and efficient conversational agents is not just an engineering challenge. It also depends on research in human conversational behavior. It is the thesis of this paper that communication studies is best situated to respond to this need. The paper argues (1) that research in communication can supply the information necessary to respond to and resolve many of the open problems in SDS engineering, and (2) that the development of SDS applications can provide the discipline of communication with unique opportunities to test extant theory and verify experimental results. We call this new area of interdisciplinary collaboration "computational interpersonal communication" (CIC).

  8. At grammatical faculty of language, flies outsmart men.

    Science.gov (United States)

    Stoop, Ruedi; Nüesch, Patrick; Stoop, Ralph Lukas; Bunimovich, Leonid A

    2013-01-01

    Using a symbolic dynamics and a surrogate data approach, we show that the language exhibited by common fruit flies Drosophila ('D.') during courtship is as grammatically complex as the most complex human-spoken modern languages. This finding emerges from the study of fifty high-speed courtship videos (generally of several minutes' duration) that were visually frame-by-frame dissected into 37 fundamental behavioral elements. From the symbolic dynamics of these elements, the courtship-generating language was determined with extreme confidence (significance level > 0.95). The language's categorization in terms of position in Chomsky's hierarchical language classification allows us to compare Drosophila's body language not only with computer compiler languages, but also with human-spoken languages. Drosophila's body language emerges to be at least as powerful as the languages spoken by humans.
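    The first step of such an analysis, turning a dissected stream of behavioral elements into a symbol sequence whose block statistics can then be examined, can be sketched as follows. The element names, the sequence, and the entropy measure here are invented stand-ins, not the study's actual 37-element inventory or its grammar-classification procedure.

```python
# Illustrative first step of a symbolic-dynamics analysis: a dissected
# behavior stream becomes a symbol sequence, and n-gram (block) statistics
# are computed over it. Element names and the sequence are invented.
from collections import Counter
from math import log2

elements = ["orient", "wing_ext", "tap", "orient", "wing_ext", "lick",
            "tap", "orient", "wing_ext", "tap", "lick", "attempt"]

# Map each distinct behavioral element to a one-character symbol.
symbols = {e: chr(65 + i) for i, e in enumerate(sorted(set(elements)))}
sequence = "".join(symbols[e] for e in elements)

def block_entropy(seq, n):
    """Shannon entropy (bits) of the empirical distribution of length-n blocks."""
    blocks = Counter(seq[i:i + n] for i in range(len(seq) - n + 1))
    total = sum(blocks.values())
    return -sum(c / total * log2(c / total) for c in blocks.values())

print(round(block_entropy(sequence, 1), 3), round(block_entropy(sequence, 2), 3))
```

    How block entropy grows with block length is one standard diagnostic of sequence complexity; placing the generating grammar in Chomsky's hierarchy, as the study does, requires substantially more machinery than this sketch.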

  9. Schools and Languages in India.

    Science.gov (United States)

    Harrison, Brian

    1968-01-01

    A brief review of Indian education focuses on special problems caused by overcrowded schools, insufficient funding, and the status of education itself in the Indian social structure. Language instruction in India, a complex issue due largely to the numerous official languages currently spoken, is commented on with special reference to the problem…

  10. Language Philosophy in the context of knowledge organization in the interactive virtual platform

    Directory of Open Access Journals (Sweden)

    Luciana De Souza Gracioso

    2012-12-01

    Full Text Available Over the past years we have pursued epistemological paths that enabled us to reflect on the meaning of language as information, especially in the interactive virtual environments. The main objective of this investigation did not specifically aim at the identification or development of methodological tools, but rather the configuration of a theoretical discourse framework about the pragmatic epistemological possibilities of study and research in the Science of Information within the context of information actions in virtual technology. Thus, we present our thoughts and conjectures about the prerogatives and the obstacles encountered in that theoretical path, concluding with some communicative implications that are inherent to the meaning of information from its use, which in turn, configure the informational activities on the Internet with regard to the existing interactive platforms, better known as Web 2.0, or Pragmatic Web.

  11. When words fail us: insights into language processing from developmental and acquired disorders.

    Science.gov (United States)

    Bishop, Dorothy V M; Nation, Kate; Patterson, Karalyn

    2014-01-01

    Acquired disorders of language represent loss of previously acquired skills, usually with relatively specific impairments. In children with developmental disorders of language, we may also see selective impairment in some skills; but in this case, the acquisition of language or literacy is affected from the outset. Because systems for processing spoken and written language change as they develop, we should beware of drawing too close a parallel between developmental and acquired disorders. Nevertheless, comparisons between the two may yield new insights. A key feature of connectionist models simulating acquired disorders is the interaction of components of language processing with each other and with other cognitive domains. This kind of model might help make sense of patterns of comorbidity in developmental disorders. Meanwhile, the study of developmental disorders emphasizes learning and change in underlying representations, allowing us to study how heterogeneity in cognitive profile may relate not just to neurobiology but also to experience. Children with persistent language difficulties pose challenges both to our efforts at intervention and to theories of learning of written and spoken language. Future attention to learning in individuals with developmental and acquired disorders could be of both theoretical and applied value.

  12. Spoken language identification system adaptation in under-resourced environments

    CSIR Research Space (South Africa)

    Kleynhans, N

    2013-12-01

    Full Text Available The development of Automatic Speech Recognition (ASR) systems in the developing world is severely inhibited. Given that few task-specific corpora exist and speech technology systems perform poorly when deployed in a new environment, we investigate the use of acoustic model adaptation...

  13. Error Awareness and Recovery in Conversational Spoken Language Interfaces

    Science.gov (United States)

    2007-05-01

    stutters, false starts, repairs, hesitations, filled pauses, and various other non-lexical acoustic events. Under these circumstances, it is not ... sensible choice from a software engineering perspective. The case for separating out various task-independent aspects of the conversation has in fact been ... in behavior both within and across systems. It also represents a more sensible solution from a software engineering ... The RavenClaw error handling

  14. Pronoun forms and courtesy in spoken language in Tunja, Colombia

    Directory of Open Access Journals (Sweden)

    Gloria Avendaño de Barón

    2014-05-01

    Full Text Available This article presents the results of a research project whose aims were the following: to determine the frequency of use of the polite pronoun forms sumercé, usted and tú, according to differences in gender, age and level of education, among speakers in Tunja; to describe the sociodiscursive variations; and to explain the relationship between usage and courtesy. The methodology of the Project for the Sociolinguistic Study of Spanish in Spain and Latin America (PRESEEA) was used, and a sample of 54 speakers was taken. The results indicate that the most frequently used pronoun in Tunja to express friendliness and affection is sumercé, followed by usted and tú; women and men of different generations and levels of education alternate the use of these three forms in the context of narrative, descriptive, argumentative and explanatory speech.

  15. [Assessment of pragmatics from verbal spoken data].

    Science.gov (United States)

    Gallardo-Paúls, B

    2009-02-27

    Pragmatic assessment is usually complex, long, and sophisticated, especially for professionals who lack specific linguistic education and interact with impaired speakers. Our aim was to design a quick assessment method that provides a general evaluation of the pragmatic effectiveness of neurologically affected speakers. This first filter will allow us to decide whether a detailed analysis of the altered categories should follow. Our starting point was the PerLA (perception, language and aphasia) profile of pragmatic assessment, designed for the comprehensive analysis of conversational data in clinical linguistics; this was then converted into a quick questionnaire. A quick protocol of pragmatic assessment is proposed, and the results found in a group of children with attention deficit hyperactivity disorder are discussed.

  16. Language and Interactional Discourse: Deconstructing the Talk-Generating Machinery in Natural Conversation

    Directory of Open Access Journals (Sweden)

    Amaechi Uneke Enyi

    2015-08-01

    Full Text Available The study, entitled "Language and Interactional Discourse: Deconstructing the Talk-Generating Machinery in Natural Conversation," is an analysis of spontaneous and informal conversation. The study, carried out in the theoretical and methodological tradition of ethnomethodology, was aimed at explicating how ordinary talk is organized and produced, how people coordinate their talk-in-interaction, how meanings are determined, and the role of talk in the wider social processes. The study followed the basic assumption of conversation analysis, which is that talk is not just a product of two 'speaker-hearers' who attempt to exchange information or convey messages to each other. Rather, participants in conversation are seen to be mutually orienting to, and collaborating with, each other in order to achieve orderly and meaningful communication. The analytic objective is therefore to make clear the procedures on which speakers rely to produce utterances and by which they make sense of other speakers' talk. The datum used for this study was a recorded informal conversation between two (and later three) middle-class civil servants who are friends. The recording was done in such a way that the participants were not aware that they were being recorded. The recording was later transcribed in a way that we believe is faithful to the spontaneity and informality of the talk. Our findings showed that conversation has its own features and is an ordered and structured day-by-day social event. Specifically, utterances are designed and informed by organized procedures, methods, and resources which are tied to the contexts in which they are produced, and which participants are privy to by virtue of their membership of a culture or a natural language community. Keywords: Language, Discourse and Conversation

  17. Interactions of cultures and top people of Wikipedia from ranking of 24 language editions.

    Science.gov (United States)

    Eom, Young-Ho; Aragón, Pablo; Laniado, David; Kaltenbrunner, Andreas; Vigna, Sebastiano; Shepelyansky, Dima L

    2015-01-01

    Wikipedia is a huge global repository of human knowledge that can be leveraged to investigate intertwinements between cultures. With this aim, we apply methods of Markov chains and Google matrix for the analysis of the hyperlink networks of 24 Wikipedia language editions, and rank all their articles by PageRank, 2DRank and CheiRank algorithms. Using automatic extraction of people names, we obtain the top 100 historical figures, for each edition and for each algorithm. We investigate their spatial, temporal, and gender distributions in dependence on their cultural origins. Our study demonstrates not only the existence of skewness with local figures, mainly recognized only in their own cultures, but also the existence of global historical figures appearing in a large number of editions. By determining the birth time and place of these persons, we perform an analysis of the evolution of such figures through 35 centuries of human history for each language, thus recovering interactions and entanglement of cultures over time. We also obtain the distributions of historical figures over world countries, highlighting geographical aspects of cross-cultural links. Considering historical figures who appear in multiple editions as interactions between cultures, we construct a network of cultures and identify the most influential cultures according to this network.
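    The core of the ranking step described above can be sketched in a few lines of plain Python. The sketch assumes a tiny hand-made hyperlink graph standing in for a Wikipedia language edition, with invented article names; CheiRank is computed, as in the study, as PageRank on the graph with all links reversed.

```python
# Toy sketch of the PageRank/CheiRank ranking pipeline described above.
# Assumptions: a tiny hand-made hyperlink graph stands in for a Wikipedia
# language edition, and article names are invented for illustration.

def pagerank(links, damping=0.85, iters=100):
    """Power iteration for PageRank over a dict: node -> list of out-links."""
    nodes = set(links) | {t for outs in links.values() for t in outs}
    n = len(nodes)
    rank = {v: 1.0 / n for v in nodes}
    for _ in range(iters):
        new = {v: (1.0 - damping) / n for v in nodes}
        for v in nodes:
            outs = links.get(v, [])
            if outs:
                share = damping * rank[v] / len(outs)
                for t in outs:
                    new[t] += share
            else:  # dangling node: spread its rank uniformly
                for t in nodes:
                    new[t] += damping * rank[v] / n
        rank = new
    return rank

# Hypothetical mini "edition": articles and their hyperlinks.
links = {
    "Napoleon": ["France", "Waterloo"],
    "France": ["Napoleon"],
    "Waterloo": ["Napoleon", "France"],
    "Obscure_Figure": ["France"],
}

pr = pagerank(links)                  # PageRank: authority from incoming links
reversed_links = {v: [] for v in pr}  # CheiRank: PageRank on reversed links
for v, outs in links.items():
    for t in outs:
        reversed_links[t].append(v)
cr = pagerank(reversed_links)

top_pr = sorted(pr, key=pr.get, reverse=True)
print(top_pr[0])
```

    At Wikipedia scale the study uses sparse Google-matrix methods rather than dictionaries, but the ranking principle, and the PageRank/CheiRank duality, is the same.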

  18. Interactions of cultures and top people of Wikipedia from ranking of 24 language editions.

    Directory of Open Access Journals (Sweden)

    Young-Ho Eom

    Full Text Available Wikipedia is a huge global repository of human knowledge that can be leveraged to investigate intertwinements between cultures. With this aim, we apply methods of Markov chains and Google matrix for the analysis of the hyperlink networks of 24 Wikipedia language editions, and rank all their articles by PageRank, 2DRank and CheiRank algorithms. Using automatic extraction of people names, we obtain the top 100 historical figures, for each edition and for each algorithm. We investigate their spatial, temporal, and gender distributions in dependence on their cultural origins. Our study demonstrates not only the existence of skewness with local figures, mainly recognized only in their own cultures, but also the existence of global historical figures appearing in a large number of editions. By determining the birth time and place of these persons, we perform an analysis of the evolution of such figures through 35 centuries of human history for each language, thus recovering interactions and entanglement of cultures over time. We also obtain the distributions of historical figures over world countries, highlighting geographical aspects of cross-cultural links. Considering historical figures who appear in multiple editions as interactions between cultures, we construct a network of cultures and identify the most influential cultures according to this network.

  19. Interactions of Cultures and Top People of Wikipedia from Ranking of 24 Language Editions

    Science.gov (United States)

    Eom, Young-Ho; Aragón, Pablo; Laniado, David; Kaltenbrunner, Andreas; Vigna, Sebastiano; Shepelyansky, Dima L.

    2015-01-01

    Wikipedia is a huge global repository of human knowledge that can be leveraged to investigate intertwinements between cultures. With this aim, we apply methods of Markov chains and Google matrix for the analysis of the hyperlink networks of 24 Wikipedia language editions, and rank all their articles by PageRank, 2DRank and CheiRank algorithms. Using automatic extraction of people names, we obtain the top 100 historical figures, for each edition and for each algorithm. We investigate their spatial, temporal, and gender distributions in dependence on their cultural origins. Our study demonstrates not only the existence of skewness with local figures, mainly recognized only in their own cultures, but also the existence of global historical figures appearing in a large number of editions. By determining the birth time and place of these persons, we perform an analysis of the evolution of such figures through 35 centuries of human history for each language, thus recovering interactions and entanglement of cultures over time. We also obtain the distributions of historical figures over world countries, highlighting geographical aspects of cross-cultural links. Considering historical figures who appear in multiple editions as interactions between cultures, we construct a network of cultures and identify the most influential cultures according to this network. PMID:25738291

  20. Word recognition strategies amongst isiXhosa/English bilingual learners: The interaction of orthography and language of learning and teaching

    Directory of Open Access Journals (Sweden)

    Tracy Probert

    2016-05-01

    Full Text Available Word recognition is a major component of fluent reading and involves an interaction of language structure, orthography, and metalinguistic skills. This study examined reading strategies in isiXhosa and the transfer of these strategies to an additional language, English. IsiXhosa was chosen because of its agglutinative structure and conjunctive orthography. Data were collected at two schools which differed with regard to their language of learning and teaching (LoLT) in the first three years of schooling: isiXhosa and English, respectively. Participants completed a word- and pseudo-word reading-aloud task in each of two languages which hypothetically impose different cognitive demands. Skills transfer occurs to a limited extent when the language of first literacy uses a transparent orthography, but is less predictable when the language of first literacy uses an opaque orthography. We show that although there is transfer of word recognition strategies from transparent to deep orthographies, felicitous transfer is limited to sublexical strategies; infelicitous transfer also occurs when lexical strategies are transferred in problematic ways. The results support the contention that reading strategies and cognitive skills are fine-tuned to particular languages. This study emphasises that literacies in different languages present readers with different structural puzzles which require language-particular suites of cognitive reading skills. Keywords: Foundation phase education; multilingual education; reading; word recognition; automaticity; isiXhosa reading

  1. Word recognition strategies amongst isiXhosa/English bilingual learners: The interaction of orthography and language of learning and teaching

    Directory of Open Access Journals (Sweden)

    Tracy Probert

    2016-03-01

    Full Text Available Word recognition is a major component of fluent reading and involves an interaction of language structure, orthography, and metalinguistic skills. This study examined reading strategies in isiXhosa and the transfer of these strategies to an additional language, English. IsiXhosa was chosen because of its agglutinative structure and conjunctive orthography. Data were collected at two schools which differed with regard to their language of learning and teaching (LoLT) in the first three years of schooling: isiXhosa and English, respectively. Participants completed a word- and pseudo-word reading-aloud task in each of two languages which hypothetically impose different cognitive demands. Skills transfer occurs to a limited extent when the language of first literacy uses a transparent orthography, but is less predictable when the language of first literacy uses an opaque orthography. We show that although there is transfer of word recognition strategies from transparent to deep orthographies, felicitous transfer is limited to sublexical strategies; infelicitous transfer also occurs when lexical strategies are transferred in problematic ways. The results support the contention that reading strategies and cognitive skills are fine-tuned to particular languages. This study emphasises that literacies in different languages present readers with different structural puzzles which require language-particular suites of cognitive reading skills. Keywords: Foundation phase education; multilingual education; reading; word recognition; automaticity; isiXhosa reading

  2. Recording voiceover the spoken word in media

    CERN Document Server

    Blakemore, Tom

    2015-01-01

    The only book on the market to specifically address its audience, Recording Voiceover is the comprehensive guide for engineers looking to understand the aspects of capturing the spoken word. Discussing all phases of the recording session, Recording Voiceover addresses everything from microphone recommendations for voice recording to pre-production considerations, including setting up the studio, working with and directing the voice talent, and strategies for reducing or eliminating distracting noise elements found in human speech. Recording Voiceover features in-depth, specific recommendations f…

  3. Mobile Information Access with Spoken Query Answering

    DEFF Research Database (Denmark)

    Brøndsted, Tom; Larsen, Henrik Legind; Larsen, Lars Bo

    2006-01-01

    This paper addresses the problem of information and service accessibility in mobile devices with limited resources. A solution is developed and tested through a prototype that applies state-of-the-art Distributed Speech Recognition (DSR) and knowledge-based Information Retrieval (IR) processing… for spoken query answering. For the DSR part, a configurable DSR system is implemented on the basis of the ETSI-DSR advanced front-end and the SPHINX IV recognizer. For the knowledge-based IR part, a distributed system solution is developed for fast retrieval of the most relevant documents, with a text…
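
    The two-stage architecture this record describes (a speech front-end handing a recognized query to a retrieval back-end) can be sketched with a toy TF-IDF ranker standing in for the knowledge-based IR stage. This is a minimal illustration, not the paper's system: the actual prototype uses the ETSI-DSR advanced front-end and the SPHINX IV recognizer, which are out of scope here, and `tfidf_rank` is an invented name.

    ```python
    import math
    from collections import Counter

    def tfidf_rank(query, documents):
        """Rank documents by TF-IDF cosine similarity to a recognized query.

        A toy stand-in for the IR stage: in the real system the query string
        would arrive from the distributed speech recognizer.
        """
        tokenized = [doc.lower().split() for doc in documents]
        n = len(tokenized)
        # document frequency per term
        df = Counter()
        for toks in tokenized:
            df.update(set(toks))

        def idf(term):
            # smoothed inverse document frequency
            return math.log((1 + n) / (1 + df[term])) + 1.0

        def vec(tokens):
            tf = Counter(tokens)
            return {t: (c / len(tokens)) * idf(t) for t, c in tf.items()}

        def cosine(a, b):
            dot = sum(a[t] * b.get(t, 0.0) for t in a)
            na = math.sqrt(sum(v * v for v in a.values()))
            nb = math.sqrt(sum(v * v for v in b.values()))
            return dot / (na * nb) if na and nb else 0.0

        qv = vec(query.lower().split())
        scores = [(cosine(qv, vec(toks)), i) for i, toks in enumerate(tokenized)]
        return [i for s, i in sorted(scores, reverse=True)]
    ```

    For example, a recognized query "pizza restaurant" ranks a restaurant document above bus-schedule or weather documents.
    
    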

  4. Elementary School Students’ Spoken Activities and their Responses in Math Learning by Peer-Tutoring

    Directory of Open Access Journals (Sweden)

    Baiduri

    2017-04-01

    Students’ activities in the learning process are important indicators of the quality of that process; one such activity is spoken activity. This study analyzed elementary school students’ spoken activities and their responses during Math learning by peer-tutoring. A descriptive qualitative design with a case-study approach was used. The data were collected through observation, field notes, interviews, and a questionnaire administered to 24 fifth-graders of the First State Elementary School of Kunjang, Kediri, East Java, Indonesia. Four students were recruited as tutors, while the rest were subdivided into four groups. The data from the observation and questionnaire were analyzed descriptively and categorized on a scale from poor to excellent. The interview data were analyzed through the interactive model: data reduction, data display, and conclusion drawing. The findings showed that the tutors’ spoken activities, covering questioning, answering, explaining, discussing, and presenting, improved across three meetings and developed sharply overall. The spoken activities of the students in the groups were considered good, and there was a positive linear relationship between the tutors’ activity and their groups’ activities.

  5. The interactional management of ‘language difficulties’ at work – L2 strategies for responding to explicit inquiries about understanding

    DEFF Research Database (Denmark)

    Tranekjær, Louise

    2017-01-01

    ) of how employers in internship interviews orient to internship candidates as members of the category ‘second language speaker’, this paper examines the strategies employed by second language speakers for refuting suspected language difficulties. Inspired by the training method CARM (Stokoe, 2011; 2013…… communication by illuminating not only the interactional trajectories of inquiries about understanding but also the interactional resources available to second language speakers for effectively ensuring intersubjectivity.

  6. Language and literacy development of deaf and hard-of-hearing children: successes and challenges.

    Science.gov (United States)

    Lederberg, Amy R; Schick, Brenda; Spencer, Patricia E

    2013-01-01

    Childhood hearing loss presents challenges to language development, especially spoken language. In this article, we review existing literature on deaf and hard-of-hearing (DHH) children's patterns and trajectories of language as well as development of theory of mind and literacy. Individual trajectories vary significantly, reflecting access to early identification/intervention, advanced technologies (e.g., cochlear implants), and perceptually accessible language models. DHH children develop sign language in a similar manner as hearing children develop spoken language, provided they are in a language-rich environment. This occurs naturally for DHH children of deaf parents, who constitute 5% of the deaf population. For DHH children of hearing parents, sign language development depends on the age that they are exposed to a perceptually accessible 1st language as well as the richness of input. Most DHH children are born to hearing families who have spoken language as a goal, and such development is now feasible for many children. Some DHH children develop spoken language in bilingual (sign-spoken language) contexts. For the majority of DHH children, spoken language development occurs in either auditory-only contexts or with sign supports. Although developmental trajectories of DHH children with hearing parents have improved with early identification and appropriate interventions, the majority of children are still delayed compared with hearing children. These DHH children show particular weaknesses in the development of grammar. Language deficits and differences have cascading effects in language-related areas of development, such as theory of mind and literacy development.

  7. Tagalog for Beginners. PALI Language Texts: Philippines.

    Science.gov (United States)

    Ramos, Teresita V.; de Guzman, Videa

    This language textbook is designed for beginning students of Tagalog, the principal language spoken on the island of Luzon in the Philippines. The introduction discusses the history of Tagalog and certain features of the language. An explanation of the text is given, along with notes for the teacher. The text itself is divided into nine sections:…

  8. Approaches for Language Identification in Mismatched Environments

    Science.gov (United States)

    2016-09-08

    Shahan Nercessian, Pedro Torres-Carrasquillo, and Gabriel Martínez-Montes. Keywords: domain adaptation, unsupervised learning, deep neural networks, bottleneck features. Spoken language identification (LID) is… we consider the task of language identification in the context of mismatch conditions. Specifically, we address the issue of using unlabeled data in the…

  9. Audience Effects in American Sign Language Interpretation

    Science.gov (United States)

    Weisenberg, Julia

    2009-01-01

    There is a system of English mouthing during interpretation that appears to be the result of language contact between spoken language and signed language. English mouthing is a voiceless visual representation of words on a signer's lips produced concurrently with manual signs. It is a type of borrowing prevalent among English-dominant…

  10. Event Related Potential Study of Language Interaction in Bilingual Aphasia Patients.

    Science.gov (United States)

    Khachatryan, Elvira; Wittevrongel, Benjamin; De Keyser, Kim; De Letter, Miet; Hulle, Marc M Van

    2018-01-01

    interaction between pre- and post-morbid L2 proficiency, pre- and post-morbid L2 exposure, impairment and the presented stimulus (inter-lingual homographs). Our ERP study complements the usually adopted behavioral approach by providing new insights into language interactions on the level of individual linguistic components in bilingual patients with aphasia.

  11. Integrating natural language processing and web GIS for interactive knowledge domain visualization

    Science.gov (United States)

    Du, Fangming

    Recent years have seen a powerful shift towards data-rich environments throughout society. This has extended to a change in how the artifacts and products of scientific knowledge production can be analyzed and understood. Bottom-up approaches are on the rise that combine access to huge amounts of academic publications with advanced computer graphics and data processing tools, including natural language processing. Knowledge domain visualization is one of those multi-technology approaches, with its aim of turning domain-specific human knowledge into highly visual representations in order to better understand the structure and evolution of domain knowledge. For example, network visualizations built from co-author relations contained in academic publications can provide insight on how scholars collaborate with each other in one or multiple domains, and visualizations built from the text content of articles can help us understand the topical structure of knowledge domains. These knowledge domain visualizations need to support interactive viewing and exploration by users. Such spatialization efforts are increasingly looking to geography and GIS as a source of metaphors and practical technology solutions, even when non-georeferenced information is managed, analyzed, and visualized. When it comes to deploying spatialized representations online, web mapping and web GIS can provide practical technology solutions for interactive viewing of knowledge domain visualizations, from panning and zooming to the overlay of additional information. This thesis presents a novel combination of advanced natural language processing - in the form of topic modeling - with dimensionality reduction through self-organizing maps and the deployment of web mapping/GIS technology towards intuitive, GIS-like, exploration of a knowledge domain visualization. A complete workflow is proposed and implemented that processes any corpus of input text documents into a map form and leverages a web
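
    The core of the workflow this thesis describes is spatialization: reducing high-dimensional text representations to positions on a 2-D, map-like grid. A tiny pure-Python self-organizing map can illustrate the idea. This is a schematic sketch under loose assumptions, not the thesis's implementation: raw bag-of-words vectors stand in for topic modeling, the web-mapping deployment is omitted, and all function names are invented for the sketch.

    ```python
    import math
    import random
    from collections import Counter

    def term_vectors(docs, vocab):
        """Normalized bag-of-words vectors over a fixed vocabulary
        (a crude stand-in for topic-model document representations)."""
        out = []
        for doc in docs:
            counts = Counter(doc.lower().split())
            total = sum(counts.values()) or 1
            out.append([counts[w] / total for w in vocab])
        return out

    def best_cell(grid, v):
        """Best-matching unit: the grid cell whose weight vector is closest to v."""
        best, best_d = (0, 0), float("inf")
        for r, row in enumerate(grid):
            for c, w in enumerate(row):
                d = sum((a - b) ** 2 for a, b in zip(w, v))
                if d < best_d:
                    best, best_d = (r, c), d
        return best

    def train_som(vectors, rows=2, cols=2, epochs=200, lr=0.5, seed=0):
        """Train a tiny self-organizing map: each cell's weights are pulled
        toward the inputs that map to it and, more weakly, to its neighbors."""
        rng = random.Random(seed)
        dim = len(vectors[0])
        grid = [[[rng.random() for _ in range(dim)] for _ in range(cols)]
                for _ in range(rows)]
        for epoch in range(epochs):
            alpha = lr * (1 - epoch / epochs)                    # decaying rate
            radius = max(rows, cols) / 2 * (1 - epoch / epochs)  # shrinking hood
            for v in vectors:
                br, bc = best_cell(grid, v)
                for r in range(rows):
                    for c in range(cols):
                        d = math.hypot(r - br, c - bc)
                        if d <= radius + 1e-9:
                            h = math.exp(-d * d / (2 * (radius + 1e-9) ** 2))
                            w = grid[r][c]
                            for i in range(dim):
                                w[i] += alpha * h * (v[i] - w[i])
        return grid
    ```

    After training, `best_cell` assigns each document a grid coordinate, which a web-mapping layer could then render as a pannable, zoomable map.
    
    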

  12. Interactive language learning by robots: the transition from babbling to word forms.

    Science.gov (United States)

    Lyon, Caroline; Nehaniv, Chrystopher L; Saunders, Joe

    2012-01-01

    The advent of humanoid robots has enabled a new approach to investigating the acquisition of language, and we report on the development of robots able to acquire rudimentary linguistic skills. Our work focuses on early stages analogous to some characteristics of a human child of about 6 to 14 months, the transition from babbling to first word forms. We investigate one mechanism among many that may contribute to this process, a key factor being the sensitivity of learners to the statistical distribution of linguistic elements. As well as being necessary for learning word meanings, the acquisition of anchor word forms facilitates the segmentation of an acoustic stream through other mechanisms. In our experiments some salient one-syllable word forms are learnt by a humanoid robot in real-time interactions with naive participants. Words emerge from random syllabic babble through a learning process based on a dialogue between the robot and the human participant, whose speech is perceived by the robot as a stream of phonemes. Numerous ways of representing the speech as syllabic segments are possible. Furthermore, the pronunciation of many words in spontaneous speech is variable. However, in line with research elsewhere, we observe that salient content words are more likely than function words to have consistent canonical representations; thus their relative frequency increases, as does their influence on the learner. Variable pronunciation may contribute to early word form acquisition. The importance of contingent interaction in real-time between teacher and learner is reflected by a reinforcement process, with variable success. The examination of individual cases may be more informative than group results. Nevertheless, word forms are usually produced by the robot after a few minutes of dialogue, employing a simple, real-time, frequency dependent mechanism. This work shows the potential of human-robot interaction systems in studies of the dynamics of early language
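
    The frequency-dependent mechanism this abstract alludes to can be caricatured in a few lines: tally syllable frequencies across perceived utterances, boost those that receive contingent teacher feedback, and promote high-count syllables to word forms. This is an illustrative toy, not the paper's real-time dialogue system; `learn_word_forms`, `boost`, and `threshold` are invented for the sketch.

    ```python
    from collections import Counter

    def learn_word_forms(utterances, reinforced=(), boost=3, threshold=4):
        """Toy frequency-dependent word-form learning.

        ``utterances`` is a list of syllable sequences heard by the learner.
        Syllables heard often enough become candidate word forms; syllables
        met with contingent teacher feedback (``reinforced``) have their
        counts boosted, mimicking reinforcement in the robot-human dialogue.
        """
        counts = Counter()
        for utt in utterances:
            counts.update(utt)
        for syl in reinforced:
            counts[syl] += boost  # reinforcement raises salience
        return {s for s, c in counts.items() if c >= threshold}
    ```

    Consistent with the abstract's observation, frequent and reinforced content syllables cross the threshold while function words do not.
    
    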

  13. Semantic and phonological schema influence spoken word learning and overnight consolidation.

    Science.gov (United States)

    Havas, Viktória; Taylor, Jsh; Vaquero, Lucía; de Diego-Balaguer, Ruth; Rodríguez-Fornells, Antoni; Davis, Matthew H

    2018-01-01

    We studied the initial acquisition and overnight consolidation of new spoken words that resemble words in the native language (L1) or in an unfamiliar, non-native language (L2). Spanish-speaking participants learned the spoken forms of novel words in their native language (Spanish) or in a different language (Hungarian), which were paired with pictures of familiar or unfamiliar objects, or no picture. We thereby assessed, in a factorial way, the impact of existing knowledge (schema) on word learning by manipulating both semantic (familiar vs unfamiliar objects) and phonological (L1- vs L2-like novel words) familiarity. Participants were trained and tested with a 12-hr intervening period that included overnight sleep or daytime awake. Our results showed (1) benefits of sleep to recognition memory that were greater for words with L2-like phonology and (2) that learned associations with familiar but not unfamiliar pictures enhanced recognition memory for novel words. Implications for complementary systems accounts of word learning are discussed.

  14. MINORITY LANGUAGES IN ESTONIAN SEGREGATIVE LANGUAGE ENVIRONMENTS

    Directory of Open Access Journals (Sweden)

    Elvira Küün

    2011-01-01

    The goal of this project in Estonia was to determine what languages are spoken at home by students from the 2nd to the 5th year of basic school in Tallinn, the capital of Estonia. The same problem was also studied in other segregated regions of Estonia: Kohtla-Järve and Maardu. According to the database of the population census from the year 2000 (Estonian Statistics Executive Office's census 2000), there are representatives of 142 ethnic groups living in Estonia, speaking a total of 109 native languages. At the same time, the database does not state which languages are spoken at home. The material presented in this article belongs to the research topic “Home Language of Basic School Students in Tallinn” from the years 2007–2008, specifically financed and ordered by the Estonian Ministry of Education and Research (grant No. ETF 7065) in the framework of an international study called “Multilingual Project”. The study determined which language dominates in everyday use, the factors behind the choice of language for communication, and the preferred languages and language skills. This study reflects the actual trends of the language situation in these cities.

  15. Value of Web-based learning activities for nursing students who speak English as a second language.

    Science.gov (United States)

    Koch, Jane; Salamonson, Yenna; Du, Hui Yun; Andrew, Sharon; Frost, Steven A; Dunncliff, Kirstin; Davidson, Patricia M

    2011-07-01

    There is an increasing need to address the educational needs of students with English as a second language. The authors assessed the value of a Web-based activity to meet the needs of students with English as a second language in a bioscience subject. Using telephone contact, we interviewed 21 Chinese students, 24 non-Chinese students with English as a second language, and 7 native English-speaking students to identify the perception of the value of the intervention. Four themes emerged from the qualitative data: (1) Language is a barrier to achievement and affects self-confidence; (2) Enhancement intervention promoted autonomous learning; (3) Focusing on the spoken word increases interaction capacity and self-confidence; (4) Assessment and examination drive receptivity and sense of importance. Targeted strategies to promote language acculturation and acquisition are valued by students. Linking language acquisition skills to assessment tasks is likely to leverage improvements in competence. Copyright 2011, SLACK Incorporated.

  16. Notes from the Field: Lolak--Another Moribund Language of Indonesia, with Supporting Audio

    Science.gov (United States)

    Lobel, Jason William; Paputungan, Ade Tatak

    2017-01-01

    This paper consists of a short multimedia introduction to Lolak, a near-extinct Greater Central Philippine language traditionally spoken in three small communities on the island of Sulawesi in Indonesia. In addition to being one of the most underdocumented languages in the area, it is also spoken by one of the smallest native speaker populations…

  17. Expressing Identity through Lesser-Used Languages: Examples from the Irish and Galician Contexts

    Science.gov (United States)

    O'Rourke, Bernadette

    2005-01-01

    This paper looks at the degree and way in which lesser-used languages are used as expressions of identity, focusing specifically on two of Europe's lesser-used languages. The first is Irish, spoken in the Republic of Ireland and the second is Galician, spoken in the Autonomous Community of Galicia in the North-western part of Spain. The paper…

  18. Bridging the Gap: The Development of Appropriate Educational Strategies for Minority Language Communities in the Philippines

    Science.gov (United States)

    Dekker, Diane; Young, Catherine

    2005-01-01

    There are more than 6000 languages spoken by the 6 billion people in the world today--however, those languages are not evenly divided among the world's population--over 90% of people globally speak only about 300 majority languages--the remaining 5700 languages being termed "minority languages". These languages represent the…

  19. Teaching English as a "Second Language" in Kenya and the United States: Convergences and Divergences

    Science.gov (United States)

    Roy-Campbell, Zaline M.

    2015-01-01

    English is spoken in five countries as the native language and in numerous other countries as an official language and the language of instruction. In countries where English is the native language, it is taught to speakers of other languages as an additional language to enable them to participate in all domains of life of that country. In many…

  20. Electronic interaction in the teaching of Portuguese as an additional foreign language: the optimization of beginners’ learning experience

    Directory of Open Access Journals (Sweden)

    Lucia Rottava

    2013-01-01

    This article discusses language learners’ interaction in a virtual learning environment, contemplating two interconnected aspects: the possibilities of interaction through the use of electronic resources and the configuration of teaching and learning of Portuguese as an additional language (AL). The objective is to analyse Portuguese-AL learners’ oral and written production. The data analysed were produced in a virtual environment. The results suggest several aspects that electronic resources optimise: genuine interaction; situations in which learners take more risks whilst producing written and spoken work in the FL, connecting to what happens in everyday life; learning that develops beyond the context of the classroom, since learners receive feedback from the tutor about language forms and uses beyond the syllabus of the course; and the granting of autonomy to learners, as they can systematise their learning process.