In this paper we describe a preliminary, work-in-progress Spoken Language Understanding Software (SLUS) with tailored feedback options, which uses an interactive spoken language interface to teach Iraqi Arabic and culture to second language learners. The SLUS analyzes input speech by the second language learner and grades it for correct pronunciation in terms of supra-segmental and rudimentary segmental errors such as missing consonants. We evaluated this software on training data with the help of two native speakers, and found that the software recorded an accuracy of around 70% in the law and order domain. For future work, we plan to develop similar systems for multiple languages.
Issues involved in teaching and assessing communicative competence are identified and applied to adolescent native English speakers with low levels of academic achievement. A distinction is drawn between transactional versus interactional speech, short versus long speaking turns, and spoken language influenced or not influenced by written…
Jacques Melitz; Farid Toubal
We construct new series for common native language and common spoken language for 195 countries, which we use together with series for common official language and linguistic proximity in order to draw inferences about (1) the aggregate impact of all linguistic factors on bilateral trade, (2) whether the linguistic influences come from ethnicity and trust or ease of communication, and (3) insofar as they come from ease of communication, to what extent translation and interpreters play a role...
Bedny, Marina; Richardson, Hilary; Saxe, Rebecca
Plasticity in the visual cortex of blind individuals provides a rare window into the mechanisms of cortical specialization. In the absence of visual input, occipital ("visual") brain regions respond to sound and spoken language. Here, we examined the time course and developmental mechanism of this plasticity in blind children. Nineteen blind and 40 sighted children and adolescents (4-17 years old) listened to stories and two auditory control conditions (unfamiliar foreign speech, and music). We find that "visual" cortices of young blind (but not sighted) children respond to sound. Responses to nonlanguage sounds increased between the ages of 4 and 17. By contrast, occipital responses to spoken language were maximal by age 4 and were not related to Braille learning. These findings suggest that occipital plasticity for spoken language is independent of plasticity for Braille and for sound. We conclude that in the absence of visual input, spoken language colonizes the visual system during brain development. Our findings suggest that early in life, human cortex has a remarkably broad computational capacity. The same cortical tissue can take on visual perception and language functions. Studies of plasticity provide key insights into how experience shapes the human brain. The "visual" cortex of adults who are blind from birth responds to touch, sound, and spoken language. To date, all existing studies have been conducted with adults, so little is known about the developmental trajectory of plasticity. We used fMRI to study the emergence of "visual" cortex responses to sound and spoken language in blind children and adolescents. We find that "visual" cortex responses to sound increase between 4 and 17 years of age. By contrast, responses to spoken language are present by 4 years of age and are not related to Braille-learning. These findings suggest that, early in development, human cortex can take on a strikingly wide range of functions. 
A key problem in spoken language identification (LID) is to design effective representations which are specific to language information. For example, in recent years, representations based on both phonotactic and acoustic features have proven their effectiveness for LID. Although advances in machine learning have led to significant improvements, LID performance is still lacking, especially for short duration speech utterances. With the hypothesis that language information is weak and represented only latently in speech, and is largely dependent on the statistical properties of the speech content, existing representations may be insufficient. Furthermore, they may be susceptible to the variations caused by different speakers, specific content of the speech segments, and background noise. To address this, we propose using Deep Bottleneck Features (DBF) for spoken LID, motivated by the success of Deep Neural Networks (DNN) in speech recognition. We show that DBFs can form a low-dimensional compact representation of the original inputs with a powerful descriptive and discriminative capability. To evaluate the effectiveness of this, we design two acoustic models, termed DBF-TV and parallel DBF-TV (PDBF-TV), using a DBF-based i-vector representation for each speech utterance. Results on the NIST language recognition evaluation 2009 (LRE09) show significant improvements over state-of-the-art systems. By fusing the output of phonotactic and acoustic approaches, we achieve an EER of 1.08%, 1.89% and 7.01% for 30 s, 10 s and 3 s test utterances respectively. Furthermore, various DBF configurations have been extensively evaluated, and an optimal system is proposed.
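The fusion of phonotactic and acoustic subsystem outputs mentioned in this abstract can be sketched at score level. The function, weight, and toy scores below are illustrative assumptions, not the authors' actual fusion scheme:

```python
def fuse_scores(phonotactic, acoustic, w=0.5):
    """Linearly fuse per-language scores from two LID subsystems.

    Each argument maps a language code to that subsystem's score for the
    utterance (e.g., a log-likelihood); `w` weights the phonotactic side.
    Returns the winning language and the fused score table.
    """
    langs = sorted(set(phonotactic) & set(acoustic))
    fused = {lang: w * phonotactic[lang] + (1 - w) * acoustic[lang]
             for lang in langs}
    best = max(fused, key=fused.get)
    return best, fused

# Toy scores: the acoustic subsystem slightly prefers French,
# but the phonotactic evidence for English dominates after fusion.
best, fused = fuse_scores({"en": -1.2, "fr": -2.6}, {"en": -1.0, "fr": -0.9})
```

In practice the weight would be tuned on a development set, and calibrated scores (not raw likelihoods) would be fused before computing the EER.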
Spoken language corpora for the nine official African languages of South Africa. Jens Allwood, AP Hendrikse. Abstract. In this paper we give an outline of a corpus planning project which aims to develop linguistic resources for the nine official African languages of South Africa in the form of corpora, more specifically spoken ...
Crowe, Kathryn; McLeod, Sharynne
The purpose of this research was to investigate factors that influence professionals' guidance of parents of children with hearing loss regarding spoken language multilingualism and spoken language choice. Sixteen professionals who provide services to children and young people with hearing loss completed an online survey, rating the importance of…
Nicodemus, Brenda; Emmorey, Karen
Spoken language (unimodal) interpreters often prefer to interpret from their non-dominant language (L2) into their native language (L1). Anecdotally, signed language (bimodal) interpreters express the opposite bias, preferring to interpret from L1 (spoken language) into L2 (signed language). We conducted a large survey study ("N" =…
Mast, Marion; Maier, Elisabeth; Schmitz, Birte
This report describes how spoken language turns are segmented into utterances in the framework of the verbmobil project. The problem of segmenting turns is directly related to the task of annotating a discourse with dialogue act information: an utterance can be characterized as a stretch of dialogue that is attributed one dialogue act. Unfortunately, this rule in many cases is insufficient and many doubtful cases remain. We tried to at least reduce the number of unclear cases by providing a n...
This article addresses key issues and considerations for teachers wanting to incorporate spoken grammar activities into their own teaching and also focuses on six common features of spoken grammar, with practical activities and suggestions for teaching them in the language classroom. The hope is that this discussion of spoken grammar and its place…
Marshall, C. R.; Jones, A.; Fastelli, A.; Atkinson, J.; Botting, N.; Morgan, G.
Background: Deafness has an adverse impact on children's ability to acquire spoken languages. Signed languages offer a more accessible input for deaf children, but because the vast majority are born to hearing parents who do not sign, their early exposure to sign language is limited. Deaf children as a whole are therefore at high risk of language…
Assessing spoken-language educational interpreting: Measuring up and measuring right. Lenelle Foster, Adriaan Cupido. Abstract. This article primarily presents a critical evaluation of the development and refinement of the assessment instrument used to formally assess the spoken-language educational interpreters at ...
Apr 12, 2018 ... sound of that language. These language-specific properties can be exploited to identify a spoken language reliably. Automatic language identification has emerged as a prominent research area in Indian languages processing. People from different regions of India speak around 800 different languages.
Bates, Madeleine; Ellard, Dan; Peterson, Pat; Shaked, Varda
... In an effort to demonstrate the relevance of SIS technology to real-world military applications, BBN has undertaken the task of providing a spoken language interface to DART, a system for military...
The objective of this effort was to develop a prototype, hand-held or body-mounted spoken language translator to assist military and law enforcement personnel in interacting with non-English-speaking people...
This article introduces the first Spoken Language Identification system developed to distinguish among all eleven of South Africa’s official languages. The PPR-LM (Parallel Phoneme Recognition followed by Language Modeling) architecture...
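The PPR-LM idea — decode an utterance into phone sequences, score each sequence against per-language phonotactic models, and pick the language with the highest score — can be illustrated with a deliberately tiny sketch. The phone sequences and languages below are invented; a real system runs several phone recognizers in parallel and uses far larger n-gram models:

```python
import math
from collections import defaultdict

def train_bigram_lm(phone_seqs):
    """Train an add-one-smoothed bigram language model over phone tokens
    and return a function that scores a phone sequence in log space."""
    counts = defaultdict(lambda: defaultdict(int))
    vocab = set()
    for seq in phone_seqs:
        for a, b in zip(["<s>"] + seq, seq + ["</s>"]):
            counts[a][b] += 1
            vocab.update([a, b])
    V = len(vocab)

    def logprob(seq):
        lp = 0.0
        for a, b in zip(["<s>"] + seq, seq + ["</s>"]):
            lp += math.log((counts[a][b] + 1) / (sum(counts[a].values()) + V))
        return lp

    return logprob

# One phone recognizer's output is scored against each language's LM;
# the PPR-LM decision is the language whose LM scores it highest.
lm_en = train_bigram_lm([["h", "e", "l", "o"], ["h", "e", "l", "p"]])
lm_zu = train_bigram_lm([["s", "a", "w", "u"], ["b", "o", "n", "a"]])
utterance = ["h", "e", "l", "o"]
best = max([("en", lm_en(utterance)), ("zu", lm_zu(utterance))],
           key=lambda t: t[1])[0]
```

The smoothing floor is what keeps unseen phone bigrams from zeroing out a score; the choice of add-one smoothing here is a simplification.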
... assessment instrument used to formally assess the spoken-language educational interpreters at Stellenbosch University (SU). Research ... Is the interpreter suited to the module? Is the interpreter easier to follow? Technical: microphone technique, lag, completeness, language use, vocabulary, role, personal objectives ...
This article examines the impact of the hegemony of English, as a common lingua franca, referred to as a global language, on the indigenous languages spoken in Nigeria. Since English, through the British political imperialism and because of the economic supremacy of English dominated countries, has assumed the ...
Huettig, Falk; Brouwer, Susanne
It is now well established that anticipation of upcoming input is a key characteristic of spoken language comprehension. It has also frequently been observed that literacy influences spoken language processing. Here, we investigated whether anticipatory spoken language processing is related to individuals' word reading abilities. Dutch adults with dyslexia and a control group participated in two eye-tracking experiments. Experiment 1 was conducted to assess whether adults with dyslexia show the typical language-mediated eye gaze patterns. Eye movements of both adults with and without dyslexia closely replicated earlier research: spoken language is used to direct attention to relevant objects in the environment in a closely time-locked manner. In Experiment 2, participants received instructions (e.g., 'Kijk naar de(COM) afgebeelde piano(COM)', look at the displayed piano) while viewing four objects. Articles (Dutch 'het' or 'de') were gender marked such that the article agreed in gender only with the target, and thus, participants could use gender information from the article to predict the target object. The adults with dyslexia anticipated the target objects but much later than the controls. Moreover, participants' word reading scores correlated positively with their anticipatory eye movements. We conclude by discussing the mechanisms by which reading abilities may influence predictive language processing. Copyright © 2015 John Wiley & Sons, Ltd.
Le Bigot, Ludovic; Terrier, Patrice; Jamet, Eric; Botherel, Valerie; Rouet, Jean-Francois
The aim of the study was to determine the influence of textual feedback on the content and outcome of spoken interaction with a natural language dialogue system. More specifically, the assumption that textual feedback could disrupt spoken interaction was tested in a human-computer dialogue situation. In total, 48 adult participants, familiar with the system, had to find restaurants based on simple or difficult scenarios using a real natural language service system in a speech-only (phone), speech plus textual dialogue history (multimodal) or text-only (web) modality. The linguistic contents of the dialogues differed as a function of modality, but were similar whether the textual feedback was included in the spoken condition or not. These results add to burgeoning research efforts on multimodal feedback, in suggesting that textual feedback may have little or no detrimental effect on information searching with a real system. STATEMENT OF RELEVANCE: The results suggest that adding textual feedback to interfaces for human-computer dialogue could enhance spoken interaction rather than create interference. The literature currently suggests that adding textual feedback to tasks that depend on the visual sense benefits human-computer interaction. The addition of textual output when the spoken modality is heavily taxed by the task was investigated.
... the carefully selected training data used to construct the system initially. The authors investigated the process of porting a Spoken Language Identification (S-LID) system to a new environment and describe methods to prepare it for more effective use...
Parisse, C; Le Normand, M T
The use of computer tools has led to major advances in the study of spoken language corpora. One area that has shown particular progress is the study of child language development. Although it is now easy to lexically tag every word in a spoken language corpus, one still has to choose between numerous ambiguous forms, especially with languages such as French or English, where more than 70% of words are ambiguous. Computational linguistics can now provide a fully automatic disambiguation of lexical tags. The tool presented here (POST) can tag and disambiguate a large text in a few seconds. This tool complements systems dealing with language transcription and suggests further theoretical developments in the assessment of the status of morphosyntax in spoken language corpora. The program currently works for French and English, but it can be easily adapted for use with other languages. The analysis and computation of a corpus produced by normal French children 2-4 years of age, as well as of a sample corpus produced by French SLI children, are given as examples.
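The abstract does not specify how POST resolves ambiguous forms; to illustrate what automatic disambiguation of lexical tags involves, here is a minimal Viterbi-style sketch over a hand-made bigram tag-transition table. All tag names, words, and probabilities are invented for illustration:

```python
import math

def disambiguate(words, candidates, trans):
    """Pick one tag per word by maximizing summed bigram transition
    log-probabilities (a Viterbi-style search over ambiguous tags).
    Unseen transitions fall back to a small floor probability."""
    FLOOR = math.log(1e-6)
    # paths: tag -> (best score, best tag sequence ending in that tag)
    paths = {"<s>": (0.0, [])}
    for word in words:
        new_paths = {}
        for tag in candidates[word]:
            score, seq = max(
                ((s + trans.get((prev, tag), FLOOR), q)
                 for prev, (s, q) in paths.items()),
                key=lambda t: t[0],
            )
            new_paths[tag] = (score, seq + [tag])
        paths = new_paths
    return max(paths.values(), key=lambda t: t[0])[1]

# French "la porte": "la" is determiner or pronoun, "porte" noun or verb;
# the transition table makes determiner + noun the most plausible parse.
trans = {("<s>", "DET"): math.log(0.6), ("<s>", "PRON"): math.log(0.3),
         ("DET", "NOUN"): math.log(0.7), ("DET", "VERB"): math.log(0.05)}
tags = disambiguate(["la", "porte"],
                    {"la": ["DET", "PRON"], "porte": ["NOUN", "VERB"]}, trans)
```

A production tagger would also weight lexical emission probabilities and train the tables from a corpus rather than listing them by hand.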
Jungheim, M; Miller, S; Kühn, D; Ptok, M
In order to acquire language, children require speech input, and the prosody of that input plays an important role. In most cultures, adults modify their code when communicating with children. Compared to normal speech, this code differs especially with regard to prosody. For this review, a selective literature search in PubMed and Scopus was performed. Prosodic characteristics are a key feature of spoken language. By analysing prosodic features, children gain knowledge about underlying grammatical structures. Child-directed speech (CDS) is modified in a way that meaningful sequences are highlighted acoustically so that important information can be extracted from the continuous speech flow more easily. CDS is said to enhance the representation of linguistic signs. Taking into consideration what has previously been described in the literature regarding the perception of suprasegmentals, CDS seems to be able to support language acquisition due to the correspondence of prosodic and syntactic units. However, no findings have been reported indicating that the linguistically reduced CDS could hinder first language acquisition.
Locke, John L.
A major synthesis of the latest research on early language acquisition, this book explores what gives infants the remarkable capacity to progress from babbling to meaningful sentences, and what inclines a child to speak. The book examines the neurological, perceptual, social, and linguistic aspects of language acquisition in young children, from…
The Prosodic Parallelism hypothesis claims that adjacent prosodic categories prefer identical branching of internal adjacent constituents. According to Wiese and Speyer (2015), this preference implies that feet contained in the same phonological phrase display either binary or unary branching, but not different types of branching. The seemingly free schwa-zero alternations at the end of some words in German make it possible to test this hypothesis. The hypothesis was successfully tested by conducting a corpus study which used large-scale bodies of written German. As some open questions remain, and as it is unclear whether Prosodic Parallelism is valid for the spoken modality as well, the present study extends this inquiry to spoken German. As in the previous study, the results of a corpus analysis recruiting a variety of linguistic constructions are presented. The Prosodic Parallelism hypothesis can be demonstrated to be valid for spoken German as well as for written German. The paper thus contributes to the question of whether prosodic preferences are similar between the spoken and written modes of a language. Some consequences of the results for the production of language are discussed.
... rates when no Japanese acoustic models are constructed. An increasing amount of Japanese training data is used to train the language classifier of an English-only (E), an English-French (EF), and an English-French-Portuguese PPR system. ... Because of their role as world languages that are widely spoken in Africa, our initial LID system was designed to distinguish between English, French and Portuguese. We therefore trained phone recognizers and language...
Miller, Jon F.; Andriacchi, Karen; Nockerts, Ann
Purpose: This tutorial discusses the importance of language sample analysis and how Systematic Analysis of Language Transcripts (SALT) software can be used to simplify the process and effectively assess the spoken language production of adolescents. Method: Over the past 30 years, thousands of language samples have been collected from typical…
Davidson, Kathryn; Lillo-Martin, Diane; Chen Pichler, Deborah
Bilingualism is common throughout the world, and bilingual children regularly develop into fluently bilingual adults. In contrast, children with cochlear implants (CIs) are frequently encouraged to focus on a spoken language to the exclusion of sign language. Here, we investigate the spoken English language skills of 5 children with CIs who also have deaf signing parents, and so receive exposure to a full natural sign language (American Sign Language, ASL) from birth, in addition to spoken En...
Emmorey, Karen; McCullough, Stephen; Mehta, Sonya; Grabowski, Thomas J.
To investigate the impact of sensory-motor systems on the neural organization for language, we conducted an H2(15)O PET study of sign and spoken word production (picture-naming) and an fMRI study of sign and audio-visual spoken language comprehension (detection of a semantically anomalous sentence) with hearing bilinguals who are native users of American Sign Language (ASL) and English. Directly contrasting speech and sign production revealed greater activation in bilateral parietal cortex for signing, while speaking resulted in greater activation in bilateral superior temporal cortex (STC) and right frontal cortex, likely reflecting auditory feedback control. Surprisingly, the language production contrast revealed a relative increase in activation in bilateral occipital cortex for speaking. We speculate that greater activation in visual cortex for speaking may actually reflect cortical attenuation when signing, which functions to distinguish self-produced from externally generated visual input. Directly contrasting speech and sign comprehension revealed greater activation in bilateral STC for speech and greater activation in bilateral occipital-temporal cortex for sign. Sign comprehension, like sign production, engaged bilateral parietal cortex to a greater extent than spoken language. We hypothesize that posterior parietal activation in part reflects processing related to spatial classifier constructions in ASL and that anterior parietal activation may reflect covert imitation that functions as a predictive model during sign comprehension. The conjunction analysis for comprehension revealed that both speech and sign bilaterally engaged the inferior frontal gyrus (with more extensive activation on the left) and the superior temporal sulcus, suggesting an invariant bilateral perisylvian language system. We conclude that surface-level differences between sign and spoken languages should not be dismissed and are critical for understanding the neurobiology of language.
Spoken language understanding (SLU) is an emerging field between speech and language processing, investigating human/machine and human/human communication by leveraging technologies from signal processing, pattern recognition, machine learning and artificial intelligence. SLU systems are designed to extract the meaning from speech utterances, and their applications are vast, from voice search in mobile devices to meeting summarization, attracting interest from both commercial and academic sectors. Both human/machine and human/human communications can benefit from the application of SLU, using...
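The core SLU task — mapping a transcribed utterance to a meaning representation — is often framed as intent classification plus slot filling. The toy rule-based sketch below illustrates that framing only; the intents, patterns, and slot names are invented, and real systems use statistical or neural models rather than regular expressions:

```python
import re

# Hypothetical intent patterns with named groups serving as slots.
PATTERNS = [
    ("find_restaurant",
     re.compile(r"find (?:me )?an? (?P<cuisine>\w+) restaurant in (?P<city>\w+)")),
    ("set_alarm",
     re.compile(r"set an alarm for (?P<time>[\w:]+)")),
]

def understand(utterance):
    """Return an intent label and slot dictionary for an utterance,
    or an 'unknown' intent when nothing matches."""
    text = utterance.lower()
    for intent, pattern in PATTERNS:
        match = pattern.search(text)
        if match:
            return {"intent": intent, "slots": match.groupdict()}
    return {"intent": "unknown", "slots": {}}

result = understand("Find me an Italian restaurant in Ottawa")
```

Even this toy version shows why SLU sits between speech and language processing: the input is recognizer output (here assumed perfect), and the output is a structured meaning rather than text.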
Moeller, Aleidine J.; Theiler, Janine
Communicative approaches to teaching language have emphasized the centrality of oral proficiency in the language acquisition process, but research investigating oral proficiency has been surprisingly limited, yielding an incomplete understanding of spoken language development. This study investigated the development of spoken language at the high…
Office of English Language Acquisition, US Department of Education, 2015
The Office of English Language Acquisition (OELA) has synthesized key data on English learners (ELs) into two-page PDF sheets, by topic, with graphics, plus key contacts. The topics for this report on Asian/Pacific Islander languages spoken by English Learners (ELs) include: (1) Top 10 Most Common Asian/Pacific Islander Languages Spoken Among ELs:…
Nippold, Marilyn A; Frantz-Kaspar, Megan W; Vigeland, Laura M
In this study, we examined syntactic complexity in the spoken language samples of young adults. Its purpose was to contribute to the expanding knowledge base in later language development and to begin building a normative database of language samples that potentially could be used to evaluate young adults with known or suspected language impairment. Forty adults (mean age = 22 years, 10 months) with typical language development participated in an interview that consisted of 3 speaking tasks: a general conversation about common, everyday topics; a narrative retelling task that involved fables; and a question-and-answer, critical-thinking task about the fables. Each speaker's interview was audio-recorded, transcribed, broken into communication units, coded for main and subordinate clauses, entered into Systematic Analysis of Language Transcripts (Miller, Iglesias, & Nockerts, 2004), and analyzed for mean length of communication unit and clausal density. Both the narrative and critical-thinking tasks elicited significantly greater syntactic complexity than the conversational task. It was also found that syntactic complexity was significantly greater during the narrative task than the critical-thinking task. Syntactic complexity was best revealed by a narrative task that involved fables. The study offers benchmarks for language development during early adulthood.
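The two syntactic-complexity measures named above, mean length of communication unit and clausal density, are simple ratios once a transcript has been segmented and coded. The mini-sample below is fabricated to show the arithmetic, not drawn from the study's data:

```python
# Hypothetical coded sample: each communication unit (C-unit) is a list of
# clause codes, "M" for a main clause and "S" for a subordinate clause.
sample = [
    ["M"],            # e.g., "The dog barked."
    ["M", "S"],       # e.g., "He left because he was tired."
    ["M", "S", "S"],  # a unit with two subordinate clauses
]

# Word counts for the same three C-units (made up for illustration).
words_per_unit = [4, 7, 12]

# Mean length of communication unit (MLCU), in words per C-unit.
mlcu = sum(words_per_unit) / len(words_per_unit)

# Clausal density: total clauses (main + subordinate) per C-unit.
clausal_density = sum(len(unit) for unit in sample) / len(sample)
```

In SALT-style workflows these counts come from the coded transcript itself; the point here is only that both benchmarks reduce to per-unit averages.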
Carson, J; Walker, L A; Sanders, B J; Jones, J E; Weddell, J A; Tomlin, A M
The purpose of this study was to assess dmft, the number of decayed, missing (due to caries), and/or filled primary teeth, of English-speaking and non-English speaking patients of a hospital-based pediatric dental clinic under the age of 72 months to determine if native language is a risk marker for tooth decay. Records from an outpatient dental clinic which met the inclusion criteria were reviewed. Patient demographics and dmft score were recorded, and the patients were separated into three groups by the native language spoken by their parents: English, Spanish and all other languages. A total of 419 charts were assessed: 253 English-speaking, 126 Spanish-speaking, and 40 other native languages. After accounting for patient characteristics, dmft was significantly higher for the other language group than for the English-speaking (p < 0.05). Those patients under 72 months of age whose parents' native language is not English or Spanish have the highest risk for increased dmft when compared to English and Spanish speaking patients. Providers should consider taking additional time to educate patients and their parents, in their native language, on the importance of routine dental care and oral hygiene.
Barberà, Gemma; Zwets, Martine
In both signed and spoken languages, pointing serves to direct an addressee's attention to a particular entity. This entity may be either present or absent in the physical context of the conversation. In this article we focus on pointing directed to nonspeaker/nonaddressee referents in Sign Language of the Netherlands (Nederlandse Gebarentaal,…
Bradham, Tamala S.; Fonnesbeck, Christopher; Toll, Alice; Hecht, Barbara F.
Purpose: The purpose of the Listening and Spoken Language Data Repository (LSL-DR) was to address a critical need for a systemwide outcome data-monitoring program for the development of listening and spoken language skills in highly specialized educational programs for children with hearing loss highlighted in Goal 3b of the 2007 Joint Committee…
Thothathiri, Malathi; Snedeker, Jesse
Syntactic priming during language production is pervasive and well-studied. Hearing, reading, speaking or writing a sentence with a given structure increases the probability of subsequently producing the same structure, regardless of whether the prime and target share lexical content. In contrast, syntactic priming during comprehension has proven more elusive, fueling claims that comprehension is less dependent on general syntactic representations and more dependent on lexical knowledge. In three experiments we explored syntactic priming during spoken language comprehension. Participants acted out double-object (DO) or prepositional-object (PO) dative sentences while their eye movements were recorded. Prime sentences used different verbs and nouns than the target sentences. In target sentences, the onset of the direct-object noun was consistent with both an animate recipient and an inanimate theme, creating a temporary ambiguity in the argument structure of the verb (DO e.g., Show the horse the book; PO e.g., Show the horn to the dog). We measured the difference in looks to the potential recipient and the potential theme during the ambiguous interval. In all experiments, participants who heard DO primes showed a greater preference for the recipient over the theme than those who heard PO primes, demonstrating across-verb priming during online language comprehension. These results accord with priming found in production studies, indicating a role for abstract structural information during comprehension as well as production.
Lee, Sungjin; Noh, Hyungjong; Lee, Jonghoon; Lee, Kyusong; Lee, Gary Geunbae
Although there have been enormous investments into English education all around the world, not many differences have been made to change the English instruction style. Considering the shortcomings for the current teaching-learning methodology, we have been investigating advanced computer-assisted language learning (CALL) systems. This paper aims at summarizing a set of POSTECH approaches including theories, technologies, systems, and field studies and providing relevant pointers. On top of the state-of-the-art technologies of spoken dialog system, a variety of adaptations have been applied to overcome some problems caused by numerous errors and variations naturally produced by non-native speakers. Furthermore, a number of methods have been developed for generating educational feedback that help learners develop to be proficient. Integrating these efforts resulted in intelligent educational robots — Mero and Engkey — and virtual 3D language learning games, Pomy. To verify the effects of our approaches on students' communicative abilities, we have conducted a field study at an elementary school in Korea. The results showed that our CALL approaches can be enjoyable and fruitful activities for students. Although the results of this study bring us a step closer to understanding computer-based education, more studies are needed to consolidate the findings.
Hoog, B.E. de; Langereis, M.C.; Weerdenburg, M. van; Keuning, J.; Knoors, H.; Verhoeven, L.
BACKGROUND: Large variability in individual spoken language outcomes remains a persistent finding in the group of children with cochlear implants (CIs), particularly in their grammatical development. AIMS: In the present study, we examined the extent of delay in lexical and morphosyntactic spoken
Fitzpatrick, Elizabeth M; Hamel, Candyce; Stevens, Adrienne; Pratt, Misty; Moher, David; Doucet, Suzanne P; Neuss, Deirdre; Bernstein, Anita; Na, Eunjung
Permanent hearing loss affects 1 to 3 per 1000 children and interferes with typical communication development. Early detection through newborn hearing screening and hearing technology provide most children with the option of spoken language acquisition. However, no consensus exists on optimal interventions for spoken language development. To conduct a systematic review of the effectiveness of early sign and oral language intervention compared with oral language intervention only for children with permanent hearing loss. An a priori protocol was developed. Electronic databases (eg, Medline, Embase, CINAHL) from 1995 to June 2013 and gray literature sources were searched. Studies in English and French were included. Two reviewers screened potentially relevant articles. Outcomes of interest were measures of auditory, vocabulary, language, and speech production skills. All data collection and risk of bias assessments were completed and then verified by a second person. Grades of Recommendation, Assessment, Development, and Evaluation (GRADE) was used to judge the strength of evidence. Eleven cohort studies met inclusion criteria, of which 8 included only children with severe to profound hearing loss with cochlear implants. Language development was the most frequently reported outcome. Other reported outcomes included speech and speech perception. Several measures and metrics were reported across studies, and descriptions of interventions were sometimes unclear. Very limited, and hence insufficient, high-quality evidence exists to determine whether sign language in combination with oral language is more effective than oral language therapy alone. More research is needed to supplement the evidence base. Copyright © 2016 by the American Academy of Pediatrics.
Remington, Robert J.
Leaders within the Information Technology (IT) industry are expressing a general concern that the products used to deliver and manage today's communications network capabilities require far too much effort to learn and to use, even by highly skilled and increasingly scarce support personnel. The usability of network management systems must be significantly improved if they are to deliver the performance and quality of service needed to meet the ever-increasing demand for new Internet-based information and services. Fortunately, recent advances in spoken language (SL) interface technologies show promise for significantly improving the usability of most interactive IT applications, including network management systems. The emerging SL interfaces will allow users to communicate with IT applications through words and phrases -- our most familiar form of everyday communication. Recent advancements in SL technologies have resulted in new commercial products that are being operationally deployed at an increasing rate. The present paper describes a project aimed at the application of new SL interface technology for improving the usability of an advanced network management system. It describes several SL interface features that are being incorporated within an existing system with a modern graphical user interface (GUI), including 3-D visualization of network topology and network performance data. The rationale for using these SL interface features to augment existing user interfaces is presented, along with selected task scenarios to provide insight into how a SL interface will simplify the operator's task and enhance overall system usability.
Loucas, Tom; Riches, Nick; Baird, Gillian; Pickles, Andrew; Simonoff, Emily; Chandler, Susie; Charman, Tony
Spoken word recognition, during gating, appears intact in specific language impairment (SLI). This study used gating to investigate the process in adolescents with autism spectrum disorders plus language impairment (ALI). Adolescents with ALI, SLI, and typical language development (TLD), matched on nonverbal IQ, listened to gated words that varied…
To assess the effects of data-driven instruction (DDI) on spoken language outcomes of children with cochlear implants and hearing aids. Retrospective, matched-pairs comparison of post-treatment speech/language data of children who did and did not receive DDI. Private, spoken-language preschool for children with hearing loss. Eleven matched pairs of children with cochlear implants who attended the same spoken language preschool. Groups were matched for age of hearing device fitting, time in the program, degree of predevice fitting hearing loss, sex, and age at testing. Daily informal language samples were collected and analyzed over a 2-year period, per preschool protocol. Annual informal and formal spoken language assessments in articulation, vocabulary, and omnibus language were administered at the end of three time intervals: baseline, end of year one, and end of year two. The primary outcome measures were total raw score performance of spontaneous utterance sentence types and syntax element use as measured by the Teacher Assessment of Spoken Language (TASL). In addition, standardized assessments (the Clinical Evaluation of Language Fundamentals--Preschool Version 2 (CELF-P2), the Expressive One-Word Picture Vocabulary Test (EOWPVT), the Receptive One-Word Picture Vocabulary Test (ROWPVT), and the Goldman-Fristoe Test of Articulation 2 (GFTA2)) were also administered and compared with the control group. The DDI group demonstrated significantly higher raw scores on the TASL each year of the study. The DDI group also achieved statistically significant higher scores for total language on the CELF-P2 and expressive vocabulary on the EOWPVT, but not for articulation or receptive vocabulary. Post-hoc assessment revealed that 78% of the students in the DDI group achieved scores in the average range compared with 59% in the control group. The preliminary results of this study support further investigation of DDI to determine whether this method can consistently
Marchman, Virginia A; Fernald, Anne; Hurtado, Nereyda
Research using online comprehension measures with monolingual children shows that speed and accuracy of spoken word recognition are correlated with lexical development. Here we examined speech processing efficiency in relation to vocabulary development in bilingual children learning both Spanish and English (n = 26; age 2;6). Between-language associations were weak: vocabulary size in Spanish was uncorrelated with vocabulary in English, and children's facility in online comprehension in Spanish was unrelated to their facility in English. Instead, efficiency of online processing in one language was significantly related to vocabulary size in that language, after controlling for processing speed and vocabulary size in the other language. These links between efficiency of lexical access and vocabulary knowledge in bilinguals parallel those previously reported for Spanish and English monolinguals, suggesting that children's ability to abstract information from the input in building a working lexicon relates fundamentally to mechanisms underlying the construction of language.
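The within-language link reported above survives "controlling for" the other language's measures, which is the logic of a partial correlation. A minimal sketch of the residual method on synthetic data (the variable names, sample size, and effect sizes are invented for illustration, not taken from the study):

```python
import numpy as np

def partial_corr(x, y, covars):
    """Correlation between x and y after regressing out the covariates
    from each (residual method for partial correlation)."""
    Z = np.column_stack([np.ones(len(x))] + list(covars))
    rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    return float(np.corrcoef(rx, ry)[0, 1])

# Toy data: each language's vocabulary is driven by that language's
# processing speed; the two languages are generated independently,
# mirroring the weak between-language associations described above.
rng = np.random.default_rng(3)
speed_es = rng.normal(size=200)
vocab_es = 0.8 * speed_es + rng.normal(scale=0.5, size=200)
speed_en = rng.normal(size=200)
vocab_en = 0.8 * speed_en + rng.normal(scale=0.5, size=200)

# Strong within-language link, controlling for the other language
r_within = partial_corr(speed_es, vocab_es, [speed_en, vocab_en])
# Between-language link, controlling for within-language measures
r_between = partial_corr(speed_es, vocab_en, [vocab_es, speed_en])
```

With this generative setup, `r_within` is large while `r_between` hovers near zero, the same qualitative pattern the abstract reports.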
Leonard, Matthew K; Ferjan Ramirez, Naja; Torres, Christina; Hatrak, Marla; Mayberry, Rachel I; Halgren, Eric
We combined magnetoencephalography (MEG) and magnetic resonance imaging (MRI) to examine how sensory modality, language type, and language proficiency interact during two fundamental stages of word processing: (1) an early word encoding stage, and (2) a later supramodal lexico-semantic stage. Adult native English speakers who were learning American Sign Language (ASL) performed a semantic task for spoken and written English words, and ASL signs. During the early time window, written words evoked responses in left ventral occipitotemporal cortex, and spoken words in left superior temporal cortex. Signed words evoked activity in right intraparietal sulcus that was marginally greater than for written words. During the later time window, all three types of words showed significant activity in the classical left fronto-temporal language network, the first demonstration of such activity in individuals with so little second language (L2) instruction in sign. In addition, a dissociation between semantic congruity effects and overall MEG response magnitude for ASL responses suggested shallower and more effortful processing, presumably reflecting novice L2 learning. Consistent with previous research on non-dominant language processing in spoken languages, the L2 ASL learners also showed recruitment of right hemisphere and lateral occipital cortex. These results demonstrate that late lexico-semantic processing utilizes a common substrate, independent of modality, and that proficiency effects in sign language are comparable to those in spoken language.
Gagliardi, Ann C.
This dissertation presents an approach for a productive way forward in the study of language acquisition, sealing the rift between claims of an innate linguistic hypothesis space and powerful domain general statistical inference. This approach breaks language acquisition into its component parts, distinguishing the input in the environment from…
Shaw, Emily P.
This dissertation is an examination of gesture in two game nights: one in spoken English between four hearing friends and another in American Sign Language between four Deaf friends. Analyses of gesture have shown there exists a complex integration of manual gestures with speech. Analyses of sign language have implicated the body as a medium…
Atkinson, Mark; Kirby, Simon; Smith, Kenny
A learner's linguistic input is more variable if it comes from a greater number of speakers. Higher speaker input variability has been shown to facilitate the acquisition of phonemic boundaries, since data drawn from multiple speakers provides more information about the distribution of phonemes in a speech community. It has also been proposed that speaker input variability may have a systematic influence on individual-level learning of morphology, which can in turn influence the group-level characteristics of a language. Languages spoken by larger groups of people have less complex morphology than those spoken in smaller communities. While a mechanism by which the number of speakers could have such an effect is yet to be convincingly identified, differences in speaker input variability, which is thought to be larger in larger groups, may provide an explanation. By hindering the acquisition, and hence faithful cross-generational transfer, of complex morphology, higher speaker input variability may result in structural simplification. We assess this claim in two experiments which investigate the effect of such variability on language learning, considering its influence on a learner's ability to segment a continuous speech stream and acquire a morphologically complex miniature language. We ultimately find no evidence to support the proposal that speaker input variability influences language learning and so cannot support the hypothesis that it explains how population size determines the structural properties of language.
de Hoog, Brigitte E; Langereis, Margreet C; van Weerdenburg, Marjolijn; Keuning, Jos; Knoors, Harry; Verhoeven, Ludo
Large variability in individual spoken language outcomes remains a persistent finding in the group of children with cochlear implants (CIs), particularly in their grammatical development. In the present study, we examined the extent of delay in lexical and morphosyntactic spoken language levels of children with CIs as compared to those of a normative sample of age-matched children with normal hearing. Furthermore, the predictive value of auditory and verbal memory factors in the spoken language performance of implanted children was analyzed. Thirty-nine profoundly deaf children with CIs were assessed using a test battery including measures of lexical, grammatical, auditory and verbal memory tests. Furthermore, child-related demographic characteristics were taken into account. The majority of the children with CIs did not reach age-equivalent lexical and morphosyntactic language skills. Multiple linear regression analyses revealed that lexical spoken language performance in children with CIs was best predicted by age at testing, phoneme perception, and auditory word closure. The morphosyntactic language outcomes of the CI group were best predicted by lexicon, auditory word closure, and auditory memory for words. Qualitatively good speech perception skills appear to be crucial for lexical and grammatical development in children with CIs. Furthermore, strongly developed vocabulary skills and verbal memory abilities predict morphosyntactic language skills. Copyright © 2016 Elsevier Ltd. All rights reserved.
Hampton, L. H.; Kaiser, A. P.
Background: Although spoken-language deficits are not core to an autism spectrum disorder (ASD) diagnosis, many children with ASD do present with delays in this area. Previous meta-analyses have assessed the effects of intervention on reducing autism symptomatology, but have not determined if intervention improves spoken language. This analysis…
João Mendonça Correia
Spoken word recognition and production require fast transformations between acoustic, phonological and conceptual neural representations. Bilinguals perform these transformations in native and non-native languages, deriving unified semantic concepts from equivalent, but acoustically different words. Here we exploit this capacity of bilinguals to investigate input-invariant semantic representations in the brain. We acquired EEG data while Dutch subjects, highly proficient in English, listened to four monosyllabic and acoustically distinct animal words in both languages (e.g. 'paard'-'horse'). Multivariate pattern analysis (MVPA) was applied to identify EEG response patterns that discriminate between individual words within one language (within-language discrimination) and generalize meaning across two languages (across-language generalization). Furthermore, employing two EEG feature selection approaches, we assessed the contribution of temporal and oscillatory EEG features to our classification results. MVPA revealed that within-language discrimination was possible in a broad time-window (~50-620 ms after word onset), probably reflecting acoustic-phonetic and semantic-conceptual differences between the words. Most interestingly, significant across-language generalization was possible around 550-600 ms, suggesting the activation of common semantic-conceptual representations from the Dutch and English nouns. Both types of classification showed a strong contribution of oscillations below 12 Hz, indicating the importance of low-frequency oscillations in the neural representation of individual words and concepts. This study demonstrates the feasibility of MVPA to decode individual spoken words from EEG responses and to assess the spectro-temporal dynamics of their language-invariant semantic-conceptual representations. We discuss how this method and results could be relevant to track the neural mechanisms underlying conceptual encoding in
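Across-language generalization of the kind described above amounts to training a decoder on trials from one language and testing it on trials from the other: above-chance accuracy then reflects a shared (language-invariant) component. A toy sketch with simulated "EEG" patterns and a nearest-centroid decoder (all numbers, the signal model, and the decoder are illustrative assumptions, not the authors' pipeline):

```python
import numpy as np

rng = np.random.default_rng(42)
n_concepts, n_channels, n_trials = 4, 32, 30

# Simulated response patterns: each concept has a shared semantic component,
# each language adds its own acoustic component, plus per-trial noise.
semantic = rng.normal(0, 1, (n_concepts, n_channels))
acoustic = {lang: rng.normal(0, 1, (n_concepts, n_channels)) for lang in ("nl", "en")}

def trials(lang):
    """Generate noisy single-trial patterns and concept labels for one language."""
    X = np.vstack([
        semantic[c] + 0.7 * acoustic[lang][c]
        + rng.normal(0, 0.5, (n_trials, n_channels))
        for c in range(n_concepts)
    ])
    y = np.repeat(np.arange(n_concepts), n_trials)
    return X, y

X_nl, y_nl = trials("nl")   # Dutch training trials
X_en, y_en = trials("en")   # English test trials

# Nearest-centroid decoder: fit concept centroids on Dutch, test on English.
centroids = np.vstack([X_nl[y_nl == c].mean(axis=0) for c in range(n_concepts)])
pred = np.argmin(((X_en[:, None, :] - centroids[None]) ** 2).sum(-1), axis=1)
acc = (pred == y_en).mean()   # chance level is 1/4
```

Because the semantic component is shared across languages, the decoder transfers; removing it from the simulation would drop `acc` to chance.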
The paper explores similarities and differences in the strategies of structuring information at sentence level in spoken and written language, respectively. In particular, it is concerned with the position of the rheme in the sentence in the two different modalities of language, and with the application and correlation of the end-focus and the end-weight principles. The assumption is that while there is a general tendency in both written and spoken language to place the focus in or close to the final position, owing to the limitations imposed by short-term memory capacity (and possibly by other factors) and for the sake of easy processability, it may occasionally be more felicitous in spoken language to place the rhematic element in the initial position or at least close to the beginning of the sentence. The paper aims to identify differences in the function of selected grammatical structures in written and spoken language, respectively, and to point out circumstances under which initial focus is a convenient alternative to the usual end-focus principle.
Meakins, Felicity; Wigglesworth, Gillian
In situations of language endangerment, the ability to understand a language tends to persevere longer than the ability to speak it. As a result, the possibility of language revival remains high even when few speakers remain. Nonetheless, this potential requires that those with high levels of comprehension received sufficient input as children for…
The aim of this study was to describe the application of the Communicative Language Teaching (CLT) method to the learning of spoken recounts. The study examined qualitative data and describes phenomena occurring in the classroom. The data consisted of the students' behaviour and responses during spoken-recount lessons taught with the CLT method. The subjects were the 34 students of class X of SMA Negeri 1 Kuaro. Observations and interviews were conducted in order to collect data on the teaching of spoken recounts through three activities (presentation, role-play, and carrying out procedures). Among the findings was that CLT improved the students' speaking ability in recount lessons. Based on the improvement charts, it was concluded that the students' grammar, vocabulary, pronunciation, fluency, and performance all improved, meaning that their spoken-recount performance increased. Had the presentation been placed at the end of the sequence of activities, the students' spoken-recount performance would have been even better. In conclusion, the implementation of the CLT method and its three practices contributed to improving the students' speaking ability in recount lessons, and the CLT method moreover led them to have the courage to construct meaningful communication with confidence. Keywords: Communicative Language Teaching (CLT), recount, speaking, student responses
Nicholas, Johanna G.; Geers, Ann E.
Purpose: The major purpose of this study was to provide information about expected spoken language skills of preschool-age children who are deaf and who use a cochlear implant. A goal was to provide "benchmarks" against which those skills could be compared, for a given age at implantation. We also examined whether parent-completed…
Pisoni, David B.
This 21st annual progress report summarizes research activities on speech perception and spoken language processing carried out in the Speech Research Laboratory, Department of Psychology, Indiana University in Bloomington. As with previous reports, the goal is to summarize accomplishments during 1996 and 1997 and make them readily available. Some…
Carrero Pérez, Nubia Patricia
Task based learning (TBL) or Task based learning and teaching (TBLT) is a communicative approach widely applied in settings where English has been taught as a foreign language (EFL). It has been documented as greatly useful to improve learners' communication skills. This research intended to find the effect of tasks on students' spoken interaction…
McDuffie, Andrea; Machalicek, Wendy; Bullard, Lauren; Nelson, Sarah; Mello, Melissa; Tempero-Feigles, Robyn; Castignetti, Nancy; Abbeduto, Leonard
Using a single case design, a parent-mediated spoken-language intervention was delivered to three mothers and their school-aged sons with fragile X syndrome, the leading inherited cause of intellectual disability. The intervention was embedded in the context of shared storytelling using wordless picture books and targeted three empirically derived…
Gràcia, Marta; Vega, Fàtima; Galván-Bovaira, Maria José
Broadly speaking, the teaching of spoken language in Spanish schools has not been approached in a systematic way. Changes in school practices are needed in order to allow all children to become competent speakers and to understand and construct oral texts that are appropriate in different contexts and for different audiences both inside and…
Jul 1, 2009 ... correct language that has been acquired through listening. The Brewsters [17] suggest an 'immersion experience' by living with speakers of the language. Ellis included several of their tools, such as loop tapes, as being useful in a consultation when learning a language [15]. Others disagree with a purely…
Paul, Rhea; Campbell, Daniel; Gilbert, Kimberly; Tsiouri, Ioanna
Preschoolers with severe autism and minimal speech were assigned either a discrete trial or a naturalistic language treatment, and parents of all participants also received parent responsiveness training. After 12 weeks, both groups showed comparable improvement in number of spoken words produced, on average. Approximately half the children in each group achieved benchmarks for the first stage of functional spoken language development, as defined by Tager-Flusberg et al. (J Speech Lang Hear Res, 52: 643-652, 2009). Analyses of moderators of treatment suggest that joint attention moderates response to both treatments, and children with better receptive language pre-treatment do better with the naturalistic method, while those with lower receptive language show better response to the discrete trial treatment. The implications of these findings are discussed.
Vaughn, Charlotte R; Bradlow, Ann R
While indexical information is implicated in many levels of language processing, little is known about the internal structure of the system of indexical dimensions, particularly in bilinguals. A series of three experiments using the speeded classification paradigm investigated the relationship between various indexical and non-linguistic dimensions of speech in processing. Namely, we compared the relationship between a lesser-studied indexical dimension relevant to bilinguals, which language is being spoken (in these experiments, either Mandarin Chinese or English), with: talker identity (Experiment 1), talker gender (Experiment 2), and amplitude of speech (Experiment 3). Results demonstrate that language-being-spoken is integrated in processing with each of the other dimensions tested, and that these processing dependencies seem to be independent of listeners' bilingual status or experience with the languages tested. Moreover, the data reveal processing interference asymmetries, suggesting a processing hierarchy for indexical, non-linguistic speech features.
Soleymani, Zahra; Keramati, Nasrin; Rohani, Farzaneh; Jalaei, Shohre
To determine verbal intelligence and spoken language of children with phenylketonuria and to study the effect of age at diagnosis and phenylalanine plasma level on these abilities. Cross-sectional. Children with phenylketonuria were recruited from pediatric hospitals in 2012. Normal control subjects were recruited from kindergartens in Tehran. 30 phenylketonuria and 42 control subjects aged 4-6.5 years. Skills were compared between 3 phenylketonuria groups categorized by age at diagnosis/treatment, and between the phenylketonuria and control groups. Scores on the Wechsler Preschool and Primary Scale of Intelligence for verbal and total intelligence, and the Test of Language Development-Primary, third edition for spoken language, listening, speaking, semantics, syntax, and organization. The performance of control subjects was significantly better than that of early-treated phenylketonuria subjects for all composite quotients from the Test of Language Development and for verbal intelligence (P < 0.05).
Albadr, Musatafa Abbas Abbood; Tiun, Sabrina; Al-Dhief, Fahad Taha; Sammour, Mahmoud A M
Spoken Language Identification (LID) is the process of determining and classifying natural language from given content and a dataset. Typically, data must be processed to extract useful features to perform LID. According to the literature, feature extraction for LID is a mature process: standard features have been developed using Mel-Frequency Cepstral Coefficients (MFCC), Shifted Delta Cepstral (SDC) coefficients, Gaussian Mixture Models (GMM), and, most recently, the i-vector based framework. However, the process of learning from the extracted features remains to be improved (i.e. optimised) to capture all the knowledge embedded in them. The Extreme Learning Machine (ELM) is an effective learning model used to perform classification and regression analysis and is extremely useful for training a single-hidden-layer neural network. Nevertheless, the learning process of this model is not entirely effective (i.e. optimised) owing to the random selection of the weights between the input and hidden layers. In this study, the ELM is selected as the learning model for LID based on standard feature extraction. One optimisation approach for the ELM, the Self-Adjusting Extreme Learning Machine (SA-ELM), is selected as the benchmark and improved by altering the selection phase of the optimisation process. The selection process is performed by incorporating both the Split-Ratio and K-Tournament methods, and the improved SA-ELM is named the Enhanced Self-Adjusting Extreme Learning Machine (ESA-ELM). Results are reported for LID on datasets created from eight different languages. They show the clear superiority of the Enhanced Self-Adjusting Extreme Learning Machine LID (ESA-ELM LID) over the SA-ELM LID, with ESA-ELM LID achieving an accuracy of 96.25%, compared to 95.00% for SA-ELM LID.
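As background for the learning model named above, a basic single-hidden-layer ELM can be sketched in a few lines: the input-to-hidden weights are drawn at random (the very property that SA-ELM/ESA-ELM aim to optimise), and only the output weights are solved in closed form via a pseudo-inverse. This is an illustrative sketch on toy data, not the paper's ESA-ELM or its LID features:

```python
import numpy as np

def train_elm(X, y, n_hidden=50, n_classes=None, rng=None):
    """Train a basic ELM: random hidden layer, closed-form output weights."""
    rng = np.random.default_rng(rng)
    n_classes = n_classes or int(y.max()) + 1
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights (not optimised)
    b = rng.normal(size=n_hidden)
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))        # sigmoid hidden activations
    T = np.eye(n_classes)[y]                      # one-hot targets
    beta = np.linalg.pinv(H) @ T                  # least-squares output weights
    return W, b, beta

def predict_elm(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return np.argmax(H @ beta, axis=1)

# Toy demo: two well-separated clusters standing in for two "languages"
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (40, 8)), rng.normal(4, 1, (40, 8))])
y = np.array([0] * 40 + [1] * 40)
W, b, beta = train_elm(X, y, n_hidden=30, rng=1)
acc = (predict_elm(X, W, b, beta) == y).mean()
```

Because the hidden weights are never trained, ELM fitting reduces to one linear solve; this speed is what makes the random layer worth optimising rather than replacing.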
Planchou, Clément; Clément, Sylvain; Béland, Renée; Cason, Nia; Motte, Jacques; Samson, Séverine
Previous studies have reported that children score better in language tasks using sung rather than spoken stimuli. We examined word detection ease in sung and spoken sentences that were equated for phoneme duration and pitch variations in children aged 7 to 12 years with typical language development (TLD) as well as in children with specific language impairment (SLI ), and hypothesized that the facilitation effect would vary with language abilities. In Experiment 1, 69 children with TLD (7-10 years old) detected words in sentences that were spoken, sung on pitches extracted from speech, and sung on original scores. In Experiment 2, we added a natural speech rate condition and tested 68 children with TLD (7-12 years old). In Experiment 3, 16 children with SLI and 16 age-matched children with TLD were tested in all four conditions. In both TLD groups, older children scored better than the younger ones. The matched TLD group scored higher than the SLI group who scored at the level of the younger children with TLD . None of the experiments showed a facilitation effect of sung over spoken stimuli. Word detection abilities improved with age in both TLD and SLI groups. Our findings are compatible with the hypothesis of delayed language abilities in children with SLI , and are discussed in light of the role of durational prosodic cues in words detection.
Fitzpatrick, Elizabeth M; Stevens, Adrienne; Garritty, Chantelle; Moher, David
Permanent childhood hearing loss affects 1 to 3 per 1000 children and frequently disrupts typical spoken language acquisition. Early identification of hearing loss through universal newborn hearing screening and the use of new hearing technologies including cochlear implants make spoken language an option for most children. However, there is no consensus on what constitutes optimal interventions for children when spoken language is the desired outcome. Intervention and educational approaches ranging from oral language only to oral language combined with various forms of sign language have evolved. Parents are therefore faced with important decisions in the first months of their child's life. This article presents the protocol for a systematic review of the effects of using sign language in combination with oral language intervention on spoken language acquisition. Studies addressing early intervention will be selected in which therapy involving oral language intervention and any form of sign language or sign support is used. Comparison groups will include children in early oral language intervention programs without sign support. The primary outcomes of interest to be examined include all measures of auditory, vocabulary, language, speech production, and speech intelligibility skills. We will include randomized controlled trials, controlled clinical trials, and other quasi-experimental designs that include comparator groups as well as prospective and retrospective cohort studies. Case-control, cross-sectional, case series, and case studies will be excluded. Several electronic databases will be searched (for example, MEDLINE, EMBASE, CINAHL, PsycINFO) as well as grey literature and key websites. We anticipate that a narrative synthesis of the evidence will be required. We will carry out meta-analysis for outcomes if clinical similarity, quantity and quality permit quantitative pooling of data. We will conduct subgroup analyses if possible according to severity
Boons, Tinne; Brokx, Jan P L; Dhooge, Ingeborg; Frijns, Johan H M; Peeraer, Louis; Vermeulen, Anneke; Wouters, Jan; van Wieringen, Astrid
Although deaf children with cochlear implants (CIs) are able to develop good language skills, the large variability in outcomes remains a significant concern. The first aim of this study was to evaluate language skills in children with CIs to establish benchmarks. The second aim was to make an estimation of the optimal age at implantation to provide maximal opportunities for the child to achieve good language skills afterward. The third aim was to gain more insight into the causes of variability to set recommendations for optimizing the rehabilitation process of prelingually deaf children with CIs. Receptive and expressive language development of 288 children who received CIs by age five was analyzed in a retrospective multicenter study. Outcome measures were language quotients (LQs) on the Reynell Developmental Language Scales and Schlichting Expressive Language Test at 1, 2, and 3 years after implantation. Independent predictive variables were nine child-related, environmental, and auditory factors. A series of multiple regression analyses determined the amount of variance in expressive and receptive language outcomes attributable to each predictor when controlling for the other variables. Simple linear regressions with age at first fitting and independent samples t tests demonstrated that children implanted before the age of two performed significantly better on all tests than children who were implanted at an older age. The mean LQ was 0.78 with an SD of 0.18. A child with an LQ lower than 0.60 (= 0.78-0.18) within 3 years after implantation was labeled as a weak performer compared with other deaf children implanted before the age of two. Contralateral stimulation with a second CI or a hearing aid and the absence of additional disabilities were related to better language outcomes. The effect of environmental factors, comprising multilingualism, parental involvement, and communication mode increased over time. Three years after implantation, the total multiple
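The weak-performer criterion above is simple arithmetic: a language quotient more than one standard deviation below the group mean (0.78 - 0.18 = 0.60). A sketch of that cutoff (the function name is ours, introduced only for illustration):

```python
# Weak-performer cutoff from the study: one SD below the mean language quotient (LQ).
mean_lq, sd_lq = 0.78, 0.18
cutoff = round(mean_lq - sd_lq, 2)   # 0.60

def is_weak_performer(lq, cutoff=cutoff):
    """Flag a child whose LQ falls more than one SD below the group mean."""
    return lq < cutoff
```

By construction, roughly the lowest-scoring sixth of a normally distributed group would fall below such a one-SD cutoff.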
Kovelman, Ioulia; Norton, Elizabeth S; Christodoulou, Joanna A; Gaab, Nadine; Lieberman, Daniel A; Triantafyllou, Christina; Wolf, Maryanne; Whitfield-Gabrieli, Susan; Gabrieli, John D E
Phonological awareness, knowledge that speech is composed of syllables and phonemes, is critical for learning to read. Phonological awareness precedes and predicts successful transition from language to literacy, and weakness in phonological awareness is a leading cause of dyslexia, but the brain basis of phonological awareness for spoken language in children is unknown. We used functional magnetic resonance imaging to identify the neural correlates of phonological awareness using an auditory word-rhyming task in children who were typical readers or who had dyslexia (ages 7-13) and a younger group of kindergarteners (ages 5-6). Typically developing children, but not children with dyslexia, recruited left dorsolateral prefrontal cortex (DLPFC) when making explicit phonological judgments. Kindergarteners, who were matched to the older children with dyslexia on standardized tests of phonological awareness, also recruited left DLPFC. Left DLPFC may play a critical role in the development of phonological awareness for spoken language critical for reading and in the etiology of dyslexia.
Eisenberg, Laurie S; Fisher, Laurel M; Johnson, Karen C; Ganguly, Dianne Hammes; Grace, Thelma; Niparko, John K
We investigated associations between sentence recognition and spoken language for children with cochlear implants (CI) enrolled in the Childhood Development after Cochlear Implantation (CDaCI) study. In a prospective longitudinal study, sentence recognition percent-correct scores and language standard scores were correlated at 48, 60, and 72 months post-CI activation. Setting: Six tertiary CI centers in the United States. Patients: Children with CIs participating in the CDaCI study. Intervention: Cochlear implantation. Sentence recognition was assessed using the Hearing In Noise Test for Children (HINT-C) in quiet and at +10, +5, and 0 dB signal-to-noise ratio (S/N). Spoken language was assessed using the Clinical Assessment of Spoken Language (CASL) core composite and the antonyms, paragraph comprehension (syntax comprehension), syntax construction (expression), and pragmatic judgment tests. Positive linear relationships were found between CASL scores and HINT-C sentence scores when the sentences were delivered in quiet and at +10 and +5 dB S/N, but not at 0 dB S/N. At 48 months post-CI, sentence scores at +10 and +5 dB S/N were most strongly associated with CASL antonyms. At 60 and 72 months, sentence recognition in noise was most strongly associated with paragraph comprehension and syntax construction. Children with CIs learn spoken language in a variety of acoustic environments. Despite the observed inconsistent performance in different listening situations and noise-challenged environments, many children with CIs are able to build lexicons and learn the rules of grammar that enable recognition of sentences.
Wilang, Jeffrey Dawala; Sinwongsuwat, Kemtong
This year is designated as Thailand's "English Speaking Year" with the aim of improving the communicative competence of Thais for the upcoming integration of the Association of Southeast Asian Nations (ASEAN) in 2015. The consistent low-level proficiency of the Thais in the English language has led to numerous curriculum revisions and…
le Fevre Jakobsen, Bjarne
…with well-edited material, in 1965, to an anchor who hands over to journalists in live feeds from all over the world via satellite, Skype, or mobile telephone, in 2011. The narrative rhythm is faster and sometimes more spontaneous. In this article we will discuss aspects of the use of language and the tempo…
Jednoróg, Katarzyna; Bola, Łukasz; Mostowski, Piotr; Szwed, Marcin; Boguszewski, Paweł M; Marchewka, Artur; Rutkowski, Paweł
In several countries natural sign languages were considered inadequate for education. Instead, new sign-supported systems were created, based on the belief that spoken/written language is grammatically superior. One such system, called SJM (system językowo-migowy), preserves the grammatical and lexical structure of spoken Polish and since the 1960s has been extensively employed in schools and on TV. Nevertheless, the Deaf community avoids using SJM for everyday communication, its preferred language being PJM (polski język migowy), a natural sign language, structurally and grammatically independent of spoken Polish and featuring classifier constructions (CCs). Here, for the first time, we compare, using fMRI, the neural bases of natural vs. devised communication systems. Deaf signers were presented with three types of signed sentences (SJM and PJM with/without CCs). Consistent with previous findings, PJM with CCs compared to either SJM or PJM without CCs recruited the parietal lobes. The reverse comparison revealed activation in the anterior temporal lobes, suggesting increased semantic combinatory processes in lexical sign comprehension. Finally, PJM compared with SJM engaged the left posterior superior temporal gyrus and anterior temporal lobe, areas crucial for sentence-level speech comprehension. We suggest that activity in these two areas reflects greater processing efficiency for naturally evolved sign language.
Maldonado Torres, Sonia Enid
The purpose of this study was to explore the relationships between Latino students' learning styles and their language spoken at home. Results of the study indicated that students who spoke Spanish at home had higher means in the Active Experimentation modality of learning (M = 31.38, SD = 5.70) than students who spoke English (M = 28.08,…
Paladino, Jonathan D; Crooke, Philip S; Brackney, Christopher R; Kaynar, A Murat; Hotchkiss, John R
Medical care commonly involves the apprehension of complex patterns of patient derangements to which the practitioner responds with patterns of interventions, as opposed to single therapeutic maneuvers. This complexity renders the objective assessment of practice patterns using conventional statistical approaches difficult. Combinatorial approaches drawn from symbolic dynamics are used to encode the observed patterns of patient derangement and associated practitioner response patterns as sequences of symbols. Concatenating each patient derangement symbol with the contemporaneous practitioner response symbol creates "words" encoding the simultaneous patient derangement and provider response patterns and yields an observed vocabulary with quantifiable statistical characteristics. A fundamental observation in many natural languages is the existence of a power law relationship between the rank order of word usage and the absolute frequency with which particular words are uttered. We show that population level patterns of patient derangement: practitioner intervention word usage in two entirely unrelated domains of medical care display power law relationships similar to those of natural languages, and that-in one of these domains-power law behavior at the population level reflects power law behavior at the level of individual practitioners. Our results suggest that patterns of medical care can be approached using quantitative linguistic techniques, a finding that has implications for the assessment of expertise, machine learning identification of optimal practices, and construction of bedside decision support tools.
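The rank-frequency power law the abstract invokes (Zipf's law) can be checked on any stream of symbolic "words" in a few lines. The sketch below is illustrative only: the derangement+response symbols and their counts are invented, not taken from the study. It builds a rank-frequency table and estimates the slope of log(frequency) against log(rank), which sits near -1 for Zipf-like vocabularies.

```python
import math
from collections import Counter

def rank_frequency(words):
    """Return (rank, frequency) pairs, most frequent word first."""
    counts = Counter(words).most_common()
    return [(rank, freq) for rank, (_, freq) in enumerate(counts, start=1)]

def loglog_slope(pairs):
    """Least-squares slope of log(freq) vs. log(rank); near -1 suggests Zipf-like usage."""
    xs = [math.log(r) for r, _ in pairs]
    ys = [math.log(f) for _, f in pairs]
    n = len(pairs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Hypothetical "words": each concatenates a patient-derangement symbol with a
# practitioner-response symbol, e.g. "A1" = derangement A met with response 1.
observed = ["A1"] * 50 + ["A2"] * 25 + ["B1"] * 16 + ["B2"] * 12 + ["C1"] * 10
print(loglog_slope(rank_frequency(observed)))
```

With these toy counts the fitted slope comes out close to -1, the signature relationship the authors report at the population level.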
Bang, Janet; Nadig, Aparna
It is well established that children with typical development (TYP) exposed to more maternal linguistic input develop larger vocabularies. We know relatively little about the linguistic environment available to children with autism spectrum disorders (ASD), and whether input contributes to their later vocabulary. Children with ASD or TYP and their mothers from English and French-speaking families engaged in a 10 min free-play interaction. To compare input, children were matched on language ability, sex, and maternal education (ASD n = 20, TYP n = 20). Input was transcribed, and the number of word tokens and types, lexical diversity (D), mean length of utterances (MLU), and number of utterances were calculated. We then examined the relationship between input and children's spoken vocabulary 6 months later in a larger sample (ASD: n = 19, 50-85 months; TYP: n = 44, 25-58 months). No significant group differences were found on the five input features. A hierarchical multiple regression model demonstrated input MLU significantly and positively contributed to spoken vocabulary 6 months later in both groups, over and above initial language levels. No significant difference was found between groups in the slope between input MLU and later vocabulary. Our findings reveal children with ASD and TYP of similar language levels are exposed to similar maternal linguistic environments regarding number of word tokens and types, D, MLU, and number of utterances. Importantly, linguistic input accounted for later vocabulary growth in children with ASD.
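The input features named above (word tokens, word types, MLU) are simple to compute from a transcript. The sketch below is a simplified illustration, not the study's coding scheme: it counts MLU in words, whereas MLU is conventionally measured in morphemes, and the sample utterances are invented.

```python
def input_measures(utterances):
    """Compute simple input features from a list of utterance strings:
    number of word tokens, number of word types, and MLU in words."""
    tokens = [w.lower() for utt in utterances for w in utt.split()]
    n_tokens = len(tokens)
    n_types = len(set(tokens))
    mlu = n_tokens / len(utterances) if utterances else 0.0
    return {"tokens": n_tokens, "types": n_types, "mlu": mlu}

# Hypothetical maternal input during free play.
sample = ["look at the dog", "the dog runs", "wow"]
print(input_measures(sample))
```

Here the three utterances yield 8 tokens, 6 types, and an MLU of 8/3 words per utterance.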
Williams, Joshua T.; Newman, Sharlene D.
A large body of literature has characterized unimodal monolingual and bilingual lexicons and how neighborhood density affects lexical access; however, there have been relatively few studies that generalize these findings to bimodal (M2) second language (L2) learners of sign languages. The goal of the current study was to investigate parallel…
This article discusses the attitudes and motivations of two Saudi children learning Japanese as a foreign language (hence JFL), a language which is rarely spoken in the country. Studies regarding children's motivation for learning foreign languages that are not widely spread in their contexts in informal settings are scarce. The aim of the study…
Williams, Joshua T.; Darcy, Isabelle; Newman, Sharlene D.
Understanding how language modality (i.e., signed vs. spoken) affects second language outcomes in hearing adults is important both theoretically and pedagogically, as it can determine the specificity of second language (L2) theory and inform how best to teach a language that uses a new modality. The present study investigated which…
Language technologies, in particular machine translation applications, have the potential to help break down linguistic and cultural barriers, presenting an important contribution to the globalization and internationalization of the Portuguese language, by allowing content to be shared 'from' and 'to' this language. This article aims to present the research work developed at the Laboratory of Spoken Language Systems of INESC-ID in the field of machine translation, namely automated speech translation, the translation of microblogs, and the creation of a hybrid machine translation system. We will focus on the creation of the hybrid system, which aims at combining linguistic knowledge, in particular semantico-syntactic knowledge, with statistical knowledge, to increase the level of translation quality.
Sarant, Julia Z; Holt, Colleen M; Dowell, Richard C; Rickards, Field W; Blamey, Peter J
This article documented spoken language outcomes for preschool children with hearing loss and examined the relationships between language abilities and characteristics of children such as degree of hearing loss, cognitive abilities, age at entry to early intervention, and parent involvement in children's intervention programs. Participants were evaluated using a combination of the Child Development Inventory, the Peabody Picture Vocabulary Test, and the Preschool Clinical Evaluation of Language Fundamentals depending on their age at the time of assessment. Maternal education, cognitive ability, and family involvement were also measured. Over half of the children who participated in this study had poor language outcomes overall. No significant differences were found in language outcomes on any of the measures for children who were diagnosed early and those diagnosed later. Multiple regression analyses showed that family participation, degree of hearing loss, and cognitive ability significantly predicted language outcomes and together accounted for almost 60% of the variance in scores. This article highlights the importance of family participation in intervention programs to enable children to achieve optimal language outcomes. Further work may clarify the effects of early diagnosis on language outcomes for preschool children.
Xu, Jiang; Gannon, Patrick J; Emmorey, Karen; Smith, Jason F; Braun, Allen R
Symbolic gestures, such as pantomimes that signify actions (e.g., threading a needle) or emblems that facilitate social transactions (e.g., finger to lips indicating "be quiet"), play an important role in human communication. They are autonomous, can fully take the place of words, and function as complete utterances in their own right. The relationship between these gestures and spoken language remains unclear. We used functional MRI to investigate whether these two forms of communication are processed by the same system in the human brain. Responses to symbolic gestures, to their spoken glosses (expressing the gestures' meaning in English), and to visually and acoustically matched control stimuli were compared in a randomized block design. General Linear Models (GLM) contrasts identified shared and unique activations and functional connectivity analyses delineated regional interactions associated with each condition. Results support a model in which bilateral modality-specific areas in superior and inferior temporal cortices extract salient features from vocal-auditory and gestural-visual stimuli respectively. However, both classes of stimuli activate a common, left-lateralized network of inferior frontal and posterior temporal regions in which symbolic gestures and spoken words may be mapped onto common, corresponding conceptual representations. We suggest that these anterior and posterior perisylvian areas, identified since the mid-19th century as the core of the brain's language system, are not in fact committed to language processing, but may function as a modality-independent semiotic system that plays a broader role in human communication, linking meaning with symbols whether these are words, gestures, images, sounds, or objects.
Hirschmüller, Sarah; Egloff, Boris
How do individuals emotionally cope with the imminent real-world salience of mortality? DeWall and Baumeister as well as Kashdan and colleagues previously provided support that an increased use of positive emotion words serves as a way to protect and defend against mortality salience of one's own contemplated death. Although these studies provide important insights into the psychological dynamics of mortality salience, it remains an open question how individuals cope with the immense threat of mortality prior to their imminent actual death. In the present research, we therefore analyzed positivity in the final words spoken immediately before execution by 407 death row inmates in Texas. By using computerized quantitative text analysis as an objective measure of emotional language use, our results showed that the final words contained a significantly higher proportion of positive than negative emotion words. This emotional positivity was significantly higher than (a) positive emotion word usage base rates in spoken and written materials and (b) positive emotional language use with regard to contemplated death and attempted or actual suicide. Additional analyses showed that emotional positivity in final statements was associated with a greater frequency of language use that was indicative of self-references, social orientation, and present-oriented time focus as well as with fewer instances of cognitive-processing, past-oriented, and death-related word use. Taken together, our findings offer new insights into how individuals cope with the imminent real-world salience of mortality.
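The study's core measure, the proportion of positive versus negative emotion words in a text, can be sketched with a dictionary-based word count. The mini-lexicons below are invented stand-ins: the actual research used validated dictionaries (LIWC-style computerized text analysis), not these five-word sets.

```python
# Hypothetical mini-lexicons for illustration; real studies use validated
# emotion dictionaries with thousands of entries.
POSITIVE = {"love", "peace", "thank", "hope", "free"}
NEGATIVE = {"fear", "pain", "hate", "sorry", "guilt"}

def emotion_proportions(text):
    """Fraction of words in `text` that match each emotion lexicon."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    n = len(words)
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    return {"positive": pos / n, "negative": neg / n}

print(emotion_proportions("I love you all. Thank you. I am at peace."))
```

For this invented statement, 3 of 10 words are positive and none negative, mirroring the positivity asymmetry the authors report.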
Pyykkönen, Pirita; Hyönä, Jukka; van Gompel, Roger P G
This study used the visual world eye-tracking method to investigate activation of general world knowledge related to gender-stereotypical role names in online spoken language comprehension in Finnish. The results showed that listeners activated gender stereotypes elaboratively in story contexts where this information was not needed to build coherence. Furthermore, listeners made additional inferences based on gender stereotypes to revise an already established coherence relation. Both results are consistent with mental models theory (e.g., Garnham, 2001). They are harder to explain by the minimalist account (McKoon & Ratcliff, 1992) which suggests that people limit inferences to those needed to establish coherence in discourse.
Nussbaum, Debra; Waddy-Smith, Bettie; Doyle, Jane
There is a core body of knowledge, experience, and skills integral to facilitating auditory, speech, and spoken language development when working with the general population of students who are deaf and hard of hearing. There are additional issues, strategies, and challenges inherent in speech habilitation/rehabilitation practices essential to the population of deaf and hard of hearing students who also use sign language. This article will highlight philosophical and practical considerations related to practices used to facilitate spoken language development and associated literacy skills for children and adolescents who sign. It will discuss considerations for planning and implementing practices that acknowledge and utilize a student's abilities in sign language, and address how to link these skills to developing and using spoken language. Included will be considerations for children from early childhood through high school with a broad range of auditory access, language, and communication characteristics.
Brennan-Jones, Christopher G; White, Jo; Rush, Robert W; Law, James
Congenital or early-acquired hearing impairment poses a major barrier to the development of spoken language and communication. Early detection and effective (re)habilitative interventions are essential for parents and families who wish their children to achieve age-appropriate spoken language. Auditory-verbal therapy (AVT) is a (re)habilitative approach aimed at children with hearing impairments. AVT comprises intensive early intervention therapy sessions with a focus on audition, technological management and involvement of the child's caregivers in therapy sessions; it is typically the only therapy approach used to specifically promote avoidance or exclusion of non-auditory facial communication. The primary goal of AVT is to achieve age-appropriate spoken language and for this to be used as the primary or sole method of communication. AVT programmes are expanding throughout the world; however, little evidence can be found on the effectiveness of the intervention. To assess the effectiveness of auditory-verbal therapy (AVT) in developing receptive and expressive spoken language in children who are hearing impaired. CENTRAL, MEDLINE, EMBASE, PsycINFO, CINAHL, speechBITE and eight other databases were searched in March 2013. We also searched two trials registers and three theses repositories, checked reference lists and contacted study authors to identify additional studies. The review considered prospective randomised controlled trials (RCTs) and quasi-randomised studies of children (birth to 18 years) with a significant (≥ 40 dBHL) permanent (congenital or early-acquired) hearing impairment, undergoing a programme of auditory-verbal therapy, administered by a certified auditory-verbal therapist for a period of at least six months. Comparison groups considered for inclusion were waiting list and treatment as usual controls. Two review authors independently assessed titles and abstracts identified from the searches and obtained full-text versions of all potentially…
McDuffie, Andrea; Machalicek, Wendy; Bullard, Lauren; Nelson, Sarah; Mello, Melissa; Tempero-Feigles, Robyn; Castignetti, Nancy; Abbeduto, Leonard
Using a single case design, a parent-mediated spoken language intervention was delivered to three mothers and their school-aged sons with fragile X syndrome, the leading inherited cause of intellectual disability. The intervention was embedded in the context of shared story-telling using wordless picture books and targeted three empirically derived language support strategies. All sessions were implemented via distance video-teleconferencing. Parent education sessions were followed by 12 weekly clinician coaching and feedback sessions. Data were collected weekly during independent homework and clinician observation sessions. Relative to baseline, mothers increased their use of targeted strategies, and dyads increased the frequency and duration of story-related talking. Generalized effects of the intervention on lexical diversity and grammatical complexity were observed. Implications for practice are discussed.
Deng, Zhizhou; Chandrasekaran, Bharath; Wang, Suiping; Wong, Patrick C M
A major challenge in language learning studies is to identify objective, pre-training predictors of success. Variation in the low-frequency fluctuations (LFFs) of spontaneous brain activity measured by resting-state functional magnetic resonance imaging (RS-fMRI) has been found to reflect individual differences in cognitive measures. In the present study, we aimed to investigate the extent to which initial spontaneous brain activity is related to individual differences in spoken language learning. We acquired RS-fMRI data and subsequently trained participants on a sound-to-word learning paradigm in which they learned to use foreign pitch patterns (from Mandarin Chinese) to signal word meaning. We performed amplitude of spontaneous low-frequency fluctuation (ALFF) analysis, graph theory-based analysis, and independent component analysis (ICA) to identify functional components of the LFFs in the resting-state. First, we examined the ALFF as a regional measure and showed that regional ALFFs in the left superior temporal gyrus were positively correlated with learning performance, whereas ALFFs in the default mode network (DMN) regions were negatively correlated with learning performance. Furthermore, the graph theory-based analysis indicated that the degree and local efficiency of the left superior temporal gyrus were positively correlated with learning performance. Finally, the default mode network and several task-positive resting-state networks (RSNs) were identified via the ICA. The "competition" (i.e., negative correlation) between the DMN and the dorsal attention network was negatively correlated with learning performance. Our results demonstrate that a) spontaneous brain activity can predict future language learning outcome without prior hypotheses (e.g., selection of regions of interest--ROIs) and b) both regional dynamics and network-level interactions in the resting brain can account for individual differences in future spoken language learning success
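The graph-theoretic measures the abstract relies on, node degree and local efficiency, are easy to state concretely. The sketch below computes both on a toy undirected graph given as an adjacency list; the four-node network is invented for illustration and has nothing to do with the study's fMRI-derived networks. Local efficiency of a node is the average inverse shortest-path length among its neighbors, computed within the subgraph induced by those neighbors.

```python
from collections import deque
from itertools import combinations

def bfs_dist(adj, src, dst, allowed):
    """Shortest-path length from src to dst using only nodes in `allowed`;
    returns None if dst is unreachable."""
    seen, frontier, d = {src}, deque([src]), 0
    while frontier:
        d += 1
        for _ in range(len(frontier)):
            node = frontier.popleft()
            for nb in adj[node]:
                if nb in allowed and nb not in seen:
                    if nb == dst:
                        return d
                    seen.add(nb)
                    frontier.append(nb)
    return None

def local_efficiency(adj, v):
    """Average 1/d(u, w) over neighbor pairs of v, within the neighbor subgraph."""
    nbrs = adj[v]
    if len(nbrs) < 2:
        return 0.0
    total, pairs = 0.0, 0
    for u, w in combinations(nbrs, 2):
        pairs += 1
        d = bfs_dist(adj, u, w, set(nbrs))
        if d is not None:
            total += 1.0 / d
    return total / pairs

# Toy undirected network: a triangle (0-1-2) plus a pendant node 3.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
degree = {v: len(nbrs) for v, nbrs in adj.items()}
print(degree[2], local_efficiency(adj, 0))
```

Node 2 has degree 3; node 0's two neighbors are directly connected, giving it local efficiency 1.0, while the pendant node 3 scores 0.0.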
In the literature on foreign language learning, a group of studies has stressed the role of learning context and learning opportunities in learners' language input. The present thesis had two main goals: on the one hand, the different types of input to which Iranian grade-four high school EFL learners are exposed were examined; on the other hand, the possible relationship between the types and quantity of input and Iranian EFL learners' oral proficiency was investigated. It was hypothesized that EFL learners who have access to more input will show better oral proficiency than those who do not. Instruments used in the present study for data collection included a PET test, a researcher-made questionnaire, an oral language proficiency test, and face-to-face interviews. Data were gathered from 50 Iranian female grade-four high school foreign language learners who were selected from among 120 students whose scores on the PET test were +1 SD from the mean score. The results of the Spearman rank-order correlation test for the types of input and oral language proficiency scores showed that the participants' oral proficiency scores significantly correlated with the four intended sources of input: spoken (rho = 0.416, sig = 0.003), written (rho = 0.364, sig = 0.009), aural (rho = 0.343, sig = 0.015), and visual or audio-visual (rho = 0.47, sig = 0.00). The Spearman rank-order correlation test for the quantity of input and oral language proficiency scores also showed a significant relationship between quantity of input and oral language proficiency (rho = 0.543, sig = 0.00). The findings showed that EFL learners' oral proficiency is significantly correlated with efficient and effective input. The findings may also suggest answers to the question of why most Iranian English learners fail to speak English fluently, which might be due to a lack of effective input. This may emphasize the importance of the types and quantity of…
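The Spearman rank-order correlation used throughout this thesis is the Pearson correlation computed on ranks. A minimal self-contained sketch (the exposure and proficiency values below are invented, not the study's data; in practice one would use `scipy.stats.spearmanr`):

```python
def ranks(xs):
    """Ranks (1 = smallest); tied values receive the average of their ranks."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman_rho(x, y):
    """Spearman rank-order correlation: Pearson correlation of the ranks."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

# Hypothetical scores: weekly hours of input exposure vs. oral proficiency rating.
exposure = [2, 5, 1, 8, 4, 7]
proficiency = [50, 62, 45, 80, 58, 71]
print(round(spearman_rho(exposure, proficiency), 3))
```

Because the invented proficiency scores increase monotonically with exposure, rho here is exactly 1.0; real data like the thesis's rho = 0.543 indicate a moderate monotone association.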
Bree, Elise; Verhagen, Josje; Kerkhoff, Annemarie; Doedens, Willemijn; Unsworth, Sharon
This study examines novel language learning from inconsistent input in monolingual and bilingual toddlers. We predicted an advantage for the bilingual toddlers on the basis of the structural sensitivity hypothesis. Monolingual and bilingual 24-month-olds performed two novel language learning experiments. The first contained consistent input, and…
Liebenthal, Einat; Silbersweig, David A; Stern, Emily
Rapid assessment of emotions is important for detecting and prioritizing salient input. Emotions are conveyed in spoken words via verbal and non-verbal channels that are mutually informative and unfold in parallel over time, but the neural dynamics and interactions of these processes are not well understood. In this paper, we review the literature on emotion perception in faces, written words, and voices, as a basis for understanding the functional organization of emotion perception in spoken words. The characteristics of visual and auditory routes to the amygdala, a subcortical center for emotion perception, are compared across these stimulus classes in terms of neural dynamics, hemispheric lateralization, and functionality. Converging results from neuroimaging, electrophysiological, and lesion studies suggest the existence of an afferent route to the amygdala and primary visual cortex for fast and subliminal processing of coarse emotional face cues. We suggest that a fast route to the amygdala may also function for brief non-verbal vocalizations (e.g., laugh, cry), in which emotional category is conveyed effectively by voice tone and intensity. However, emotional prosody, which evolves on longer time scales and is conveyed by fine-grained spectral cues, appears to be processed via a slower, indirect cortical route. For verbal emotional content, the bulk of current evidence, indicating predominant left lateralization of the amygdala response and timing of emotional effects attributable to speeded lexical access, is more consistent with an indirect cortical route to the amygdala. Top-down linguistic modulation may play an important role for prioritized perception of emotions in words. Understanding the neural dynamics and interactions of emotion and language perception is important for selecting potent stimuli and devising effective training and/or treatment approaches for the alleviation of emotional dysfunction across a range of neuropsychiatric states.
Feghali, Maksoud N.
This book teaches the Arabic Lebanese dialect through topics such as food, clothing, transportation, and leisure activities. It also provides background material on the Arab World in general and the region where Lebanese Arabic is spoken or understood--Lebanon, Syria, Jordan, Palestine--in particular. This language guide is based on the phonetic…
Ihle, Andreas; Oris, Michel; Fagot, Delphine; Kliegel, Matthias
Findings on the association of speaking different languages with cognitive functioning in old age are so far inconsistent and inconclusive. Therefore, the present study set out to investigate the relation of the number of languages spoken to cognitive performance and its interplay with several other markers of cognitive reserve in a large sample of older adults. Two thousand eight hundred and twelve older adults served as the sample for the present study. Psychometric tests on verbal abilities, basic processing speed, and cognitive flexibility were administered. In addition, individuals were interviewed on the different languages they spoke on a regular basis, their educational attainment, occupation, and engagement in different activities throughout adulthood. A higher number of languages regularly spoken was significantly associated with better performance in verbal abilities and processing speed, but unrelated to cognitive flexibility. Regression analyses showed that the number of languages spoken predicted cognitive performance over and above leisure activities/physical demand of job/gainful activity as the respective additional predictor, but not over and above educational attainment/cognitive level of job as the respective additional predictor. There was no significant moderation of the association of the number of languages spoken with cognitive performance in any model. The present data suggest that speaking different languages on a regular basis may additionally contribute to the build-up of cognitive reserve in old age. Yet this may not be universal, but linked to verbal abilities and basic cognitive processing speed. Moreover, it may depend on other types of cognitive stimulation that individuals also engaged in during their life course.
Doshi, Finale; Roy, Nicholas
Spoken language is one of the most intuitive forms of interaction between humans and agents. Unfortunately, agents that interact with people using natural language often experience communication errors and do not correctly understand the user's intentions. Recent systems have successfully used probabilistic models of speech, language and user behaviour to generate robust dialogue performance in the presence of noisy speech recognition and ambiguous language choices, but decisions made using these probabilistic models are still prone to errors owing to the complexity of acquiring and maintaining a complete model of human language and behaviour. In this paper, a decision-theoretic model for human-robot interaction using natural language is described. The algorithm is based on the Partially Observable Markov Decision Process (POMDP), which allows agents to choose actions that are robust not only to uncertainty from noisy or ambiguous speech recognition but also unknown user models. Like most dialogue systems, a POMDP is defined by a large number of parameters that may be difficult to specify a priori from domain knowledge, and learning these parameters from the user may require an unacceptably long training period. An extension to the POMDP model is described that allows the agent to acquire a linguistic model of the user online, including new vocabulary and word choice preferences. The approach not only avoids a training period of constant questioning as the agent learns, but also allows the agent actively to query for additional information when its uncertainty suggests a high risk of mistakes. The approach is demonstrated both in simulation and on a natural language interaction system for a robotic wheelchair application.
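The core of the POMDP approach described above is tracking a belief (probability distribution) over the user's intent and acting only when that belief is confident. The sketch below is a toy illustration, not the paper's model: the wheelchair intents, observation probabilities, and confidence threshold are all invented for demonstration.

```python
def update_belief(belief, observation, obs_model):
    """Bayesian posterior over intents after one noisy speech observation."""
    posterior = {
        intent: p * obs_model[intent].get(observation, 1e-6)
        for intent, p in belief.items()
    }
    total = sum(posterior.values())
    return {intent: p / total for intent, p in posterior.items()}

# Two hypothetical wheelchair intents; probabilities stand in for a
# speech recogniser's confusion behaviour.
obs_model = {
    "go_kitchen": {"kitchen": 0.7, "coffee": 0.2, "unknown": 0.1},
    "go_bedroom": {"bedroom": 0.8, "kitchen": 0.05, "unknown": 0.15},
}
belief = {"go_kitchen": 0.5, "go_bedroom": 0.5}

belief = update_belief(belief, "kitchen", obs_model)
# Act only once the belief is confident; otherwise query the user, which
# is the "actively query when uncertainty is high" behaviour in the abstract.
action = "drive" if max(belief.values()) > 0.9 else "ask_clarifying_question"
```

A full POMDP policy would also weigh the long-term cost of asking versus acting; the threshold here is a simple stand-in for that trade-off.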
Ramos-Sanchez, Jose Luis; Cuadrado-Gordillo, Isabel
This article presents the results of a quasi-experimental study of whether there exists a causal relationship between spoken language and the initial learning of reading/writing. The subjects were two matched samples each of 24 preschool pupils (boys and girls), controlling for certain relevant external variables. It was found that there was no…
Lund, Emily; Douglas, W. Michael; Schuele, C. Melanie
Children with hearing loss who are developing spoken language tend to lag behind children with normal hearing in vocabulary knowledge. Thus, researchers must validate instructional practices that lead to improved vocabulary outcomes for children with hearing loss. The purpose of this study was to investigate how semantic richness of instruction…
Peters, Sara A; Boiteau, Timothy W; Almor, Amit
The choice and processing of referential expressions depend on the referents' status within the discourse, such that pronouns are generally preferred over full repetitive references when the referent is salient. Here we report two visual-world experiments showing that: (1) in spoken language comprehension, this preference is reflected in delayed fixations to referents mentioned after repeated definite references compared with after pronouns; (2) repeated references are processed differently than new references; (3) long-term semantic memory representations affect the processing of pronouns and repeated names differently. Overall, these results support the role of semantic discourse representation in referential processing and reveal important details about how pronouns and full repeated references are processed in the context of these representations. The results suggest the need for modifications to current theoretical accounts of reference processing such as Discourse Prominence Theory and the Informational Load Hypothesis.
The research and development of the Slovak spoken language dialogue system (SLDS) is described in the paper. The dialogue system is based on the DARPA Communicator architecture and was developed between July 2003 and June 2006. It consists of the Galaxy hub and telephony, automatic speech recognition, text-to-speech, backend, transport and VoiceXML dialogue management and automatic evaluation modules. The dialogue system is demonstrated and tested via two pilot applications, „Weather Forecast“ and „Public Transport Timetables“. The required information is retrieved from Internet resources in multi-user mode through PSTN, ISDN, GSM and/or VoIP networks. Further development carried out since 2006 is also described in the paper.
Rubin, H; Kantor, M; Macnab, J
Experiments examined grammatical judgement and error-identification deficits in relation to expressive language skills and to morphemic errors in writing. Language-disabled subjects did not differ from language-matched controls on judgement, revision, or error identification. Age-matched controls represented more morphemes in elicited writing than either of the other groups, which were equivalent. However, in spontaneous writing, language-disabled subjects made more frequent morphemic errors than age-matched controls, whereas language-matched subjects did not differ from either group. Proficiency relative to academic experience and oral language status, and the remedial implications, are discussed.
Choroomi, S; Curotta, J
To review foreign body aspiration cases encountered over a 10-year period in a tertiary paediatric hospital, and to assess correlation between foreign body type and language spoken at home. Retrospective chart review of all children undergoing direct laryngobronchoscopy for foreign body aspiration over a 10-year period. Age, sex, foreign body type, complications, hospital stay and home language were analysed. At direct laryngobronchoscopy, 132 children had foreign body aspiration (male:female ratio 1.31:1; mean age 32 months (2.67 years)). Mean hospital stay was 2.0 days. Foreign bodies most commonly comprised food matter (53/132; 40.1 per cent), followed by non-food matter (44/132; 33.3 per cent) and material of unknown composition (24/132; 18.2 per cent); endoscopy was negative in 11 cases (11/132; 8.3 per cent). Most parents spoke English (92/132, 69.7 per cent; vs non-English-speaking 40/132, 30.3 per cent), but non-English-speaking patients had disproportionately more food foreign bodies, and significantly more nut aspirations (p = 0.0065). Results constitute level 2b evidence. Patients from non-English-speaking backgrounds had a significantly higher incidence of food (particularly nut) aspiration. Awareness-raising and public education are needed in relevant communities to prevent certain foods, particularly nuts, being given to children too young to chew and swallow them adequately.
Jensen, Eva Dam; Vinther, Thora
Reports on two studies on input enhancement used to support learners' selection of focus of attention in Spanish second language listening material. Input consisted of video recordings of dialogues between native speakers. Exact repetition and speech rate reduction were examined for effect on comprehension, acquisition of decoding strategies, and…
Onovughe, Ofodu Graceful
Sociolinguistic inputs in language acquisition and the use of English as a Second Language in classrooms are the main focus of this study. A survey research design was adopted. The population consisted of all secondary school students in Akure Local Government of Ondo State, Nigeria. Two hundred and forty (240) students in senior secondary school classes…
Plante, Elena; Vance, Rebecca; Moody, Amanda; Gerken, LouAnn
Purpose: Children learning language conceptualize the nature of input they receive in ways that allow them to understand and construct utterances they have never heard before. This study was designed to illuminate the types of information children with and without specific language impairment (SLI) focus on to develop their conceptualizations and…
Courtin, Cyril; Jobard, Gael; Vigneau, Mathieu; Beaucousin, Virginie; Razafimandimby, Annick; Hervé, Pierre-Yves; Mellet, Emmanuel; Zago, Laure; Petit, Laurent; Mazoyer, Bernard; Tzourio-Mazoyer, Nathalie
We used functional magnetic resonance imaging to investigate the areas activated by signed narratives in non-signing subjects naïve to sign language (SL) and compared them with the activation obtained when hearing speech in their mother tongue. A subset of left hemisphere (LH) language areas activated when participants watched an audio-visual narrative in their mother tongue was activated when they observed a signed narrative. The inferior frontal (IFG) and precentral (Prec) gyri, the posterior parts of the planum temporale (pPT) and of the superior temporal sulcus (pSTS), and the occipito-temporal junction (OTJ) were activated by both languages. The activity of these regions was not related to the presence of communicative intent because no such changes were observed when the non-signers watched a muted video of a spoken narrative. Recruitment was also not triggered by the linguistic structure of SL, because the areas, except pPT, were not activated when subjects listened to an unknown spoken language. The comparison of brain reactivity for spoken and sign languages shows that SL has a special status in the brain compared to speech; in contrast to unknown oral language, the neural correlates of SL overlap LH speech comprehension areas in non-signers. These results support the idea that strong relationships exist between areas involved in human action observation and language, suggesting that the observation of hand gestures has shaped the lexico-semantic language areas as proposed by the motor theory of speech. As a whole, the present results support the theory of a gestural origin of language.
Harris, Michael S; Kronenberger, William G; Gao, Sujuan; Hoen, Helena M; Miyamoto, Richard T; Pisoni, David B
Cochlear implants (CIs) help many deaf children achieve near-normal speech and language (S/L) milestones. Nevertheless, high levels of unexplained variability in S/L outcomes are limiting factors in improving the effectiveness of CIs in deaf children. The objective of this study was to longitudinally assess the role of verbal short-term memory (STM) and working memory (WM) capacity as a progress-limiting source of variability in S/L outcomes after CI in children. Longitudinal study of 66 children with CIs for prelingual severe-to-profound hearing loss. Outcome measures included performance on digit span forward (DSF), digit span backward (DSB), and four conventional S/L measures that examined spoken-word recognition (Phonetically Balanced Kindergarten word test), receptive vocabulary (Peabody Picture Vocabulary Test), sentence-recognition skills (Hearing in Noise Test), and receptive and expressive language functioning (Clinical Evaluation of Language Fundamentals Fourth Edition Core Language Score; CELF). Growth curves for DSF and DSB in the CI sample over time were comparable in slope, but consistently lagged in magnitude relative to norms for normal-hearing peers of the same age. For DSF and DSB, 50.5% and 44.0%, respectively, of the CI sample scored more than 1 SD below the normative mean for raw scores across all ages. The first (baseline) DSF score significantly predicted all endpoint scores for the four S/L measures, and DSF slope (growth) over time predicted CELF scores. DSF baseline and slope accounted for an additional 13 to 31% of variance in S/L scores after controlling for conventional predictor variables such as chronological age at time of testing, age at time of implantation, communication mode (auditory-oral communication versus total communication), and maternal education. Only DSB baseline scores predicted endpoint language scores on Peabody Picture Vocabulary Test and CELF. DSB slopes were not significantly related to any endpoint S/L measures.
Werfel, Krystal L.
Purpose: The purpose of this study was to compare change in emergent literacy skills of preschool children with and without hearing loss over a 6-month period. Method: Participants included 19 children with hearing loss and 14 children with normal hearing. Children with hearing loss used amplification and spoken language. Participants completed…
Auditory-Verbal Therapy (AVT) is an effective early intervention for children with hearing loss. The Hear and Say Centre in Brisbane offers AVT sessions to families soon after diagnosis, and about 20% of the families in Queensland participate via PC-based videoconferencing (Skype). Parent and therapist satisfaction with the telemedicine sessions was examined by questionnaire. All families had been enrolled in the telemedicine AVT programme for at least six months. Their average distance from the Hear and Say Centre was 600 km. Questionnaires were completed by 13 of the 17 parents and all five therapists. Parents and therapists generally expressed high satisfaction in the majority of the sections of the questionnaire, e.g. most rated the audio and video quality as good or excellent. All parents felt comfortable or as comfortable as face-to-face when discussing matters with the therapist online, and were satisfied or as satisfied as face-to-face with their level and their child's level of interaction/rapport with the therapist. All therapists were satisfied or very satisfied with the telemedicine AVT programme. The results demonstrate the potential of telemedicine service delivery for teaching listening and spoken language to children with hearing loss in rural and remote areas of Australia.
Kasyidi, Fatan; Puji Lestari, Dessi
One of the important aspects of human-to-human communication is understanding the emotion of each party. Interaction between humans and computers continues to develop, especially affective interaction, in which emotion recognition is an important component. This paper presents our extended work on emotion recognition of spoken Indonesian, identifying four main classes of emotion: Happy, Sad, Angry, and Contentment, using a combination of acoustic/prosodic features and lexical features. We constructed an emotional speech corpus from an Indonesian television talk show, where the situations are as close as possible to natural. After constructing the corpus, the acoustic/prosodic and lexical features were extracted to train the emotion model. We employed machine learning algorithms such as Support Vector Machine (SVM), Naive Bayes, and Random Forest to obtain the best model. Results on the test data show that the best model achieves an F-measure of 0.447 using only the acoustic/prosodic features and 0.488 using both acoustic/prosodic and lexical features to recognize the four emotion classes with an SVM (RBF kernel).
Purpose: The current study sought to investigate the separate effects of dysarthria and cognitive status on global speech timing, speech hesitation, and linguistic complexity characteristics, and how these speech behaviors shape listener impressions, across three connected speech tasks presumed to differ in cognitive-linguistic demand for four carefully defined speaker groups: (1) multiple sclerosis (MS) with cognitive deficits (MSCI), (2) MS with clinically diagnosed dysarthria and intact cognition (MSDYS), (3) MS without dysarthria or cognitive deficits (MS), and (4) healthy talkers (CON). The relationship between neuropsychological test scores and speech-language production and perceptual variables for speakers with cognitive deficits was also explored. Methods: 48 speakers, including 36 individuals reporting a neurological diagnosis of MS and 12 healthy talkers, participated. The three MS groups and the control group each contained 12 speakers (8 women and 4 men). Cognitive function was quantified using standard clinical tests of memory, information processing speed, and executive function. A standard z-score of ≤ -1.50 indicated deficits in a given cognitive domain. Three certified speech-language pathologists determined the clinical diagnosis of dysarthria for speakers with MS. Experimental speech tasks of interest included audio-recordings of an oral reading of the Grandfather passage and two spontaneous speech samples in the form of Familiar and Unfamiliar descriptive discourse. Various measures of spoken language were of interest. Suprasegmental acoustic measures included speech and articulatory rate. Linguistic speech hesitation measures included pause frequency (i.e., silent and filled pauses), mean silent pause duration, grammatical appropriateness of pauses, and interjection frequency. For the two discourse samples, three standard measures of language complexity were obtained, including subordination index, inter-sentence cohesion adequacy, and lexical diversity. Ten listeners
Petkov, Christopher I; Jarvis, Erich D
Vocal learners such as humans and songbirds can learn to produce elaborate patterns of structurally organized vocalizations, whereas many other vertebrates such as non-human primates and most other bird groups either cannot or do so to a very limited degree. To explain the similarities among humans and vocal-learning birds and the differences with other species, various theories have been proposed. One set of theories are motor theories, which underscore the role of the motor system as an evolutionary substrate for vocal production learning. For instance, the motor theory of speech and song perception proposes enhanced auditory perceptual learning of speech in humans and song in birds, which suggests a considerable level of neurobiological specialization. Another, a motor theory of vocal learning origin, proposes that the brain pathways that control the learning and production of song and speech were derived from adjacent motor brain pathways. Another set of theories are cognitive theories, which address the interface between cognition and the auditory-vocal domains to support language learning in humans. Here we critically review the behavioral and neurobiological evidence for parallels and differences between the so-called vocal learners and vocal non-learners in the context of motor and cognitive theories. In doing so, we note that behaviorally vocal-production learning abilities are more distributed than categorical, as are the auditory-learning abilities of animals. We propose testable hypotheses on the extent of the specializations and cross-species correspondences suggested by motor and cognitive theories. We believe that determining how spoken language evolved is likely to become clearer with concerted efforts in testing comparative data from many non-human animal species.
Scott, C M; Windsor, J
Language performance in naturalistic contexts can be characterized by general measures of productivity, fluency, lexical diversity, and grammatical complexity and accuracy. The use of such measures as indices of language impairment in older children is open to questions of method and interpretation. This study evaluated the extent to which 10 general language performance measures (GLPM) differentiated school-age children with language learning disabilities (LLD) from chronological-age (CA) and language-age (LA) peers. Children produced both spoken and written summaries of two educational videotapes that provided models of either narrative or expository (informational) discourse. Productivity measures, including total T-units, total words, and words per minute, were significantly lower for children with LLD than for CA children. Fluency (percent T-units with mazes) and lexical diversity (number of different words) measures were similar for all children. Grammatical complexity as measured by words per T-unit was significantly lower for LLD children. However, there was no difference among groups for clauses per T-unit. The only measure that distinguished children with LLD from both CA and LA peers was the extent of grammatical error. Effects of discourse genre and modality were consistent across groups. Compared to narratives, expository summaries were shorter, less fluent (spoken versions), more complex (words per T-unit), and more error prone. Written summaries were shorter and had more errors than spoken versions. For many LLD and LA children, expository writing was exceedingly difficult. Implications for accounts of language impairment in older children are discussed.
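Several of the general language performance measures named above are directly computable once a transcript has been segmented into T-units. The sketch below shows productivity (total T-units, total words), lexical diversity (number of different words), and grammatical complexity (words per T-unit) on an invented two-T-unit sample; real GLPM coding also involves maze marking and clause counts, which are not shown.

```python
def glpm(t_units):
    """Compute basic general language performance measures from a
    transcript already segmented into T-units (list of strings)."""
    words = [w.lower().strip(".,!?") for t in t_units for w in t.split()]
    return {
        "total_t_units": len(t_units),
        "total_words": len(words),
        "different_words": len(set(words)),            # lexical diversity (NDW)
        "words_per_t_unit": len(words) / len(t_units), # complexity
    }

sample = [  # toy narrative summary, two T-units
    "The boy found a map",
    "He followed the map to the cave",
]
m = glpm(sample)
```

Words per minute, the remaining productivity measure, would additionally require the sample's elapsed time.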
Leni Amalia Suek
The maintenance of the community languages of migrant students is heavily determined by language use and language attitudes. The superiority of a dominant language over a community language contributes to migrant students' attitudes toward their native languages. When they perceive their native language as unimportant, they reduce the frequency of its use, even in the home domain. Solutions to the problem of maintaining community languages should address language use and attitudes, which develop mostly in two important domains: school and family. Hence, the valorization of community languages should be promoted not only in the family but also in the school domain. Programs such as community language schools and community language programs can give migrant students opportunities to practice and use their native languages. Since educational resources such as class sessions, teachers, and government support are limited, the family plays a significant role in stimulating positive attitudes toward the community language and in developing the use of native languages.
Adank, P.M.; Noordzij, M.L.; Hagoort, P.
A repetition-suppression functional magnetic resonance imaging paradigm was used to explore the neuroanatomical substrates of processing two types of acoustic variation (speaker and accent) during spoken sentence comprehension. Recordings were made for two speakers and two accents: Standard Dutch and a
Hardison, Debra M.
The majority of studies in second-language (L2) speech processing have involved unimodal (i.e., auditory) input; however, in many instances, speech communication involves both visual and auditory sources of information. Some researchers have argued that multimodal speech is the primary mode of speech perception (e.g., Rosenblum 2005). Research on…
Au, Terry Kit-Fong; Chan, Winnie Wailan; Cheng, Liao; Siegel, Linda S; Tso, Ricky Van Yip
To fully acquire a language, especially its phonology, children need linguistic input from native speakers early on. When interaction with native speakers is not possible - e.g. for children learning a second language that is not the societal language - audio recordings are commonly used as an affordable substitute. But does such non-interactive input work? Two experiments evaluated the usefulness of audio storybooks in acquiring a more native-like second-language accent. Young children, first- and second-graders in Hong Kong whose native language was Cantonese Chinese, were given take-home listening assignments in a second language, either English or Putonghua Chinese. Accent ratings of the children's story reading revealed measurable benefits of non-interactive input from native speakers. The benefits were far more robust for Putonghua than for English. Implications for second-language accent acquisition are discussed.
Dijkstra, J.; Kuiken, F.; Jorna, R.J.; Klinkenberg, E.L.
The current longitudinal study investigated the role of home language and outside home exposure in the development of Dutch and Frisian vocabulary by young bilinguals. Frisian is a minority language spoken in the north of the Netherlands. In three successive test rounds, 91 preschoolers were tested
Aalberse, S.; Moro, F.; Braunmüller, K.; Höder, S.; Kühl, K.
This article discusses Malay and Chinese heritage languages as spoken in the Netherlands. Heritage speakers are dominant in another language and use their heritage language less. Moreover, they have qualitatively and quantitatively different input from monolinguals. Heritage languages are often
Geytenbeek, Joke J; Mokkink, Lidwine B; Knol, Dirk L; Vermeulen, R Jeroen; Oostrom, Kim J
In clinical practice, a variety of diagnostic tests are available to assess a child's comprehension of spoken language. However, none of these tests have been designed specifically for use with children who have severe motor impairments and who experience severe difficulty when using speech to communicate. This article describes the process of investigating the reliability and validity of the Computer-Based Instrument for Low Motor Language Testing (C-BiLLT), which was specifically developed to assess spoken Dutch language comprehension in children with cerebral palsy and complex communication needs. The study included 806 children with typical development, and 87 nonspeaking children with cerebral palsy and complex communication needs, and was designed to provide information on the psychometric qualities of the C-BiLLT. The potential utility of the C-BiLLT as a measure of spoken Dutch language comprehension abilities for children with cerebral palsy and complex communication needs is discussed.
McDuffie, Andrea; Banasik, Amy; Bullard, Lauren; Nelson, Sarah; Feigles, Robyn Tempero; Hagerman, Randi; Abbeduto, Leonard
A small randomized group design (N = 20) was used to examine a parent-implemented intervention designed to improve the spoken language skills of school-aged and adolescent boys with fragile X syndrome (FXS), the leading cause of inherited intellectual disability. The intervention was delivered by speech-language pathologists via distance video-teleconferencing. The intervention taught mothers to use a set of language facilitation strategies while interacting with their children in the context of shared story-telling. Treatment group mothers significantly improved their use of the targeted intervention strategies. Children in the treatment group increased the duration of engagement in the shared story-telling activity as well as use of utterances that maintained the topic of the story. Children also showed increases in lexical diversity, but not in grammatical complexity.
The current research examined how Arabic diglossia affects verbal learning memory. Thirty native Arab college students were tested using an auditory verbal memory test that was adapted from the Rey Auditory Verbal Learning Test and developed in three versions: a pure spoken language version (SL), a pure standard language version (SA), and…
Kleb, William L.; Thompson, Richard A.; Johnston, Christopher O.
To document model parameter uncertainties and to automate sensitivity analyses for numerical simulation codes, a natural-language-based method to specify tolerances has been developed. With this new method, uncertainties are expressed in a natural manner, i.e., as one would on an engineering drawing, namely, 5.25 +/- 0.01. This approach is robust and readily adapted to various application domains because it does not rely on parsing the particular structure of input file formats. Instead, tolerances of a standard format are added to existing fields within an input file. As a demonstration of the power of this simple, natural language approach, a Monte Carlo sensitivity analysis is performed for three disparate simulation codes: fluid dynamics (LAURA), radiation (HARA), and ablation (FIAT). Effort required to harness each code for sensitivity analysis was recorded to demonstrate the generality and flexibility of this new approach.
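The essence of the natural-language tolerance idea is that an uncertainty annotation like "5.25 +/- 0.01" can be recognized anywhere in an existing input file, without parsing the file's overall structure. The sketch below shows one way that might look; the regex, the uniform sampling distribution, and the field name are illustrative assumptions, not details of the LAURA, HARA, or FIAT input formats.

```python
import random
import re

# Matches "nominal +/- tolerance", e.g. "5.25 +/- 0.01" or "-3 +/- 0.5".
TOL = re.compile(r"(-?\d+(?:\.\d+)?)\s*\+/-\s*(\d+(?:\.\d+)?)")

def sample_line(line, rng):
    """Replace each 'nominal +/- tol' token with one Monte Carlo draw,
    leaving the rest of the input line untouched."""
    def draw(match):
        nominal, tol = float(match.group(1)), float(match.group(2))
        return f"{rng.uniform(nominal - tol, nominal + tol):.6g}"
    return TOL.sub(draw, line)

rng = random.Random(0)           # seeded for a reproducible sample
line = "wall_emissivity = 5.25 +/- 0.01"   # hypothetical input-file field
perturbed = sample_line(line, rng)
```

Running `sample_line` once per Monte Carlo realization over every line of an input file yields an ensemble of perturbed input decks, which is the mechanism a sensitivity analysis can then drive for any code whose inputs are annotated this way.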
Krishnamurthi, M. G.; McCormack, William
The twenty graded units in this text constitute an introduction to both informal and formal spoken Kannada. The first two units present the Kannada material in phonetic transcription only, with Kannada script gradually introduced from Unit III on. A typical lesson-unit includes (1) a dialog in phonetic transcription and English translation, (2)…
This study examined the development of spoken discourse among L2 learners of Japanese who received extensive practice on grammatical chunks. Participants in this study were 22 college students enrolled in an elementary Japanese course. They received instruction on a set of grammatical chunks in class through communicative drills and the…
Bahrani, Taher; Sim, Tam Shu
In today's audiovisually driven world, various audiovisual programs can be incorporated as authentic sources of potential language input for second language acquisition. In line with this view, the present research aimed at discovering the effectiveness of exposure to news, cartoons, and films as three different types of authentic audiovisual…
Constantinescu-Sharpe, Gabriella; Phillips, Rebecca L; Davis, Aleisha; Dornan, Dimity; Hogan, Anthony
Social inclusion is a common focus of listening and spoken language (LSL) early intervention for children with hearing loss. This exploratory study compared the social inclusion of young children with hearing loss educated using a listening and spoken language approach with population data. A framework for understanding the scope of social inclusion is presented in the Background. This framework guided the use of a shortened, modified version of the Longitudinal Study of Australian Children (LSAC) to measure two of the five facets of social inclusion ('education' and 'interacting with society and fulfilling social goals'). The survey was completed by parents of children with hearing loss aged 4-5 years who were educated using a LSL approach (n = 78; 37% who responded). These responses were compared to those obtained for typical hearing children in the LSAC dataset (n = 3265). Analyses revealed that most children with hearing loss had comparable outcomes to those with typical hearing on the 'education' and 'interacting with society and fulfilling social roles' facets of social inclusion. These exploratory findings are positive and warrant further investigation across all five facets of the framework to identify which factors influence social inclusion.
Correia, João; Formisano, Elia; Valente, Giancarlo; Hausfeld, Lars; Jansma, Bernadette; Bonte, Milene
Bilinguals derive the same semantic concepts from equivalent, but acoustically different, words in their first and second languages. The neural mechanisms underlying the representation of language-independent concepts in the brain remain unclear. Here, we measured fMRI in human bilingual listeners and reveal that response patterns to individual spoken nouns in one language (e.g., "horse" in English) accurately predict the response patterns to equivalent nouns in the other language (e.g., "paard" in Dutch). Stimuli were four monosyllabic words in both languages, all from the category of "animal" nouns. For each word, pronunciations from three different speakers were included, allowing the investigation of speaker-independent representations of individual words. We used multivariate classifiers and a searchlight method to map the informative fMRI response patterns that enable decoding spoken words within languages (within-language discrimination) and across languages (across-language generalization). Response patterns discriminative of spoken words within language were distributed in multiple cortical regions, reflecting the complexity of the neural networks recruited during speech and language processing. Response patterns discriminative of spoken words across language were limited to localized clusters in the left anterior temporal lobe, the left angular gyrus and the posterior bank of the left postcentral gyrus, the right posterior superior temporal sulcus/superior temporal gyrus, the right medial anterior temporal lobe, the right anterior insula, and bilateral occipital cortex. These results corroborate the existence of "hub" regions organizing semantic-conceptual knowledge in abstract form at the fine-grained level of within semantic category discriminations.
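As an illustration of the across-language generalization logic described above, the sketch below trains a classifier on response patterns evoked by nouns in one language and tests it on patterns evoked by their translation equivalents. This is a hypothetical, pure-Python caricature: the vectors are synthetic, and a simple nearest-centroid rule stands in for the study's multivariate searchlight classifiers.

```python
# Hypothetical sketch of across-language generalization decoding:
# train on (synthetic) response patterns to English nouns, then test on
# patterns for their Dutch equivalents. All data are invented.

def centroid(patterns):
    """Element-wise mean of a list of equal-length vectors."""
    n = len(patterns)
    return [sum(p[i] for p in patterns) / n for i in range(len(patterns[0]))]

def nearest_centroid_predict(centroids, pattern):
    """Return the label whose centroid is closest (Euclidean) to pattern."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return min(centroids, key=lambda label: dist(centroids[label], pattern))

# Synthetic "response patterns": three speakers per word, as in the study's
# speaker-independent design (numbers made up for illustration).
english = {
    "horse": [[1.0, 0.1, 0.0], [0.9, 0.2, 0.1], [1.1, 0.0, 0.1]],
    "duck":  [[0.0, 1.0, 0.1], [0.1, 0.9, 0.0], [0.0, 1.1, 0.2]],
}
# Patterns evoked by the Dutch equivalents (paard = horse, eend = duck).
dutch = {
    "horse": [0.95, 0.15, 0.05],
    "duck":  [0.05, 0.95, 0.10],
}

centroids = {label: centroid(pats) for label, pats in english.items()}
for concept, pattern in dutch.items():
    print(concept, "->", nearest_centroid_predict(centroids, pattern))
```

In the actual analysis, above-chance generalization of this kind within a searchlight is what marks a region as carrying language-independent semantic information.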
Perniss, Pamela; Lu, Jenny C; Morgan, Gary; Vigliocco, Gabriella
Most research on the mechanisms underlying referential mapping has assumed that learning occurs in ostensive contexts, where label and referent co-occur, and that form and meaning are linked by arbitrary convention alone. In the present study, we focus on iconicity in language, that is, resemblance relationships between form and meaning, and on non-ostensive contexts, where label and referent do not co-occur. We approach the question of language learning from the perspective of the language input. Specifically, we look at child-directed language (CDL) in British Sign Language (BSL), a language rich in iconicity due to the affordances of the visual modality. We ask whether child-directed signing exploits iconicity in the language by highlighting the similarity mapping between form and referent. We find that CDL modifications occur more often with iconic signs than with non-iconic signs. Crucially, for iconic signs, modifications are more frequent in non-ostensive contexts than in ostensive contexts. Furthermore, we find that pointing dominates in ostensive contexts, and suggest that caregivers adjust the semiotic resources recruited in CDL to context. These findings offer first evidence for a role of iconicity in the language input and suggest that iconicity may be involved in referential mapping and language learning, particularly in non-ostensive contexts. © 2017 John Wiley & Sons Ltd.
In this paper the use and quality of the evaluative language produced by a bilingual child in a story-telling situation are analysed. The subject, an 11-year-old Finnish boy, Jimmy, is bilingual in Finnish Sign Language (FinSL) and spoken Finnish. He was born deaf but received a cochlear implant at the age of five. The data consist of a spoken and a signed version of "The Frog Story". The analysis shows that the evaluative devices and expressions differ between the spoken and signed stories told by the child. In his Finnish story he mostly uses lexical devices: comments on a character and the character's actions, as well as quoted speech occasionally combined with prosodic features. In his FinSL story he uses both lexical and paralinguistic devices in a balanced way.
Considerable progress has been made in recent years in the development of dialogue systems that support robust and efficient human-machine interaction using spoken language. Spoken dialogue technology allows various interactive applications to be built and used for practical purposes, and research focuses on issues that aim to increase the system's communicative competence by including aspects of error correction, cooperation, multimodality, and adaptation in context. This book gives a comprehensive view of state-of-the-art techniques that are used to build spoken dialogue systems. It provides
Geytenbeek, J.J.M.; Vermeulen, R.J.; Becher, J.G.; Oostrom, K.J.
Aim: To assess spoken language comprehension in non-speaking children with severe cerebral palsy (CP) and to explore possible associations with motor type and disability. Method: Eighty-seven non-speaking children (44 males, 43 females; mean age 6y 8mo, SD 2y 1mo) with spastic (54%) or dyskinetic…
De Angelis, Gessica
The present study adopts a multilingual approach to analysing the standardized test results of primary school immigrant children living in the bi-/multilingual context of South Tyrol, Italy. The standardized test results are from the Invalsi test administered across Italy in 2009/2010. In South Tyrol, several languages are spoken on a daily basis…
Li, Xiao-qing; Ren, Gui-qin
An event-related brain potentials (ERP) experiment was carried out to investigate how and when accentuation influences temporally selective attention and subsequent semantic processing during on-line spoken language comprehension, and how the effect of accentuation on attention allocation and semantic processing changed with the degree of…
Mismatch negativity (MMN), a primary response to an acoustic change and an index of sensory memory, was used to investigate the processing of the discrimination between familiar and unfamiliar consonant-vowel (CV) speech contrasts. The MMN was elicited by rare familiar words presented among repetitive unfamiliar words. Phonetic and phonological contrasts were identical in all conditions. The MMN elicited by the familiar word deviant was larger than that elicited by the unfamiliar word deviant. The presence of a syllable contrast did not significantly alter the word-elicited MMN in amplitude or scalp voltage field distribution. Thus, our results indicate the existence of a word-related MMN enhancement largely independent of the word status of the standard stimulus. This enhancement may reflect the presence of a long-term memory trace for familiar spoken words in tonal languages.
Gautreau, Aurore; Hoen, Michel; Meunier, Fanny
This study aimed to characterize the linguistic interference that occurs during speech-in-speech comprehension by combining offline and online measures: an intelligibility task (at a -5 dB signal-to-noise ratio, SNR) and two lexical decision tasks (at -5 dB and 0 dB SNR), all performed with spoken French target words. In these three experiments we always compared the masking effects of speech backgrounds (i.e., 4-talker babble) produced either in the same language as the targets (French) or in unknown foreign languages (Irish and Italian) with the masking effects of corresponding non-speech backgrounds (speech-derived fluctuating noise). The fluctuating noise contained spectro-temporal information similar to that of babble but lacked linguistic information. At -5 dB SNR, both tasks revealed significantly divergent results for the two unknown languages: Italian and French hindered French target word identification to a similar extent, whereas Irish led to significantly better performance. By comparing performance with speech and fluctuating-noise backgrounds, we were able to evaluate the effect of each language. The intelligibility task showed a significant difference between babble and fluctuating noise for French, Irish, and Italian, suggesting both acoustic and linguistic effects for each language. However, the lexical decision task, which reduces the effect of post-lexical interference, appeared to be more accurate, as it revealed a linguistic effect only for French. Thus, although French and Italian had equivalent masking effects on French word identification, the nature of their interference differed. This finding suggests that the differences observed between the masking effects of Italian and Irish can be explained at an acoustic level but not at a linguistic level.
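The SNR conditions mentioned above (-5 dB and 0 dB) refer to the power ratio between the target speech and its masker. As a hedged sketch (synthetic sine-wave signals, not the study's babble or fluctuating-noise stimuli), a masker can be scaled to hit a target SNR like this:

```python
# Illustrative only: scale a noise signal so the speech-to-noise power
# ratio equals a target SNR in dB, then mix. Signals are plain Python lists.
import math

def rms(signal):
    """Root-mean-square amplitude of a signal."""
    return math.sqrt(sum(x * x for x in signal) / len(signal))

def mix_at_snr(speech, noise, snr_db):
    """Scale `noise` so 20*log10(rms(speech)/rms(noise)) == snr_db,
    then return the sample-wise mixture."""
    gain = rms(speech) / (rms(noise) * 10 ** (snr_db / 20))
    return [s + gain * n for s, n in zip(speech, noise)]

# Synthetic stand-ins for speech and masker:
speech = [math.sin(0.1 * i) for i in range(1000)]
noise = [math.sin(0.37 * i + 1.0) for i in range(1000)]
mixed = mix_at_snr(speech, noise, -5.0)

# Verify the achieved SNR of the scaled masker:
scaled_noise = [m - s for m, s in zip(mixed, speech)]
print(round(20 * math.log10(rms(speech) / rms(scaled_noise)), 1))  # → -5.0
```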
Snow, Catherine E
In the early years of the Journal of Child Language, there was considerable disagreement about the role of language input or adult-child interaction in children's language acquisition. The view that quantity and quality of input to language-learning children is relevant to their language development has now become widely accepted as a principle guiding advice to parents and the design of early childhood education programs, even if it is not yet uncontested in the field of language development. The focus on variation in the language input to children acquires particular educational relevance when we consider variation in access to academic language - features of language particularly valued in school and related to success in reading and writing. Just as many children benefit from language environments that are intentionally designed to ensure adequate quantity and quality of input, even more probably need explicit instruction in the features of language that characterize its use for academic purposes.
Jansen, Stefanie; Wesselmeier, Hendrik; de Ruiter, Jan P; Mueller, Horst M
Even though research on turn-taking in spoken dialogue is now abundant, a typical EEG signature associated with the anticipation of turn-ends has not yet been identified. The purpose of this study was to examine whether readiness potentials (RP) can be used to study the anticipation of turn-ends, using both a motoric finger-movement task and an articulatory-movement task. The goal was to determine the onset of early, preconscious turn-end anticipation processes through the simultaneous registration of EEG measures (RP) and behavioural measures (anticipation timing accuracy, ATA). For our behavioural measures, we used both a button press and a brief verbal response ("yes"). In the experiment, 30 subjects were asked to listen to auditorily presented utterances and press a button or utter the verbal response when they expected the end of the turn. During the task, a 32-channel EEG signal was recorded. The results showed that the RPs during verbal and button-press responses developed similarly and had an almost identical time course: the RP signals started to develop 1170 vs. 1190 ms before the behavioural responses. Until now, turn-end anticipation has usually been studied with behavioural methods, for instance by measuring anticipation timing accuracy, a measurement that reflects conscious behavioural processes and is insensitive to preconscious anticipation processes. The similar time course of the recorded RP signals for both verbal and button-press responses provides evidence for the validity of using RPs as an online marker of response preparation in turn-taking and spoken dialogue research. Copyright © 2014 Elsevier B.V. All rights reserved.
Social Security Administration — This data set provides quarterly volumes for language preferences at the national level of individuals filing claims for Retirement and Survivor benefits from fiscal...
Social Security Administration — This data set provides annual volumes for language preferences at the national level of individuals filing claims for Retirement and Survivor benefits from federal...
Coplan, Robert J.; Weeks, Murray
The goal of this study was to examine the moderating role of pragmatic language in the relations between shyness and indices of socio-emotional adjustment in an unselected sample of early elementary school children. In particular, we sought to explore whether pragmatic language played a protective role for shy children. Participants were n = 167…
Smolík, Filip; Stepankova, Hana; Vyhnálek, Martin; Nikolai, Tomáš; Horáková, Karolína; Matejka, Štepán
Purpose Propositional density (PD) is a measure of content richness in language production that declines in normal aging and more profoundly in dementia. The present study aimed to develop a PD scoring system for Czech and use it to compare PD in language productions of older people with amnestic mild cognitive impairment (aMCI) and control…
Methods. Qualitative individual interviews were conducted with seven doctors who had successfully learned the language of their patients, to determine their experiences and how they had succeeded. Results. All seven doctors used a combination of methods to learn the language. Listening was found to be very important, ...
This paper describes a study comparing chatroom and face-to-face oral interaction for the purposes of language learning in a tertiary classroom in the United Arab Emirates. It uses transcripts analysed for Language Related Episodes, collaborative dialogues, thought to be externally observable examples of noticing in action. The analysis is…
Kwon, Oh-Woog; Kim, Young-Kil; Lee, Yunkeun
This paper introduces a Dialog-Based Computer Assisted second-Language Learning (DB-CALL) system using task-oriented dialogue processing technology. The system promotes dialogue with a second-language learner for a specific task, such as purchasing tour tickets, ordering food, passing through immigration, etc. The dialog system plays a role of a…
Cha, Kijoo; Goldenberg, Claude
This study examined how emergent bilingual children's English and Spanish proficiencies moderated the relationships between Spanish and English input at home (bilingual home language input [BHLI]) and children's oral language skills in each language. The sample comprised over 1,400 Spanish-dominant kindergartners in California and Texas. BHLI was…
Harris, David; Bennet, Lisa; Bant, Sharyn
Objectives: Although it has been established that bilateral cochlear implants (CIs) offer additional speech perception and localization benefits to many children with severe to profound hearing loss, it has not yet been clearly established whether these improved perceptual abilities facilitate significantly better language development. The aims of this study were to compare the language abilities of children with unilateral and bilateral CIs, to quantify the rate of any improvement in language attributable to bilateral CIs, and to document other predictors of language development in children with CIs. Design: The receptive vocabulary and language development of 91 children were assessed at age 5 or 8 years using the Peabody Picture Vocabulary Test (fourth edition) and either the Preschool Language Scales (fourth edition) or the Clinical Evaluation of Language Fundamentals (fourth edition), respectively. Cognitive ability, parent involvement in children's intervention or education programs, and family reading habits were also evaluated. Language outcomes were examined using linear regression analyses, and elements of parenting style, child characteristics, and family background were examined as predictors of outcomes. Results: Children using bilateral CIs achieved significantly better vocabulary outcomes and significantly higher scores on the Core and Expressive Language subscales of the Clinical Evaluation of Language Fundamentals (fourth edition) than did comparable children with unilateral CIs. Scores on the Preschool Language Scales (fourth edition) did not differ significantly between children with unilateral and bilateral CIs. Bilateral CI use was found to predict significantly faster rates of vocabulary and language development than unilateral CI use; the magnitude of this effect was moderated by the child's age at activation of the bilateral CI. In terms of parenting style, high levels of parental involvement, low amounts of…
Pimperton, Hannah; Kreppner, Jana; Mahon, Merle; Stevenson, Jim; Terlektsi, Emmanouela; Worsfold, Sarah; Yuen, Ho Ming; Kennedy, Colin R
This study aimed to examine whether (a) exposure to universal newborn hearing screening (UNHS) and (b) early confirmation of hearing loss were associated with benefits to expressive and receptive language outcomes in the teenage years for a cohort of spoken language users. It also aimed to determine whether either of these two variables was associated with benefits to relative language gain from middle childhood to adolescence within this cohort. The participants were drawn from a prospective cohort study of a population sample of children with bilateral permanent childhood hearing loss, who varied in their exposure to UNHS and who had previously had their language skills assessed at 6-10 years. Sixty deaf or hard of hearing teenagers who were spoken language users and a comparison group of 38 teenagers with normal hearing completed standardized measures of receptive and expressive language ability at 13-19 years. Teenagers exposed to UNHS did not show significantly better expressive (adjusted mean difference, 0.40; 95% confidence interval [CI], -0.26 to 1.05; d = 0.32) or receptive (adjusted mean difference, 0.68; 95% CI, -0.56 to 1.93; d = 0.28) language skills than those who were not. Those who had their hearing loss confirmed by 9 months of age did not show significantly better expressive (adjusted mean difference, 0.43; 95% CI, -0.20 to 1.05; d = 0.35) or receptive (adjusted mean difference, 0.95; 95% CI, -0.22 to 2.11; d = 0.42) language skills than those who had it confirmed later. In all cases, effect sizes were small and in favor of those exposed to UNHS or confirmed by 9 months. Subgroup analysis indicated larger beneficial effects of early confirmation for those deaf or hard of hearing teenagers without cochlear implants (N = 48; 80% of the sample), and these benefits were significant for receptive language outcomes (adjusted mean difference, 1.55; 95% CI, 0.38 to 2.71; d = 0.78). Exposure to UNHS did not account for significant…
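The d values reported above are Cohen's d, a standardized mean difference. Below is a minimal sketch of that metric with invented scores; the study's own d values came from covariate-adjusted models, which this sketch does not reproduce.

```python
# Cohen's d: mean difference divided by the pooled standard deviation.
# The score lists below are hypothetical, for illustration only.
import math

def cohens_d(group1, group2):
    n1, n2 = len(group1), len(group2)
    m1 = sum(group1) / n1
    m2 = sum(group2) / n2
    # Sample variances (Bessel-corrected):
    v1 = sum((x - m1) ** 2 for x in group1) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in group2) / (n2 - 1)
    pooled_sd = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

screened = [102, 98, 105, 99, 101, 97]    # hypothetical language scores
unscreened = [100, 95, 97, 99, 96, 98]
print(round(cohens_d(screened, unscreened), 2))
```

By convention, d around 0.2 is a small effect and around 0.8 a large one, which is why the abstract describes its 0.28-0.42 values as small.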
Social Security Administration — This data set provides annual volumes for language preferences at the national level of individuals filing claims for ESRD Medicare benefits for federal fiscal years...
Social Security Administration — This data set provides quarterly volumes for language preferences at the national level of individuals filing claims for SSI Aged benefits for fiscal years 2014 -...
Social Security Administration — This data set provides quarterly volumes for language preferences at the national level of individuals filing claims for ESRD Medicare benefits for fiscal years 2014...
Social Security Administration — This data set provides annual volumes for language preferences at the national level of individuals filing claims for ESRD Medicare benefits for federal fiscal year...
Social Security Administration — This data set provides annual volumes for language preferences at the national level of individuals filing claims for ESRD Medicare benefits from federal fiscal year...
Sindorela Doli Kryeziu; Gentiana Muhaxhiri
In this paper we have tried to clarify the problems faced by speakers of the Gheg dialect in Gjakova, who have shown varying degrees of difficulty in acquiring the standard. The standard language is part of the language of the people, but raised to a norm according to scientific criteria. From this observation it becomes clearly understandable that the standard variety and the dialectal variety are inseparable and, as such, represent a macro-linguistic unity. As part of this macro-linguistic u...
Suskind, Dana L; Graf, Eileen; Leffel, Kristin R; Hernandez, Marc W; Suskind, Elizabeth; Webber, Robert; Tannenbaum, Sally; Nevins, Mary Ellen
To investigate the impact of a spoken language intervention curriculum aimed at improving the language environments that caregivers of low socioeconomic status (SES) provide for their D/HH children with CIs and HAs, in order to support the children's spoken language development. Quasi-experimental. Tertiary. Thirty-two caregiver-child dyads of low SES (as defined by caregiver education ≤ MA/MS and the income proxies Medicaid or WIC/LINK) and children aged … curriculum designed to improve D/HH children's early language environments. Changes in caregiver knowledge of child language development (questionnaire scores) and language behavior (word types, word tokens, utterances, mean length of utterance [MLU], LENA Adult Word Count [AWC], Conversational Turn Count [CTC]). Significant increases in caregiver questionnaire scores as well as utterances, word types, word tokens, and MLU were found in the treatment but not the control group. There were no significant changes in LENA outcomes. The results partially support the notion that caregiver-directed language enrichment interventions can change the home language environments of D/HH children from low-SES backgrounds. Further longitudinal studies are necessary.
Schneider, Bruce A; Avivi-Reich, Meital; Daneman, Meredyth
Comprehending spoken discourse in noisy situations is likely to be more challenging to older adults than to younger adults due to potential declines in the auditory, cognitive, or linguistic processes supporting speech comprehension. These challenges might force older listeners to reorganize the ways in which they perceive and process speech, thereby altering the balance between the contributions of bottom-up versus top-down processes to speech comprehension. The authors review studies that investigated the effect of age on listeners' ability to follow and comprehend lectures (monologues), and two-talker conversations (dialogues), and the extent to which individual differences in lexical knowledge and reading comprehension skill relate to individual differences in speech comprehension. Comprehension was evaluated after each lecture or conversation by asking listeners to answer multiple-choice questions regarding its content. Once individual differences in speech recognition for words presented in babble were compensated for, age differences in speech comprehension were minimized if not eliminated. However, younger listeners benefited more from spatial separation than did older listeners. Vocabulary knowledge predicted the comprehension scores of both younger and older listeners when listening was difficult, but not when it was easy. However, the contribution of reading comprehension to listening comprehension appeared to be independent of listening difficulty in younger adults but not in older adults. The evidence suggests (1) that most of the difficulties experienced by older adults are due to age-related auditory declines, and (2) that these declines, along with listening difficulty, modulate the degree to which selective linguistic and cognitive abilities are engaged to support listening comprehension in difficult listening situations. When older listeners experience speech recognition difficulties, their attentional resources are more likely to be deployed to
Despite the abundance of electronic corpora now available to researchers, corpora of natural speech are still relatively rare and relatively costly. This paper suggests reasons why spoken corpora are needed, despite the formidable problems of construction. The multiple purposes of such corpora and the involvement of very different kinds of language communities in such projects mean that there is no one single blueprint for the design, markup, and distribution of spoken corpora. A number of different spoken corpora are reviewed to illustrate a range of possibilities for the construction of spoken corpora.
Ortega, Gerardo; Morgan, Gary
There is growing interest in learners' cognitive capacities to process a second language (L2) at first exposure to the target language. Evidence suggests that L2 learners are capable of processing novel words by exploiting phonological information from their first language (L1). Hearing adult learners of a sign language, however, cannot fall back…
Lexical sound symbolism in language appears to exploit the feature associations embedded in cross-sensory correspondences. For example, words incorporating relatively high acoustic frequencies (i.e., front/close rather than back/open vowels) are deemed more appropriate as names for concepts associated with brightness, lightness in weight,…
Koyalan, Aylin; Mumford, Simon
The process of writing journal articles is increasingly being seen as a collaborative process, especially where the authors are English as an Additional Language (EAL) academics. This study examines the changes made in terms of register to EAL writers' journal articles by a native-speaker writing centre advisor at a private university in Turkey.…
Blumenfeld, Henrike K.; Marian, Viorica
Accounts of bilingual cognitive advantages suggest an associative link between cross-linguistic competition and inhibitory control. We investigate this link by examining English-Spanish bilinguals’ parallel language activation during auditory word recognition and nonlinguistic Stroop performance. Thirty-one English-Spanish bilinguals and 30 English monolinguals participated in an eye-tracking study. Participants heard words in English (e.g., comb) and identified corresponding pictures from a display that included pictures of a Spanish competitor (e.g., conejo, English rabbit). Bilinguals with higher Spanish proficiency showed more parallel language activation and smaller Stroop effects than bilinguals with lower Spanish proficiency. Across all bilinguals, stronger parallel language activation between 300–500 ms after word onset was associated with smaller Stroop effects; between 633–767 ms, reduced parallel language activation was associated with smaller Stroop effects. Results suggest that bilinguals who perform well on the Stroop task show increased cross-linguistic competitor activation during early stages of word recognition and decreased competitor activation during later stages of word recognition. Findings support the hypothesis that cross-linguistic competition impacts domain-general inhibition. PMID:24244842
Klein, Evelyn R.; Armstrong, Sharon Lee; Shipon-Blum, Elisa
Children with selective mutism (SM) display a failure to speak in select situations despite speaking when comfortable. The purpose of this study was to obtain valid assessments of receptive and expressive language in 33 children (ages 5 to 12) with SM. Because some children with SM will speak to parents but not a professional, another purpose was…
Toledo, Paloma; Eosakul, Stanley T; Grobman, William A; Feinglass, Joe; Hasnain-Wynia, Romana
Hispanic women are less likely than non-Hispanic Caucasian women to use neuraxial labor analgesia. It is unknown whether there is a disparity in anticipated or actual use of neuraxial labor analgesia among Hispanic women based on primary language (English versus Spanish). In this 3-year retrospective, single-institution, cross-sectional study, we extracted electronic medical record data on Hispanic nulliparous women with vaginal deliveries who were insured by Medicaid. On admission, patients self-identified their primary language and anticipated analgesic use for labor. Extracted data included age, marital status, labor type, delivery provider (obstetrician or midwife), and anticipated and actual analgesic use. Household income was estimated from census data geocoded by zip code. Multivariable logistic regression models were estimated for anticipated and actual neuraxial analgesia use. Among 932 Hispanic women, 182 self-identified as primary Spanish speakers. Spanish-speaking Hispanic women were less likely than English-speaking women both to anticipate and to use neuraxial analgesia. After controlling for confounders, there was an association between primary language and anticipated neuraxial analgesia use (adjusted relative risk, Spanish- versus English-speaking women: 0.70; 97.5% confidence interval, 0.53-0.92). Similarly, there was an association between language and actual neuraxial analgesia use (adjusted relative risk, Spanish- versus English-speaking women: 0.88; 97.5% confidence interval, 0.78-0.99). The use of a midwife rather than an obstetrician also decreased the likelihood of both anticipating and using neuraxial analgesia. A language-based disparity was found in neuraxial labor analgesia use. It is possible that there are communication barriers in knowledge or understanding of analgesic options. Further research is necessary to determine the cause of this association.
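The effect estimates above are relative risks. As a hedged illustration with invented counts, an unadjusted risk ratio can be computed from a 2x2 table as follows; the study's figures were adjusted estimates from multivariable regression, which this sketch does not attempt to reproduce.

```python
# Unadjusted relative risk (risk ratio) from a 2x2 table.
# All counts below are hypothetical, for illustration only.

def relative_risk(exposed_events, exposed_total,
                  unexposed_events, unexposed_total):
    """Risk ratio: P(event | exposed) / P(event | unexposed)."""
    risk_exposed = exposed_events / exposed_total
    risk_unexposed = unexposed_events / unexposed_total
    return risk_exposed / risk_unexposed

# Hypothetical counts of neuraxial analgesia use among Spanish-speaking
# (exposed) versus English-speaking (unexposed) patients:
rr = relative_risk(140, 182, 660, 750)
print(round(rr, 2))
```

A risk ratio below 1 means the event is less frequent in the exposed group; adjusted versions additionally control for confounders such as age, provider type, and income.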
Werfel, Krystal L
The purpose of this study was to compare change in emergent literacy skills of preschool children with and without hearing loss over a 6-month period. Participants included 19 children with hearing loss and 14 children with normal hearing. Children with hearing loss used amplification and spoken language. Participants completed measures of oral language, phonological processing, and print knowledge twice, at a 6-month interval. A series of repeated-measures analyses of variance was used to compare change across groups. Main effects of time were observed for all variables except phonological recoding. Main effects of group were observed for vocabulary, morphosyntax, phonological memory, and concepts of print. Children with hearing loss performed more poorly than children with normal hearing on measures of oral language, phonological memory, and conceptual print knowledge. Two interaction effects were present: for phonological awareness and concepts of print, children with hearing loss demonstrated less positive change than children with normal hearing. Although children with hearing loss generally demonstrated positive growth in emergent literacy skills, their initial performance was lower than that of children with normal hearing, and their rates of change were not sufficient to catch up to their peers over time.
Recent studies of eye movements in world-situated language comprehension have demonstrated that rapid processing of morphosyntactic information (e.g., grammatical gender and number marking) can produce anticipatory eye movements to referents in the visual scene. We investigated how the type of morphosyntactic information and the goals of language users in comprehension affected eye movements, focusing on the processing of grammatical number morphology in English-speaking adults. Participants' eye movements were recorded as they listened to simple English declarative ("There are the lions.") and interrogative ("Where are the lions?") sentences. In Experiment 1, no differences were observed in the speed to fixate target referents when grammatical number information was informative relative to when it was not. The same result was obtained in a speeded task (Experiment 2) and in a task using mixed sentence types (Experiment 3). We conclude that grammatical number processing in English and eye movements to potential referents are not tightly coordinated. These results suggest limits on the role of predictive eye movements in concurrent linguistic and scene processing. We discuss how these results can inform and constrain predictive approaches to language processing.
The commercial successes of spoken dialog systems in the developed world provide encouragement for their use in the developing world, where speech could play a role in the dissemination of relevant information in local languages. We investigate...
Westerveld, Marleen F; Gillon, Gail T
This investigation explored the effects of oral narrative elicitation context on children's spoken language performance. Oral narratives were produced by a group of 11 children with reading disability (aged between 7;11 and 9;3) and an age-matched control group of 11 children with typical reading skills in three different contexts: story retelling, story generation, and personal narratives. In the story retelling condition, the children listened to a story on tape while looking at the pictures in a book, before being asked to retell the story without the pictures. In the story generation context, the children were shown a picture containing a scene and were asked to make up their own story. Personal narratives were elicited with the help of photos and short narrative prompts. The transcripts were analysed at microstructure level on measures of verbal productivity, semantic diversity, and morphosyntax. Consistent with previous research, the results revealed no significant interactions between group and context, indicating that the two groups of children responded to the type of elicitation context in a similar way. There was a significant group effect, however, with the typical readers showing better performance overall on measures of morphosyntax and semantic diversity. There was also a significant effect of elicitation context with both groups of children producing the longest, linguistically most dense language samples in the story retelling context. Finally, the most significant differences in group performance were observed in the story retelling condition, with the typical readers outperforming the poor readers on measures of verbal productivity, number of different words, and percent complex sentences. The results from this study confirm that oral narrative samples can distinguish between good and poor readers and that the story retelling condition may be a particularly useful context for identifying strengths and weaknesses in oral narrative performance.
Reports on a morpheme study conducted with learners of English as a foreign language in Poland which found some similarities between foreign- and second-language accuracy in orders of morphemes. (Author/CB)
Dokter, Nanke; Aarts, Rian; Kurvers, Jeanne; Ros, Anje; Kroon, Sjaak
Students who are proficient academic language (AL) users achieve better in school. To develop students’ AL register, teachers’ AL input is necessary. The goal of this study was to investigate the extent of AL features in the language input that first- and second-grade teachers give their students in whole…
Chandrasekaran, Bharath; Kraus, Nina; Wong, Patrick C M
A challenge to learning words of a foreign language is encoding nonnative phonemes, a process typically attributed to cortical circuitry. Using multimodal imaging methods [functional magnetic resonance imaging-adaptation (fMRI-A) and auditory brain stem responses (ABR)], we examined the extent to which pretraining pitch encoding in the inferior colliculus (IC), a primary midbrain structure, related to individual variability in learning to successfully use nonnative pitch patterns to distinguish words in American English-speaking adults. fMRI-A indexed the efficiency of pitch representation localized to the IC, whereas ABR quantified midbrain pitch-related activity with millisecond precision. In line with neural "sharpening" models, we found that efficient IC pitch pattern representation (indexed by fMRI) related to superior neural representation of pitch patterns (indexed by ABR), and consequently more successful word learning following sound-to-meaning training. Our results establish a critical role for the IC in speech-sound representation, consistent with the established role for the IC in the representation of communication signals in other animal models.
Krashen, Stephen D.
Involuntary mental rehearsal of foreign language words, sounds, and phrases is found consistent with current second language acquisition theory and case history reports. It is suggested the "din in the head" results from stimulation of the language acquisition device set off when the acquirer receives enough comprehensible input. (Author/MSE)
Hannagan, Thomas; Magnuson, James S.; Grainger, Jonathan
How do we map the rapid input of spoken language onto phonological and lexical representations over time? Attempts at psychologically-tractable computational models of spoken word recognition tend either to ignore time or to transform the temporal input into a spatial representation. TRACE, a connectionist model with broad and deep coverage of speech perception and spoken word recognition phenomena, takes the latter approach, using exclusively time-specific units at every level of representation. TRACE reduplicates featural, phonemic, and lexical inputs at every time step in a large memory trace, with rich interconnections (excitatory forward and backward connections between levels and inhibitory links within levels). As the length of the memory trace is increased, or as the phoneme and lexical inventory of the model is increased to a realistic size, this reduplication of time- (temporal position) specific units leads to a dramatic proliferation of units and connections, begging the question of whether a more efficient approach is possible. Our starting point is the observation that models of visual object recognition—including visual word recognition—have grappled with the problem of spatial invariance, and arrived at solutions other than a fully-reduplicative strategy like that of TRACE. This inspires a new model of spoken word recognition that combines time-specific phoneme representations similar to those in TRACE with higher-level representations based on string kernels: temporally independent (time invariant) diphone and lexical units. This reduces the number of necessary units and connections by several orders of magnitude relative to TRACE. Critically, we compare the new model to TRACE on a set of key phenomena, demonstrating that the new model inherits much of the behavior of TRACE and that the drastic computational savings do not come at the cost of explanatory power. PMID:24058349
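The string-kernel idea described above can be illustrated with a minimal sketch. This is not the authors' implementation; the phoneme notation and the optional gap limit are illustrative assumptions. The point is that a word maps to counts of ordered phoneme pairs (open diphones), a representation that is identical wherever in time the word occurs, so no time-specific unit duplication is needed.

```python
from itertools import combinations
from collections import Counter

def diphone_vector(phonemes, max_gap=None):
    """Count ordered phoneme pairs (open diphones) in a word.

    The result is time-invariant: the pair (/k/, /t/) is encoded the
    same way regardless of when the word starts in the speech stream.
    With max_gap=1 only adjacent diphones are counted.
    """
    counts = Counter()
    for i, j in combinations(range(len(phonemes)), 2):
        if max_gap is None or j - i <= max_gap:
            counts[(phonemes[i], phonemes[j])] += 1
    return counts

# "cat" /k ae t/ and "tack" /t ae k/ share the same phonemes but in a
# different order, so their diphone vectors differ - order is preserved
# without position-specific units.
print(diphone_vector(["k", "ae", "t"]))
print(diphone_vector(["t", "ae", "k"]))
```

Because the vector grows with the diphone inventory rather than with the length of the memory trace, this is one way a model can avoid TRACE's proliferation of reduplicated units.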
Colin, C; Zuinen, T; Bayard, C; Leybaert, J
Sign languages (SL), like oral languages (OL), organize elementary, meaningless units into meaningful semantic units. Our aim was to compare, at the behavioral and neurophysiological levels, the processing of the location parameter in French Belgian SL to that of the rhyme in oral French. Ten hearing and 10 profoundly deaf adults performed a rhyme judgment task in OL and a similarity judgment on location in SL. Stimuli were pairs of pictures. As regards OL, deaf subjects' performances, although above chance level, were significantly lower than those of hearing subjects, suggesting that a metaphonological analysis is possible for deaf people but rests on phonological representations that are less precise than in hearing people. As regards SL, deaf subjects' scores indicated that a metaphonological judgment may be performed on location. The contingent negative variation (CNV) evoked by the first picture of a pair was similar in hearing subjects in OL and in deaf subjects in OL and SL. However, an N400 evoked by the second picture of the non-rhyming pairs was evidenced only in hearing subjects in OL. The absence of an N400 in deaf subjects may be interpreted as the failure to associate two words according to their rhyme in OL or to their location in SL. Although deaf participants can perform metaphonological judgments in OL, they differ from hearing participants both behaviorally and in ERPs. Judgment of location in SL is possible for deaf signers but, contrary to rhyme judgment in hearing participants, does not elicit any N400. Copyright © 2013 Elsevier Masson SAS. All rights reserved.
Kwon, Youan; Choi, Sungmook; Lee, Yoonhyoung
This study examines whether orthographic information is used during prelexical processes in spoken word recognition by investigating ERPs during spoken word processing for Korean words. Differential effects due to orthographic syllable neighborhood size and sound-to-spelling consistency on P200 and N320 were evaluated by recording ERPs from 42 participants during a lexical decision task. The results indicate that P200 was smaller for words whose orthographic syllable neighbors are large in number rather than those that are small. In addition, a word with a large orthographic syllable neighborhood elicited a smaller N320 effect than a word with a small orthographic syllable neighborhood only when the word had inconsistent sound-to-spelling mapping. The results provide support for the assumption that orthographic information is used early during the prelexical spoken word recognition process. © 2015 Society for Psychophysiological Research.
This dissertation examines how adult learning of novel language structures is affected by the characteristics of the language input that learners are exposed to and by learners’ cognitive aptitudes, such as analytical ability and working memory capacity. In a series of experiments, adult native
Schreibman, Laura; Stahmer, Aubyn C
Presently there is no consensus on the specific behavioral treatment of choice for targeting language in young nonverbal children with autism. This randomized clinical trial compared the effectiveness of a verbally-based intervention, Pivotal Response Training (PRT) to a pictorially-based behavioral intervention, the Picture Exchange Communication System (PECS) on the acquisition of spoken language by young (2-4 years), nonverbal or minimally verbal (≤9 words) children with autism. Thirty-nine children were randomly assigned to either the PRT or PECS condition. Participants received on average 247 h of intervention across 23 weeks. Dependent measures included overall communication, expressive vocabulary, pictorial communication and parent satisfaction. Children in both intervention groups demonstrated increases in spoken language skills, with no significant difference between the two conditions. Seventy-eight percent of all children exited the program with more than 10 functional words. Parents were very satisfied with both programs but indicated PECS was more difficult to implement.
Chen, Wei; Mostow, Jack; Aist, Gregory
Free-form spoken input would be the easiest and most natural way for young children to communicate to an intelligent tutoring system. However, achieving such a capability poses a challenge both to instruction design and to automatic speech recognition. To address the difficulties of accepting such input, we adopt the framework of predictable…
Shneidman, Laura A; Goldin-Meadow, Susan
Theories of language acquisition have highlighted the importance of adult speakers as active participants in children's language learning. However, in many communities children are reported to be directly engaged by their caregivers only rarely (Lieven, 1994). This observation raises the possibility that these children learn language from observing, rather than participating in, communicative exchanges. In this paper, we quantify naturally occurring language input in one community where directed interaction with children has been reported to be rare (Yucatec Mayan). We compare this input to the input heard by children growing up in large families in the United States, and we consider how directed and overheard input relate to Mayan children's later vocabulary. In Study 1, we demonstrate that 1-year-old Mayan children do indeed hear a smaller proportion of total input in directed speech than children from the US. In Study 2, we show that for Mayan (but not US) children, there are great increases in the proportion of directed input that children receive between 13 and 35 months. In Study 3, we explore the validity of using videotaped data in a Mayan village. In Study 4, we demonstrate that word types directed to Mayan children from adults at 24 months (but not word types overheard by children or word types directed from other children) predict later vocabulary. These findings suggest that adult talk directed to children is important for early word learning, even in communities where much of children's early language input comes from overheard speech. © 2012 Blackwell Publishing Ltd.
There is abundant research confirming that we pass through three stages on the path to full development of literacy, which includes the acquisition of academic language. The stages are: hearing stories, doing a great deal of self-selected reading, followed by reading for our own interest in our chosen specialization. At stages two and three, the reading is highly interesting or compelling to the reader. It is also specialized; there is no attempt to cover a wide variety. The research confirms that the library, in particular the school library, makes a powerful contribution at all three stages: for many living in poverty it is the only place to find books for recreational or specialized interest reading, with the librarian serving as the guide on how to locate information as well as the supplier of compelling reading. The expertise of certified librarians is pivotal for compelling reading in a foreign language, such as EFL worldwide and ELLs in the US, as well as compelling reading in children’s heritage languages.
Aarts, Rian; Demir-Vegter, Serpil; Kurvers, Jeanne; Henrichs, Lotte
The current study examined academic language (AL) input of mothers and teachers to 15 monolingual Dutch and 15 bilingual Turkish-Dutch 4- to 6-year-old children and its relationships with the children's language development. At two times, shared book reading was videotaped and analyzed for academic features: lexical diversity, syntactic…
Fonseca-Mora, M C; Jara-Jiménez, Pilar; Gómez-Domínguez, María
Based on previous studies showing that phonological awareness is related to reading abilities and that music training improves phonological processing, the aim of the present study was to test the efficiency of a new method for teaching reading in a foreign language. Specifically, we tested the efficacy of a phonological training program, with and without musical support, that aimed at improving early reading skills in 7-8-year-old Spanish children (n = 63) learning English as a foreign language. Of interest was also to explore the impact of this training program on working memory and decoding skills. To achieve these goals we tested three groups of children before and after training: a control group, an experimental group with phonological non-musical intervention (active control), and an experimental group with musical intervention. The results clearly point to the beneficial effects of the phonological teaching approach, but a further impact of the musical support was not demonstrated. Moreover, while children in the music group showed low musical aptitudes before training, they nevertheless performed better than the control group. Therefore, the phonological training program, with and without music support, seems to have significant effects on early reading skills.
Prevoo, Mariëlle J L; Malda, Maike; Mesman, Judi; Emmen, Rosanneke A G; Yeniad, Nihal; Van Ijzendoorn, Marinus H; Linting, Mariëlle
When bilingual children enter formal reading education, host language proficiency becomes increasingly important. This study investigated the relation between socioeconomic status (SES), maternal language use, reading input, and vocabulary in a sample of 111 six-year-old children of first- and second-generation Turkish immigrant parents in the Netherlands. Mothers reported on their language use with the child, frequency of reading by both parents, and availability of children's books in the ethnic and the host language. Children's Dutch and Turkish vocabulary were tested during a home visit. SES was related to maternal language use and to host language reading input. Reading input mediated the relation between SES and host language vocabulary and between maternal language use and host language vocabulary, whereas only maternal language use was related to ethnic language vocabulary. During transition to formal reading education, one should be aware that children from low-SES families receive less host language reading input.
Speech perception runs smoothly and automatically when there is silence in the background, but when the speech signal is degraded by background noise or by reverberation, effortful cognitive processing is needed to compensate for the signal distortion. Previous research has typically investigated the effects of signal-to-noise ratio (SNR) and reverberation time in isolation, whilst few studies have looked at their interaction. In this study, we probed how reverberation time and SNR influence recall of words presented in participants’ first (L1) and second language (L2). A total of 72 children (10 years old) participated in this study. The to-be-recalled wordlists were played back with two different reverberation times (0.3 and 1.2 sec) crossed with two different SNRs (+3 dBA and +12 dBA). Children recalled fewer words when the spoken words were presented in L2 than in L1. Words presented with a high SNR (+12 dBA) were recalled better than words presented with a low SNR (+3 dBA). Reverberation time interacted with SNR such that at +12 dB the shorter reverberation time improved recall, but at +3 dB it impaired recall. The effects of the physical sound variables (SNR and reverberation time) did not interact with language.
Brodie, Kara; Abel, Gary; Burt, Jenni
To investigate if language spoken at home mediates the relationship between ethnicity and doctor-patient communication for South Asian and White British patients. We conducted secondary analysis of patient experience survey data collected from 5870 patients across 25 English general practices. Mixed effect linear regression estimated the difference in composite general practitioner-patient communication scores between White British and South Asian patients, controlling for practice, patient demographics and patient language. There was strong evidence of an association between doctor-patient communication scores and ethnicity. South Asian patients reported scores averaging 3.0 percentage points lower (scale of 0-100) than White British patients (95% CI -4.9 to -1.1, p=0.002). This difference reduced to 1.4 points (95% CI -3.1 to 0.4) after accounting for speaking a non-English language at home; respondents who spoke a non-English language at home reported lower scores than English-speakers (adjusted difference 3.3 points, 95% CI -6.4 to -0.2). South Asian patients rate communication lower than White British patients within the same practices and with similar demographics. Our analysis further shows that this disparity is largely mediated by language. Published by the BMJ Publishing Group Limited.
Reading plays a key role in education and communication in modern society. Learning to read establishes the connections between the visual word form area (VWFA) and language areas responsible for speech processing. Using resting-state functional connectivity (RSFC) and Granger Causality Analysis (GCA) methods, the current developmental study aimed to identify the difference in the relationship between the connections of VWFA-language areas and reading performance in both adults and children. The results showed that: (1) the spontaneous connectivity between the VWFA and the spoken language areas, i.e., the left inferior frontal gyrus/supramarginal gyrus (LIFG/LSMG), was stronger in adults than in children; (2) the spontaneous functional patterns of connectivity between the VWFA and the language network were negatively correlated with reading ability in adults but not in children; (3) the causal influence from the LIFG to the VWFA was negatively correlated with reading ability in adults but not in children; (4) the RSFCs between the left posterior middle frontal gyrus (LpMFG) and the VWFA/LIFG were positively correlated with reading ability in both adults and children; and (5) the causal influence from the LIFG to the LSMG was positively correlated with reading ability in both groups. These findings provide insights into the relationship between the VWFA and the language network for reading, and into the role of the unique features of Chinese in the neural circuits of reading.
Ramírez-Esparza, Nairán; García-Sierra, Adrián; Kuhl, Patricia K.
Language input is necessary for language learning, yet little is known about whether, in natural environments, the speech style and social context of language input to children impacts language development. In the present study we investigated the relationship between language input and language development, examining both the style of parental…
Spencer, Sarah; Clegg, Judy; Stackhouse, Joy; Rush, Robert
Background: Well-documented associations exist between socio-economic background and language ability in early childhood, and between educational attainment and language ability in children with clinically referred language impairment. However, very little research has looked at the associations between language ability, educational attainment and…
Language learning can occur at any time and in any context, whether at home or in a study abroad setting. This article presents the necessary background on the existing literature and previous research about language development in various contexts, more specifically in a study abroad (SA) context. Studying abroad can lead to language development from a number of perspectives. Research findings reveal that language development can take a variety of forms, including grammar, vocabulary, fluency, and communicative skill. These research findings are reviewed in order to provide a clear understanding of this issue. The article then gives a brief explanation of the role of input and interaction in SLA, with some views on it.
Sedgwick, Carole; Garner, Mark
Non-native speakers of English who hold nursing qualifications from outside the UK are required to provide evidence of English language competence by achieving a minimum overall score of Band 7 on the International English Language Testing System (IELTS) academic test. To describe the English language required to deal with the daily demands of nursing in the UK. To compare these abilities with the stipulated levels on the language test. A tracking study was conducted with 4 nurses, and focus groups with 11 further nurses. The transcripts of the interviews and focus groups were analysed thematically for recurrent themes. These findings were then compared with the requirements of the IELTS spoken test. The study was conducted outside the participants' working shifts in busy London hospitals. The participants in the tracking study were selected opportunistically; all were trained in non-English speaking countries. Snowball sampling was used for the focus groups, of whom 4 were non-native and 7 native speakers of English. In the tracking study, each of the 4 nurses was interviewed on four occasions, outside the workplace, and as close to the end of a shift as possible. They were asked to recount their spoken interactions during the course of their shift. The participants in the focus groups were asked to describe their typical interactions with patients, family members, doctors, and nursing colleagues. They were prompted to recall specific instances of frequently occurring communication problems. All interactions were audio-recorded, with the participants' permission, and transcribed. Nurses are at the centre of communication for patient care. They have to use appropriate registers to communicate with a range of health professionals, patients and their families. They must elicit information, calm and reassure, instruct, check procedures, ask for and give opinions, agree and disagree. Politeness strategies are needed to avoid threats to face. They participate in medical…
Oviatt, S; Bernard, J; Levow, G A
Fragile error handling in recognition-based systems is a major problem that degrades their performance, frustrates users, and limits commercial potential. The aim of the present research was to analyze the types and magnitude of linguistic adaptation that occur during spoken and multimodal human-computer error resolution. A semiautomatic simulation method with a novel error-generation capability was used to collect samples of users' spoken and pen-based input immediately before and after recognition errors, and at different spiral depths in terms of the number of repetitions needed to resolve an error. When correcting persistent recognition errors, results revealed that users adapt their speech and language in three qualitatively different ways. First, they increase linguistic contrast through alternation of input modes and lexical content over repeated correction attempts. Second, when correcting with verbatim speech, they increase hyperarticulation by lengthening speech segments and pauses, and increasing the use of final falling contours. Third, when they hyperarticulate, users simultaneously suppress linguistic variability in their speech signal's amplitude and fundamental frequency. These findings are discussed from the perspective of enhancement of linguistic intelligibility. Implications are also discussed for corroboration and generalization of the Computer-elicited Hyperarticulate Adaptation Model (CHAM), and for improved error handling capabilities in next-generation spoken language and multimodal systems.
Alejo, Rafael; Piquer-Píriz, Ana
The present article carries out an in-depth analysis of the differences in motivation, input-related variables and linguistic attainment of the students at two content and language integrated learning (CLIL) schools operating within the same institutional and educational context, the Spanish region of Extremadura, and differing only in terms of…
The input factors that may cause variation in bilingual proficiency were investigated in 38 French-English bilinguals aged six to eight, of middle-to-high socio-economic status, attending an international state school in France. Data on children's current and cumulative language exposure and family background were collected through questionnaires…
The present study focuses on the influence of starting age and input on foreign language learning. In relation to starting age, the study investigates whether early starters in instructional settings achieve the same kind of long-term advantage as learners in naturalistic settings and it complements previous research by using data from oral…
Many recent studies have examined the effectiveness of various types of input enhancement. The following study expands this line of inquiry to include technological applications of language learning by comparing the effectiveness of the computer application of diacritics to a traditional pen-and-paper process among beginning students of French and…
Méndez Orellana, Carolina P; van de Sandt-Koenderman, Mieke E; Saliasi, Emi; van der Meulen, Ineke; Klip, Simone; van der Lugt, Aad; Smits, Marion
Melodic Intonation Therapy (MIT) uses the melodic elements of speech to improve language production in severe nonfluent aphasia. A crucial element of MIT is the melodically intoned auditory input: the patient listens to the therapist singing a target utterance. Such input of melodically intoned language facilitates production, whereas auditory input of spoken language does not. Using a sparse sampling fMRI sequence, we examined the differential auditory processing of spoken and melodically intoned language. Nineteen right-handed healthy volunteers performed an auditory lexical decision task in an event related design consisting of spoken and melodically intoned meaningful and meaningless items. The control conditions consisted of neutral utterances, either melodically intoned or spoken. Irrespective of whether the items were normally spoken or melodically intoned, meaningful items showed greater activation in the supramarginal gyrus and inferior parietal lobule, predominantly in the left hemisphere. Melodically intoned language activated both temporal lobes rather symmetrically, as well as the right frontal lobe cortices, indicating that these regions are engaged in the acoustic complexity of melodically intoned stimuli. Compared to spoken language, melodically intoned language activated sensory motor regions and articulatory language networks in the left hemisphere, but only when meaningful language was used. Our results suggest that the facilitatory effect of MIT may - in part - depend on an auditory input which combines melody and meaning. Combined melody and meaning provide a sound basis for the further investigation of melodic language processing in aphasic patients, and eventually the neurophysiological processes underlying MIT.
The primary condition for success in second or foreign language learning is an adequate environment, which serves as a medium for increasing students’ language exposure so that they can acquire second or foreign language proficiency. This study was designed to propose adequate English language input that can decrease students’ anxiety in reading comprehension performance. Of the four skills, reading can be regarded as especially important because it is assumed to be the central means of learning new information. Some students, however, still encounter many problems in reading because of anxiety while reading. Providing interesting, contextual reading materials and supportive teachers can address this problem, which occurs in many Indonesian classrooms. The study revealed that younger learners of English do not receive an adequate amount of target language input in their learning of English. Hence, it suggests the adoption of extensive reading programs as the most effective means of creating an input-rich environment in EFL learning contexts, and it also suggests that book writers and publishers provide a wide range of books that are appropriate and readable for their students.
Schaefer, Blanca; Stackhouse, Joy; Wells, Bill
There is strong empirical evidence that English-speaking children with spoken language difficulties (SLD) often have phonological awareness (PA) deficits. The aim of this study was to explore longitudinally if this is also true of pre-school children speaking German, a language that makes extensive use of derivational morphemes which may impact on the acquisition of different PA levels. Thirty 4-year-old children with SLD were assessed on 11 PA subtests at three points over a 12-month period and compared with 97 four-year-old typically developing (TD) children. The TD-group had a mean percentage correct of over 50% for the majority of tasks (including phoneme tasks) and their PA skills developed significantly over time. In contrast, the SLD-group improved their PA performance over time on syllable and rhyme, but not on phoneme level tasks. Group comparisons revealed that children with SLD had weaker PA skills, particularly on phoneme level tasks. The study contributes a longitudinal perspective on PA development before school entry. In line with their English-speaking peers, German-speaking children with SLD showed poorer PA skills than TD peers, indicating that the relationship between SLD and PA is similar across these two related but different languages.
This article analyses the features of four expository essays (Texts A/B/C/D) written by secondary school students, with a focus on the differences between spoken and written language. Texts C and D are better written than the other two (Texts A and B), which are considered more spoken in their language use. The language features are…
Janciauskas, Marius; Chang, Franklin
Language learning requires linguistic input, but several studies have found that knowledge of second language (L2) rules does not seem to improve with more language exposure (e.g., Johnson & Newport, 1989). One reason for this is that previous studies did not factor out variation due to the different rules tested. To examine this issue, we reanalyzed grammaticality judgment scores in Flege, Yeni-Komshian, and Liu's (1999) study of L2 learners using rule-related predictors and found that, in addition to the overall drop in performance due to a sensitive period, L2 knowledge increased with years of input. Knowledge of different grammar rules was negatively associated with input frequency of those rules. To better understand these effects, we modeled the results using a connectionist model that was trained using Korean as a first language (L1) and then English as an L2. To explain the sensitive period in L2 learning, the model's learning rate was reduced in an age-related manner. By assigning different learning rates for syntax and lexical learning, we were able to model the difference between early and late L2 learners in input sensitivity. The model's learning mechanism allowed transfer between the L1 and L2, and this helped to explain the differences between different rules in the grammaticality judgment task. This work demonstrates that an L1 model of learning and processing can be adapted to provide an explicit account of how the input and the sensitive period interact in L2 learning. © 2017 The Authors. Cognitive Science - A Multidisciplinary Journal published by Wiley Periodicals, Inc.
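The age-related reduction in learning rate described above can be sketched as follows. The exponential form and the decay constants here are illustrative assumptions, not the study's actual schedule or parameter values; the sketch only shows how separate decay rates for syntax and lexical learning let a late starter keep acquiring vocabulary while syntactic learning slows more sharply.

```python
import math

def learning_rate(age_of_exposure, base_rate, decay):
    """Learning rate reduced with age of first exposure (sensitive period).

    Hypothetical exponential decay; the connectionist model in the study
    may use a different functional form.
    """
    return base_rate * math.exp(-decay * age_of_exposure)

# Assumed decay constants: syntax learning slows faster than lexical
# learning, so a learner first exposed at age 15 still has a relatively
# high lexical rate but a much-reduced syntactic rate.
syntax_rate = learning_rate(age_of_exposure=15, base_rate=0.1, decay=0.12)
lexical_rate = learning_rate(age_of_exposure=15, base_rate=0.1, decay=0.03)
print(f"syntax: {syntax_rate:.4f}, lexical: {lexical_rate:.4f}")
```

With this kind of split, the same amount of L2 input yields different gains for different rule types, which is one way to model the rule-dependent input sensitivity the reanalysis reports.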
Allen, Mark D; Owens, Tyler E
Allen [Allen, M. D. (2005). The preservation of verb subcategory knowledge in a spoken language comprehension deficit. Brain and Language, 95, 255-264] presents evidence from a single patient, WBN, to motivate a theory of lexical processing and representation in which syntactic information may be encoded and retrieved independently of semantic information. In his critique, Kemmerer argues that because Allen depended entirely on preposition-based verb subcategory violations to test WBN's knowledge of correct argument structure, his results, at best, address a "strawman" theory. This argument rests on the assumption that preposition subcategory options are superficial syntactic phenomena which are not represented by argument structure proper. We demonstrate that preposition subcategory is in fact treated as semantically determined argument structure in the theories that Allen evaluated, and thus far from irrelevant. In further discussion of grammatically relevant versus irrelevant semantic features, Kemmerer offers a review of his own studies. However, due to an important design shortcoming in these experiments, we remain unconvinced. Reemphasizing the fact that Allen (2005) never claimed to rule out all semantic contributions to syntax, we propose an improvement in Kemmerer's approach that might provide more satisfactory evidence on the distinction between the kinds of relevant versus irrelevant features his studies have addressed.
Ojima, Shiro; Matsuba-Kurita, Hiroko; Nakamura, Naoko; Hoshino, Takahiro; Hagiwara, Hiroko
Children's foreign-language (FL) learning is a matter of much social as well as scientific debate. Previous behavioral research indicates that starting language learning late in life can lead to problems in phonological processing. Inadequate phonological capacity may impede lexical learning and semantic processing (phonological bottleneck hypothesis). Using both behavioral and neuroimaging data, here we examine the effects of age of first exposure (AOFE) and total hours of exposure (HOE) to English, on 350 Japanese primary-school children's semantic processing of spoken English. Children's English proficiency scores and N400 event-related brain potentials (ERPs) were analyzed in multiple regression analyses. The results showed (1) that later, rather than earlier, AOFE led to higher English proficiency and larger N400 amplitudes, when HOE was controlled for; and (2) that longer HOE led to higher English proficiency and larger N400 amplitudes, whether AOFE was controlled for or not. These data highlight the important role of amount of exposure in FL learning, and cast doubt on the view that starting FL learning earlier always produces better results. Copyright © 2011 Elsevier Ireland Ltd and the Japan Neuroscience Society. All rights reserved.
Reviews what is known about Esperanto as a home language and first language. Cases of Esperanto-speaking families have been recorded since 1919, and in nearly all of the approximately 350 families documented, the language is spoken to the children by the father. The data suggest that this "artificial bilingualism" can be as successful…
Social Security Administration — This data set provides quarterly volumes for language preferences at the national level of individuals filing claims for SSI Blind and Disabled benefits for fiscal...
Social Security Administration — This data set provides quarterly volumes for language preferences at the national level of individuals filing claims for SSI Blind and Disabled benefits from fiscal...
Lederberg, Amy R; Schick, Brenda; Spencer, Patricia E
Childhood hearing loss presents challenges to language development, especially spoken language. In this article, we review existing literature on deaf and hard-of-hearing (DHH) children's patterns and trajectories of language as well as development of theory of mind and literacy. Individual trajectories vary significantly, reflecting access to early identification/intervention, advanced technologies (e.g., cochlear implants), and perceptually accessible language models. DHH children develop sign language in a similar manner as hearing children develop spoken language, provided they are in a language-rich environment. This occurs naturally for DHH children of deaf parents, who constitute 5% of the deaf population. For DHH children of hearing parents, sign language development depends on the age that they are exposed to a perceptually accessible 1st language as well as the richness of input. Most DHH children are born to hearing families who have spoken language as a goal, and such development is now feasible for many children. Some DHH children develop spoken language in bilingual (sign-spoken language) contexts. For the majority of DHH children, spoken language development occurs in either auditory-only contexts or with sign supports. Although developmental trajectories of DHH children with hearing parents have improved with early identification and appropriate interventions, the majority of children are still delayed compared with hearing children. These DHH children show particular weaknesses in the development of grammar. Language deficits and differences have cascading effects in language-related areas of development, such as theory of mind and literacy development.
Bilingual acquisition largely depends on the input that children receive in each language. The more input in a language, the more proficient a child becomes in that language. The current project studied the role of language input among bilingual Frisian-Dutch preschool children (age 2.5-4 years) in…
Alt, Mary; Gutmann, Michelle L
This study was designed to test the word learning abilities of adults with typical language abilities, those with a history of disorders of spoken or written language (hDSWL), and hDSWL plus attention deficit hyperactivity disorder (+ADHD). Sixty-eight adults were required to associate a novel object with a novel label, and then recognize semantic features of the object and phonological features of the label. Participants were tested for overt ability (accuracy) and covert processing (reaction time). The +ADHD group was less accurate at mapping semantic features and slower to respond to lexical labels than both other groups. Different factors correlated with word learning performance for each group. Adults with language and attention deficits are more impaired at word learning than adults with language deficits only. Despite behavioral profiles like typical peers, adults with hDSWL may use different processing strategies than their peers. Readers will be able to: (1) recognize the influence of a dual disability (hDSWL and ADHD) on word learning outcomes; (2) identify factors that may contribute to word learning in adults in terms of (a) the nature of the words to be learned and (b) the language processing of the learner.
Ma, Weiyi; Zhou, Peng; Singh, Leher; Gao, Liqun
The majority of the world's languages rely on both segmental (vowels, consonants) and suprasegmental (lexical tones) information to contrast the meanings of individual words. However, research on early language development has mostly focused on the acquisition of vowel-consonant languages. Developmental research comparing sensitivity to segmental and suprasegmental features in young tone learners is extremely rare. This study examined 2- and 3-year-old monolingual tone learners' sensitivity to vowels and tones. Experiment 1a tested the influence of vowel and tone variation on novel word learning. Vowel and tone variation hindered word recognition efficiency in both age groups. However, tone variation hindered word recognition accuracy only in 2-year-olds, while 3-year-olds were insensitive to tone variation. Experiment 1b demonstrated that 3-year-olds could use tones to learn new words when additional support was provided, and additionally, that Tone 3 words were exceptionally difficult to learn. Experiment 2 confirmed a similar pattern of results when children were presented with familiar words. This study is the first to show that despite the importance of tones in tone languages, vowels maintain primacy over tones in young children's word recognition and that tone sensitivity in word learning and recognition changes between 2 and 3 years of age. The findings suggest that early lexical processes are more tightly constrained by variation in vowels than by tones. Copyright © 2016 Elsevier B.V. All rights reserved.
Houston, K. Todd
Since 1946, Utah State University (USU) has offered specialized coursework in audiology and speech-language pathology, awarding the first graduate degrees in 1948. In 1965, the teacher training program in deaf education was launched. Over the years, the Department of Communicative Disorders and Deaf Education (COMD-DE) has developed a rich history…
Chen, Pei-Hua; Liu, Ting-Wei
Telepractice provides an alternative form of auditory-verbal therapy (eAVT) intervention through videoconferencing; this can be of immense benefit for children with hearing loss, especially those living in rural or remote areas. The effectiveness of eAVT for the language development of Mandarin-speaking preschoolers with hearing loss was…
Nelson, Sarah; McDuffie, Andrea; Banasik, Amy; Tempero Feigles, Robyn; Thurman, Angela John; Abbeduto, Leonard
This study examined the impact of a distance-delivered parent-implemented narrative language intervention on the use of inferential language during shared storytelling by school-aged boys with fragile X syndrome (FXS), an inherited neurodevelopmental disorder. Nineteen school-aged boys with FXS and their biological mothers participated. Dyads were randomly assigned to an intervention or a treatment-as-usual comparison group. Transcripts from all pre- and post-intervention sessions were coded for child use of prompted and spontaneous inferential language coded into various categories. Children in the intervention group used more utterances that contained inferential language than the comparison group at post-intervention. Furthermore, children in the intervention group used more prompted inferential language than the comparison group at post-intervention, but there were no differences between the groups in their spontaneous use of inferential language. Additionally, children in the intervention group demonstrated increases from pre- to post-intervention in their use of most categories of inferential language. This study provides initial support for the utility of a parent-implemented language intervention for increasing the use of inferential language by school-aged boys with FXS, but also suggests the need for additional treatment to encourage spontaneous use. Copyright © 2018 Elsevier Inc. All rights reserved.
Weisberg, Jill; McCullough, Stephen; Emmorey, Karen
Code-blends (simultaneous words and signs) are a unique characteristic of bimodal bilingual communication. Using fMRI, we investigated code-blend comprehension in hearing native ASL-English bilinguals who made a semantic decision (edible?) about signs, audiovisual words, and semantically equivalent code-blends. English and ASL recruited a similar fronto-temporal network with expected modality differences: stronger activation for English in auditory regions of bilateral superior temporal cortex, and stronger activation for ASL in bilateral occipitotemporal visual regions and left parietal cortex. Code-blend comprehension elicited activity in a combination of these regions, and no cognitive control regions were additionally recruited. Furthermore, code-blends elicited reduced activation relative to ASL presented alone in bilateral prefrontal and visual extrastriate cortices, and relative to English alone in auditory association cortex. Consistent with behavioral facilitation observed during semantic decisions, the findings suggest that redundant semantic content induces more efficient neural processing in language and sensory regions during bimodal language integration. PMID:26177161
Šimáčková, Š.; Podlipský, V.J.; Chládková, K.
As a western Slavic language of the Indo-European family, Czech is closest to Slovak and Polish. It is spoken as a native language by nearly 10 million people in the Czech Republic (Czech Statistical Office n.d.). About two million people living abroad, mostly in the USA, Canada, Austria, Germany,
Rowe, Meredith L.; Levine, Susan C.; Fisher, Joan A.; Goldin-Meadow, Susan
Children with unilateral pre- or perinatal brain injury (BI) show remarkable plasticity for language learning. Previous work highlights the important role that lesion characteristics play in explaining individual variation in plasticity in the language development of children with BI. The current study examines whether the linguistic input that children with BI receive from their caregivers also contributes to this early plasticity, and whether linguistic input plays a similar role…
Caridakis, G; Karpouzis, K; Drosopoulos, A; Kollias, S
Modeling and recognizing spatiotemporal, as opposed to static, input is a challenging task, since it incorporates input dynamics as part of the problem. The vast majority of existing methods tackle the problem as an extension of the static counterpart, using dynamics, such as input derivatives, at feature level and adopting artificial intelligence and machine learning techniques originally designed for solving problems that do not specifically address the temporal aspect. The proposed approach deals with the temporal and spatial aspects of the spatiotemporal domain in a discriminative as well as coupled manner. Self-Organizing Maps (SOMs) model the spatial aspect of the problem, while Markov models capture its temporal counterpart. Incorporation of adjacency, both in training and classification, enhances the overall architecture with robustness and adaptability. The proposed scheme is validated both theoretically, through an error propagation study, and experimentally, on the recognition of individual signs performed by different native Greek Sign Language users. Results illustrate the architecture's superiority when compared to Hidden Markov Model techniques and variations, both in terms of classification performance and computational cost. Copyright © 2012 Elsevier Ltd. All rights reserved.
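The SOM-plus-Markov pairing described in this abstract can be sketched in miniature: a small self-organizing map quantizes each frame of a trajectory into a discrete symbol, and a per-class Markov chain scores the resulting symbol sequence. This is an illustrative toy, not the authors' architecture: the 2-D "hand position" features, map size, training schedule, and class data are all invented, and real sign recognition would use high-dimensional tracked features.

```python
import math

class TinySOM:
    """1-D SOM over 2-D inputs; linear initialization keeps units ordered."""
    def __init__(self, n_units, dim):
        self.w = [[i / (n_units - 1)] + [0.5] * (dim - 1) for i in range(n_units)]

    def bmu(self, x):
        # index of the best-matching unit (nearest weight vector)
        return min(range(len(self.w)),
                   key=lambda i: sum((a - b) ** 2 for a, b in zip(self.w[i], x)))

    def train(self, data, epochs=20, lr=0.5, radius=2.0):
        for t in range(epochs):
            a = lr * (1 - t / epochs)                 # decaying learning rate
            r = max(radius * (1 - t / epochs), 0.5)   # shrinking neighborhood
            for x in data:
                b = self.bmu(x)
                for i, wi in enumerate(self.w):
                    h = math.exp(-((i - b) ** 2) / (2 * r * r))
                    for d in range(len(wi)):
                        wi[d] += a * h * (x[d] - wi[d])

class MarkovChain:
    """First-order chain over SOM symbols, with Laplace smoothing."""
    def __init__(self, n_symbols):
        self.counts = [[1.0] * n_symbols for _ in range(n_symbols)]

    def fit(self, seqs):
        for s in seqs:
            for a, b in zip(s, s[1:]):
                self.counts[a][b] += 1

    def log_likelihood(self, s):
        return sum(math.log(self.counts[a][b] / sum(self.counts[a]))
                   for a, b in zip(s, s[1:]))

# Toy data: class 0 drifts rightward, class 1 leftward (2-D position tracks).
def track(start, step, n=10):
    return [(start + i * step, 0.5) for i in range(n)]

trajs = {0: [track(0.0, 0.1) for _ in range(5)],
         1: [track(1.0, -0.1) for _ in range(5)]}

som = TinySOM(n_units=8, dim=2)
som.train([x for ts in trajs.values() for t in ts for x in t])

models = {}
for label, ts in trajs.items():
    mc = MarkovChain(8)
    mc.fit([[som.bmu(x) for x in t] for t in ts])
    models[label] = mc

def classify(t):
    seq = [som.bmu(x) for x in t]
    return max(models, key=lambda k: models[k].log_likelihood(seq))

print(classify(track(0.0, 0.1)))   # expected: 0
print(classify(track(1.0, -0.1)))  # expected: 1
```

The paper's incorporation of adjacency and its error-propagation analysis go well beyond this sketch; the point here is only the division of labor, with the SOM handling the spatial aspect and the Markov model the temporal one.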
Lentz, Tomas O; Kager, René W J
Probabilistic phonotactic knowledge facilitates perception, but categorical phonotactic illegality can cause misperceptions, especially of non-native phoneme combinations. If misperceptions induced by first language (L1) knowledge filter second language input, access to second language (L2) probabilistic phonotactics is potentially blocked for L2 acquisition. The facilitatory effects of L2 probabilistic phonotactics and categorical filtering effects of L1 phonotactics were compared and contrasted in a series of cross-modal priming experiments. Dutch native listeners and L1 Spanish and Japanese learners of Dutch had to perform a lexical decision task on Dutch words that started with /sC/ clusters that were of different degrees of probabilistic wellformedness in Dutch but illegal in Spanish and Japanese. Versions of target words with Spanish illegality resolving epenthesis in the clusters primed the Spanish group, showing an L1 filter; a similar effect was not found for the Japanese group. In addition, words with wellformed /sC/ clusters were recognised faster, showing a positive effect on processing of probabilistic wellformedness. However, Spanish learners with higher proficiency were facilitated to a greater extent by wellformed but epenthesised clusters, showing that although probabilistic learning occurs in spite of the L1 filter, the acquired probabilistic knowledge is still affected by L1 categorical knowledge. Categorical phonotactic and probabilistic knowledge are of a different nature and interact in acquisition.
Prevoo, Mariëlle J. L.; Malda, Maike; Mesman, Judi; Emmen, Rosanneke A. G.; Yeniad, Nihal; Van Ijzendoorn, Marinus; Linting, Mariëlle
When bilingual children enter formal reading education, host language proficiency becomes increasingly important. This study investigated the relation between socioeconomic status (SES), maternal language use, reading input, and vocabulary in a sample of 111 six-year-old children of first- and second-generation Turkish immigrant parents in the…
Language Policies Pursued in the Axis of Othering and in the Process of Converting the Spoken Language of Turks Living in Russia into Their Written Language
Süleyman Kaan YALÇIN (M.A.H.
Full Text Available Language is realized in two ways: spoken language and written language. Every language can have the characteristics of a spoken language; however, not every language can have the characteristics of a written language, since there are some requirements for a language to be deemed a written language. These requirements are selection, coding, standardization and becoming widespread. A language must meet these requirements, in either a natural or an artificial way, to be deemed a written (standard) language. Turkish, which developed as a single written language until the 13th century, was divided into West Turkish and North-East Turkish by meeting the requirements of a written language in a natural way. Following this separation, and through a natural process, it showed some differences within itself; however, the policy of converting the spoken language of each Turkish clan into its own written language, a policy pursued by Russia in a planned way, turned Turkish, which entered the 20th century as a few written languages, into 20 different written languages. The implementation of the discriminatory language policies suggested to the Russian government by missionaries such as Slinky and Ostramov, the forcible imposition of a Cyrillic alphabet full of different and unnecessary signs on each Turkish clan, and the othering activities of the Soviet boarding schools had considerable effects on this process. This study aims to explain that the conversion of the spoken languages of the Turkish societies in Russia into written languages did not result from a natural process; to trace the historical development of the Turkish language, which was shaped into 20 separate written languages only because of the pressure exerted by political will; and to show how Russia subjected the language concept, which is the memory of a nation, to an artificial process.
Zou, Lijuan; Desroches, Amy S.; Liu, Youyi; Xia, Zhichao; Shu, Hua
Orthographic influences in spoken word recognition have been previously examined in alphabetic languages. However, it is unknown whether orthographic information affects spoken word recognition in Chinese, which has a clean dissociation between orthography (O) and phonology (P). The present study investigated orthographic effects using event…
The comparatively small vowel inventory of Bantu languages leads young Bantu learners to produce "undifferentiations," so that, for example, the spoken forms of "hat," "hut," "heart" and "hurt" sound the same to a British ear. The two criteria for a non-native speaker's spoken performance are…
Blank, Idan A; Fedorenko, Evelina
Language comprehension engages a cortical network of left frontal and temporal regions. Activity in this network is language-selective, showing virtually no modulation by nonlinguistic tasks. In addition, language comprehension engages a second network consisting of bilateral frontal, parietal, cingulate, and insular regions. Activity in this "multiple demand" (MD) network scales with comprehension difficulty, but also with cognitive effort across a wide range of nonlinguistic tasks in a domain-general fashion. Given the functional dissociation between the language and MD networks, their respective contributions to comprehension are likely distinct, yet such differences remain elusive. Prior neuroimaging studies have suggested that activity in each network covaries with some linguistic features that, behaviorally, influence on-line processing and comprehension. This sensitivity of the language and MD networks to local input characteristics has often been interpreted, implicitly or explicitly, as evidence that both networks track linguistic input closely, and in a manner consistent across individuals. Here, we used fMRI to directly test this assumption by comparing the BOLD signal time courses in each network across different people (n = 45, men and women) listening to the same story. Language network activity showed fewer individual differences, indicative of closer input tracking, whereas MD network activity was more idiosyncratic and, moreover, showed lower reliability within an individual across repetitions of a story. These findings constrain cognitive models of language comprehension by suggesting a novel distinction between the processes implemented in the language and MD networks. SIGNIFICANCE STATEMENT Language comprehension recruits both language-specific mechanisms and domain-general mechanisms that are engaged in many cognitive processes. In the human cortex, language-selective mechanisms are implemented in the left-lateralized "core language network"…
Brown, C.M.; Berkum, J.J.A. van; Hagoort, P.
A study is presented on the effects of discourse-semantic and lexical-syntactic information during spoken sentence processing. Event-related brain potentials (ERPs) were registered while subjects listened to discourses that ended in a sentence with a temporary syntactic ambiguity. The prior…
Leow, Ronald P.
Investigated the effects of written input enhancement and text length on college students' second-language comprehension and intake. First-year Spanish students were exposed to one of four conditions with enhanced and non-enhanced short and long text. Exposing students to short authentic reading materials facilitated reading comprehension but not…
Morgan, James L.; And Others
The role of cues in language acquisition was examined in three experiments. When the cue marked the phrase structure of sentences, adult subjects successfully learned syntax. When input was identical but lacked that cue, subjects failed to learn significant portions of syntax. (Author/GDC)
Hoog, Brigitte E.; Langereis, Margreet C.; Weerdenburg, Marjolijn; Knoors, Harry E. T.; Verhoeven, Ludo
Background: The spoken language difficulties of children with moderate or severe to profound hearing loss are mainly related to limited auditory speech perception. However, degraded or filtered auditory input as evidenced in children with cochlear implants (CIs) may result in less efficient or slower language processing as well. To provide insight…
Leonard, Laurence B.; Fey, Marc E.; Deevy, Patricia; Bredin-Oja, Shelley L.
We tested four predictions based on the assumption that optional infinitives can be attributed to properties of the input whereby children inappropriately extract nonfinite subject-verb sequences (e.g. the girl run) from larger input utterances (e.g. Does the girl run? Let’s watch the girl run). Thirty children with specific language impairment (SLI) and 30 typically developing children heard novel and familiar verbs that appeared exclusively either in utterances containing nonfinite subject-verb sequences or in simple sentences with the verb inflected for third person singular –s. Subsequent testing showed strong input effects, especially for the SLI group. The results provide support for input-based factors as significant contributors not only to the optional infinitive period in typical development, but also to the especially protracted optional infinitive period seen in SLI. PMID:25076070
Qu, Qingqing; Damian, Markus F
Extensive evidence from alphabetic languages demonstrates a role of orthography in the processing of spoken words. Because alphabetic systems explicitly code speech sounds, such effects are perhaps not surprising. However, it is less clear whether orthographic codes are involuntarily accessed from spoken words in languages with non-alphabetic systems, in which the sound-spelling correspondence is largely arbitrary. We investigated the role of orthography via a semantic relatedness judgment task: native Mandarin speakers judged whether or not spoken word pairs were related in meaning. Word pairs were either semantically related, orthographically related, or unrelated. Results showed that relatedness judgments were made faster for word pairs that were semantically related than for unrelated word pairs. Critically, orthographic overlap on semantically unrelated word pairs induced a significant increase in response latencies. These findings indicate that orthographic information is involuntarily accessed in spoken-word recognition, even in a non-alphabetic language such as Chinese.
Macleod, Andrea An; Fabiano-Smith, Leah; Boegner-Pagé, Sarah; Fontolliet, Salomé
Parents often turn to educators and healthcare professionals for advice on how best to support their child's language development. These professionals frequently suggest implementing the 'one-parent-one-language' approach to ensure consistent exposure to both languages. The goal of this study was to understand how language exposure influences the receptive vocabulary development of simultaneous bilingual children. To this end, we targeted nine German-French children growing up in bilingual families. Their exposure to each language within and outside the home was measured, as were their receptive vocabulary abilities in German and French. The results indicate that children are receiving imbalanced exposure to each language. This imbalance is leading to a slowed development of the receptive vocabulary in the minority language, while the majority language is keeping pace with monolingual peers. The one-parent-one-language approach does not appear to support the development of both of the child's languages in the context described in the present study. Bilingual families may need to consider other options for supporting the bilingual language development of their children. As professionals, we need to provide parents with advice that is based on available data and that is flexible with regard to the current and future needs of the child and his family.
Vernon-Feagans, Lynne; Bratsch-Hines, Mary E
Recent research has suggested that high-quality child care can buffer young children against poorer cognitive and language outcomes when they are at risk for poorer language and readiness skills. Most of this research measured the quality of parenting and of the child care with global observational measures or rating scales that did not specify the exact maternal or caregiver behaviors that might be causally implicated in the buffering of these children from poor outcomes. The current study examined the mother's actual language to her child in the home, and the verbal interactions between the caregiver and child in the child care setting, that might be implicated in the buffering effect of high-quality child care. The sample included 433 rural children from the Family Life Project who were in child care at 36 months of age. Even after controlling for a variety of covariates (including maternal education, income, race, the child's previous skill, child care type, and the overall quality of the home and of the child care environment), observed positive caregiver-child verbal interactions in the child care setting interacted with maternal language complexity and diversity in predicting children's language development. Caregiver-child positive verbal interactions appeared to buffer children from poor language outcomes, both concurrently and two years later, if the children came from homes where observed maternal language complexity and diversity during a picture-book task was lower.
Lipski, John M.
The need to teach students speaking skills in Spanish, and to choose among the many standard dialects spoken in the Hispanic world (as well as literary and colloquial speech), presents a challenge to the Spanish teacher. Some phonetic considerations helpful in solving these problems are offered. (CHK)
Pegrum, Mark; Hartley, Linda; Wechtler, Veronique
Foreign-language cinema has generally been relegated to a minor role in language education. This paper reports on and analyses the results of a survey of attitudes towards foreign film among UK university students of French, German and Spanish. The findings reveal students' limited exposure to and relative lack of familiarity with non-anglophone…
The present study analyses Family Language Policy (FLP) with regard to the language literacy development of children in Ethiopian immigrant families. A gap between linguistic literacy at home and at school hinders smooth societal integration and normative literacy development. This study describes the home literacy patterns shaped by…
Cenoz, J.; Gorter, D.
In this article we explore the role that the linguistic landscape, in the sense of all the written language in the public space, can have in second language acquisition (SLA). The linguistic landscape has symbolic and informative functions and it is multimodal, because it combines visual and printed
Full Text Available The study of discourse is the study of language in actual use. In this article, the writer investigates the phonological features, both segmental and supra-segmental, in the spoken discourse of Indonesian university students. The data were taken from recordings of 15 conversations by 30 students of Bina Nusantara University who were taking the English Entrant subject (TOEFL iBT). The writer concludes that the students are still influenced by their first language in their spoken discourse, which results in English with an Indonesian accent. Even though this does not cause misunderstanding at the moment, it may become problematic if they have to communicate in the real world.
Geers, Ann E; Mitchell, Christine M; Warner-Czyz, Andrea; Wang, Nae-Yuh; Eisenberg, Laurie S
Most children with hearing loss who receive cochlear implants (CI) learn spoken language, and parents must choose early on whether to use sign language to accompany speech at home. We address whether parents' use of sign language before and after CI positively influences auditory-only speech recognition, speech intelligibility, spoken language, and reading outcomes. Three groups of children with CIs from a nationwide database who differed in the duration of early sign language exposure provided in their homes were compared in their progress through elementary grades. The groups did not differ in demographic, auditory, or linguistic characteristics before implantation. Children without early sign language exposure achieved better speech recognition skills over the first 3 years postimplant and exhibited a statistically significant advantage in spoken language and reading near the end of elementary grades over children exposed to sign language. Over 70% of children without sign language exposure achieved age-appropriate spoken language compared with only 39% of those exposed for 3 or more years. Early speech perception predicted speech intelligibility in middle elementary grades. Children without sign language exposure produced speech that was more intelligible (mean = 70%) than those exposed to sign language (mean = 51%). This study provides the most compelling support yet available in CI literature for the benefits of spoken language input for promoting verbal development in children implanted by 3 years of age. Contrary to earlier published assertions, there was no advantage to parents' use of sign language either before or after CI. Copyright © 2017 by the American Academy of Pediatrics.
Rinaldi, M Cristina; Pizzamiglio, Luigi
We present data from right brain-damaged patients, with and without spatial heminattention, which show the influence of hemispatial deficits on spoken language processing. We explored the findings of a previous study, which used an emphatic stress detection task and suggested spatial transcoding of a spoken active sentence in a 'language line'. This transcoding was impaired in its initial portion (the subject-word) when the neglect syndrome was present. By expanding the original methodology, the present study provides a deeper understanding of the level of spoken language processing involved in the heminattentional bias. To ascertain the role played by syntactic structure, active and passive sentences were compared. Sentences composed of musical notes and sequences of unrelated nouns were also compared to determine whether the bias was manifest with any sequence of events (not only linguistic ones) deployed over time, and with a sequence of linguistic events not embedded in a structured syntactic frame. Results showed that heminattention exerted an influence only when a syntactically structured linguistic input (i.e., a sentence with an agent of action, an action, and a recipient of action) was processed, and that it did not interfere when a sequence of non-linguistic sounds or unrelated words was presented. Furthermore, when passing from active to passive sentences, the heminattentional bias was inverted, suggesting that heminattention primarily involves the logical subject of the sentence, which has an inverted position in passive sentences. These results strongly suggest that heminattention acts on the spatial transcoding of the deep structure of spoken language.
Cooper, Angela; Bradlow, Ann R.
Prior research has demonstrated that listeners are sensitive to changes in the indexical (talker-specific) characteristics of speech input, suggesting that these signal-intrinsic features are integrally encoded in memory for spoken words. Given that listeners frequently must contend with concurrent environmental noise, to what extent do they also encode signal-extrinsic details? Native English listeners’ explicit memory for spoken English monosyllabic and disyllabic words was assessed as a fu...
Mishra, Ramesh Kumar; Singh, Niharika
Previous psycholinguistic studies have shown that bilinguals activate lexical items of both the languages during auditory and visual word processing. In this study we examined if Hindi-English bilinguals activate the orthographic forms of phonological neighbors of translation equivalents of the non target language while listening to words either…
Feng, Qiangze; Qi, Hongwei; Fukushima, Toshikazu
Information services accessed via mobile phones provide information directly relevant to subscribers' daily lives and are an area of dynamic market growth worldwide. Although many information services are currently offered by mobile operators, many of the existing solutions require a unique gateway for each service, and it is inconvenient for users to have to remember a large number of such gateways. Furthermore, the Short Message Service (SMS) is very popular in China, and Chinese users would prefer to access these services in natural language via SMS. This chapter describes a Natural Language Based Service Selection System (NL3S) for use with a large number of mobile information services. The system can accept user queries in natural language and navigate the user to the required service. Since it is difficult for existing methods to achieve high accuracy and high coverage and to anticipate which other services a user might want to query, the NL3S is developed based on a Multi-service Ontology (MO) and Multi-service Query Language (MQL). The MO and MQL provide semantic and linguistic knowledge, respectively, to facilitate service selection for a user query and to provide adaptive service recommendations. Experiments show that the NL3S can achieve accuracies of 75-95% and satisfaction rates of 85-95% for processing various styles of natural language queries. A trial involving navigation of 30 different mobile services shows that the NL3S can provide a viable commercial solution for mobile operators.
Goldman, Jerry; Renals, Steve; Bird, Steven; de Jong, Franciska; Federico, Marcello; Fleischhauer, Carl; Kornbluh, Mark; Lamel, Lori; Oard, Douglas W; Stewart, Claire; Wright, Richard
Spoken-word audio collections cover many domains, including radio and television broadcasts, oral narratives, governmental proceedings, lectures, and telephone conversations. The collection, access, and preservation of such data is stimulated by political, economic, cultural, and educational needs. This paper outlines the major issues in the field, reviews the current state of technology, examines the rapidly changing policy issues relating to privacy and copyright, and presents issues relati...
Flores, Glenn; Tomany-Korman, Sandra C
Fifty-five million Americans speak a non-English primary language at home, but little is known about health disparities for children in non-English-primary-language households. Our study objective was to examine whether disparities in medical and dental health, access to care, and use of services exist for children in non-English-primary-language households. The National Survey of Children's Health was a telephone survey in 2003-2004 of a nationwide sample of parents of 102 353 children 0 to 17 years old. Disparities in medical and oral health and health care were examined for children in non-English-primary-language households compared with children in English-primary-language households, both in bivariate analyses and in multivariable analyses that adjusted for 8 covariates (child's age, race/ethnicity, and medical or dental insurance coverage, caregiver's highest educational attainment and employment status, number of children and adults in the household, and poverty status). Children in non-English-primary-language households were significantly more likely than children in English-primary-language households to be poor (42% vs 13%) and Latino or Asian/Pacific Islander. Significantly higher proportions of children in non-English-primary-language households were not in excellent/very good health (43% vs 12%), were overweight/at risk for overweight (48% vs 39%), had teeth in fair/poor condition (27% vs 7%), and were uninsured (27% vs 6%), sporadically insured (20% vs 10%), and lacked dental insurance (39% vs 20%). Children in non-English-primary-language households more often had no usual source of medical care (38% vs 13%), made no medical (27% vs 12%) or preventive dental (14% vs 6%) visits in the previous year, and had problems obtaining specialty care (40% vs 23%). Latino and Asian children in non-English-primary-language households had several unique disparities compared with white children in non-English-primary-language households. Almost all disparities
Nguyen, Minh Thi Thuy; Pham, Hanh Thi; Pham, Tam Minh
This study investigates the combined effects of input enhancement and recasts on a group of Vietnamese EFL learners' performance of constructive criticism during peer review activities. Particularly, the study attempts to find out whether the instruction works for different aspects of pragmatic learning, including the learners' sociopragmatic and…
Tan, Li Hai; Xu, Min; Chang, Chun Qi; Siok, Wai Ting
Written Chinese as a logographic system was developed over 3,000 y ago. Historically, Chinese children have learned to read by learning to associate the visuo-graphic properties of Chinese characters with lexical meaning, typically through handwriting. In recent years, however, many Chinese children have learned to use electronic communication devices based on the pinyin input method, which associates phonemes and English letters with characters. When children use pinyin to key in letters, th...
Pfau, R.; Steinbach, M.; Woll, B.
Sign language linguists show here that all the questions relevant to the linguistic investigation of spoken languages can be asked about sign languages. Conversely, questions that sign language linguists consider - even if spoken language researchers have not asked them yet - should also be asked of spoken languages.
Elaheh Hamed Mahvelati
Many researchers stress the importance of lexical coherence and emphasize the need for teaching collocations at all levels of language proficiency. This study was therefore conducted to measure the relative effectiveness of explicit (consciousness-raising) versus implicit (input flood) collocation instruction with regard to learners' knowledge of both lexical and grammatical collocations. Ninety-five upper-intermediate learners, who were randomly assigned to the control and experimental groups, served as the participants of this study. While one of the experimental groups was provided with the input flood treatment, the other group received explicit collocation instruction. In contrast, the participants in the control group did not receive any instruction on learning collocations. The results of the study, which were collected through a pre-test, an immediate post-test, and a delayed post-test, revealed that although both methods of teaching collocations proved effective, the explicit consciousness-raising approach was significantly superior to the implicit input flood treatment.
Le Clec'H, G; Dehaene, S; Cohen, L; Mehler, J; Dupoux, E; Poline, J B; Lehéricy, S; van de Moortele, P F; Le Bihan, D
Some models of word comprehension postulate that the processing of words presented in different modalities and languages ultimately converges toward common cerebral systems associated with semantic-level processing and that the localization of these systems may vary with the category of semantic knowledge being accessed. We used functional magnetic resonance imaging to investigate this hypothesis with two categories of words, numerals, and body parts, for which the existence of distinct category-specific areas is debated in neuropsychology. Across two experiments, one with a blocked design and the other with an event-related design, a reproducible set of left-hemispheric parietal and prefrontal areas showed greater activation during the manipulation of topographical knowledge about body parts and a right-hemispheric parietal network during the manipulation of numerical quantities. These results complement the existing neuropsychological and brain-imaging literature by suggesting that within the extensive network of bilateral parietal regions active during both number and body-part processing, a subset shows category-specific responses independent of the language and modality of presentation. Copyright 2000 Academic Press.
We describe a 57 year old, right handed, English speaking man initially diagnosed with progressive aphasia. Language assessment revealed inconsistent performance in key areas. Expressive language was reduced to a few short, perseverative phrases. Speech was severely apraxic. Primary modes of communication included gesture, pointing, gaze, physical touch and leading. Responses were 100% accurate when he was provided with written words, with random or inaccurate responses for strictly auditory/verbal input. When instructions to subsequent neuropsychological tests were written instead of spoken, performance improved markedly. A comprehensive audiology assessment revealed no hearing impairment. Neuroimaging was unremarkable. Neurobehavioral evaluation utilizing written input led to diagnoses of word deafness and frontotemporal dementia, resulting in very different management. We highlight the need for alternative modes of language input for assessment and treatment of patients with language comprehension symptoms.
Hoog, B.E. de; Langereis, M.C.; Weerdenburg, M. van; Knoors, H.E.; Verhoeven, L.
BACKGROUND: The spoken language difficulties of children with moderate or severe to profound hearing loss are mainly related to limited auditory speech perception. However, degraded or filtered auditory input as evidenced in children with cochlear implants (CIs) may result in less efficient or
Liu, Y.; Wang, M.; Perfetti, C.A.; Brubaker, B.; Wu, S.M.; MacWhinney, B.
Learning the Chinese tone system is a major challenge to students of Chinese as a second or foreign language. Part of the problem is that the spoken Chinese syllable presents a complex perceptual input that overlaps tone with segments. This complexity can be addressed through directing attention to
By elaborating on the definition of listening comprehension, the characteristics of spoken discourse, the relationship between short-term memory (STM) and long-term memory (LTM), and Krashen's notion of comprehensible input, this paper argues that giving priority to listening comprehension over speaking in the language acquisition process is necessary.
Second languages (L2s) are often learned through spoken and written input, and L2 orthographic forms (spellings) can lead to non-native-like pronunciation. The present study investigated whether orthography can lead experienced L2 learners of English to make a phonological contrast in their speech production that does not exist in…
Leech, Geoffrey; Wilson, Andrew (all of Lancaster University)
Word Frequencies in Written and Spoken English is a landmark volume in the development of vocabulary frequency studies. Whereas previous books have in general given frequency information about the written language only, this book provides information on both speech and writing. It not only gives information about the language as a whole, but also about the differences between spoken and written English, and between different spoken and written varieties of the language. The frequencies are derived from a wide-ranging and up-to-date corpus of English: the British National Corpus.
Bisson, Marie-Josée; van Heuven, Walter J B; Conklin, Kathy; Tunney, Richard J
Prior research has reported incidental vocabulary acquisition with complete beginners in a foreign language (FL), within 8 exposures to auditory and written FL word forms presented with a picture depicting their meaning. However, important questions remain about whether acquisition occurs with fewer exposures to FL words in a multimodal situation and whether there is a repeated exposure effect. Here we report a study where the number of exposures to FL words in an incidental learning phase varied between 2, 4, 6, and 8 exposures. Following the incidental learning phase, participants completed an explicit learning task where they learned to recognize written translation equivalents of auditory FL word forms, half of which had occurred in the incidental learning phase. The results showed that participants performed better on the words they had previously been exposed to, and that this incidental learning effect occurred from as little as 2 exposures to the multimodal stimuli. In addition, repeated exposure to the stimuli was found to have a larger impact on learning during the first few exposures and decrease thereafter, suggesting that the effects of repeated exposure on vocabulary acquisition are not necessarily constant.
Ramscar, Michael; Dye, Melody
Do the production and interpretation of patterns of plural forms in noun-noun compounds reveal the workings of innate constraints that govern morphological processing? The results of previous studies on compounding have been taken to support a number of important theoretical claims: first, that there are fundamental differences in the way that children and adults learn and process regular and irregular plurals; second, that these differences reflect formal constraints that govern the way regular and irregular plurals are processed in language; and third, that these constraints are unlikely to be the product of learning. In a series of seven experiments, we critically assess the evidence that is cited in support of these arguments. The results of our experiments provide little support for the idea that substantively different factors govern the patterns of acquisition, production, and interpretation of regular and irregular plural forms in compounds. Once frequency differences between regular and irregular plurals are accounted for, we find no evidence of any qualitative difference in the patterns of interpretation and production of regular and irregular plural nouns in compounds, in either adults or children. Accordingly, we suggest that the pattern of acquisition of both regular and irregular plurals in compounds is consistent with a simple account, in which children learn the conventions that govern plural compounding using evidence that is readily available in the distribution patterns of adult speech. Copyright © 2010 Elsevier Inc. All rights reserved.
Daniel R Saunders
Information acquisition, the gathering and interpretation of sensory information, is a basic function of mobile organisms. We describe a new method for measuring this ability in humans, using free-recall responses to sensory stimuli which are scored objectively using a "wisdom of crowds" approach. As an example, we demonstrate this metric using perception of video stimuli. Immediately after viewing a 30 s video clip, subjects responded to a prompt to give a short description of the clip in natural language. These responses were scored automatically by comparison to a dataset of responses to the same clip by normally-sighted viewers (the crowd). In this case, the normative dataset consisted of responses to 200 clips by 60 subjects who were stratified by age (range 22 to 85 y) and viewed the clips in the lab, for 2,400 responses, and by 99 crowdsourced participants (age range 20 to 66 y) who viewed clips in their Web browser, for 4,000 responses. We compared different algorithms for computing these similarities and found that a simple count of the words in common had the best performance. It correctly matched 75% of the lab-sourced and 95% of crowdsourced responses to their corresponding clips. We validated the measure by showing that when the amount of information in the clip was degraded using defocus lenses, the shared word score decreased across the five predetermined visual-acuity levels, demonstrating a dose-response effect (N = 15). This approach, of scoring open-ended immediate free recall of the stimulus, is applicable not only to video, but also to other situations where a measure of the information that is successfully acquired is desirable. Information acquired will be affected by stimulus quality, sensory ability, and cognitive processes, so our metric can be used to assess each of these components when the others are controlled.
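The shared-word scoring described above — matching a free-recall response to the clip whose crowd responses share the most words with it — can be sketched as follows. This is a minimal illustrative reading of the method; the function names and toy data are assumptions, not taken from the study's normative dataset:

```python
# Hypothetical sketch of "wisdom of crowds" shared-word scoring.
# A response is scored against the pooled vocabulary of crowd
# responses for each clip; the best-scoring clip is the match.

def shared_word_score(response: str, crowd_responses: list) -> int:
    """Count distinct words the response shares with the crowd's pooled vocabulary."""
    words = set(response.lower().split())
    crowd_words = set()
    for r in crowd_responses:
        crowd_words.update(r.lower().split())
    return len(words & crowd_words)

def match_clip(response: str, norms: dict) -> str:
    """Return the clip id whose crowd responses best match the response."""
    return max(norms, key=lambda clip: shared_word_score(response, norms[clip]))

# Toy normative dataset (illustrative only).
norms = {
    "clip1": ["a dog chases a ball in the park", "dog running after ball"],
    "clip2": ["two people cooking dinner", "a couple cooks in a kitchen"],
}
print(match_clip("a small dog runs to catch the ball", norms))  # → clip1
```

The same score can then be compared across degradation levels (e.g., defocus lenses) to obtain the dose-response curve the abstract reports.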
Input Skewedness, Consistency, and Order of Frequent Verbs in Frequency-Driven Second Language Construction Learning: A Replication and Extension of Casenhiser and Goldberg (2005) to Adult Second Language Acquisition
Recent usage-based language acquisition research has found that three frequency manipulations — (1) skewed input (Casenhiser & Goldberg, 2005), (2) input consistency (Childers & Tomasello, 2001), and (3) order of frequent verbs (Goldberg, Casenhiser, & White, 2007) — facilitated construction learning in children. The present paper addresses…
Boulet, J R; van Zanten, M; McKinley, D W; Gary, N E
The purpose of this study was to gather additional evidence for the validity and reliability of spoken English proficiency ratings provided by trained standardized patients (SPs) in a high-stakes clinical skills examination. Over 2500 candidates who took the Educational Commission for Foreign Medical Graduates' (ECFMG) Clinical Skills Assessment (CSA) were studied. The CSA consists of 10 or 11 timed clinical encounters. Standardized patients evaluate spoken English proficiency and interpersonal skills in every encounter. Generalizability theory was used to estimate the consistency of spoken English ratings. Validity coefficients were calculated by correlating summary English ratings with CSA scores and other external criterion measures. Mean spoken English ratings were also compared by various candidate background variables. The reliability of the spoken English ratings, based on 10 independent evaluations, was high. The magnitudes of the associated variance components indicated that the evaluation of a candidate's spoken English proficiency is unlikely to be affected by the choice of cases or SPs used in a given assessment. Proficiency in spoken English was related to native language (English versus other) and scores from the Test of English as a Foreign Language (TOEFL). The pattern of the relationships, both within assessment components and with external criterion measures, suggests that valid measures of spoken English proficiency are obtained. This result, combined with the high reproducibility of the ratings over encounters and SPs, supports the use of trained SPs to measure spoken English skills in a simulated medical environment.
Behrns, Ingrid; Wengelin, Asa; Broberg, Malin; Hartelius, Lena
The aim of the present study was to explore how a personal narrative told by a group of eight persons with aphasia differed between written and spoken language, and to compare this with findings from 10 participants in a reference group. The stories were analysed through holistic assessments made by 60 participants without experience of aphasia…
Krahmer, E.; Swerts, M.; Theune, Mariet; Weegels, M.
Given the state of the art of current language and speech technology, errors are unavoidable in present-day spoken dialogue systems. Therefore, one of the main concerns in dialogue design is how to decide whether or not the system has understood the user correctly. In human-human communication,
Kobayashi, Yuichiro; Abe, Mariko
The purpose of the present study is to assess second language (L2) spoken English using automated scoring techniques. Automated scoring aims to classify a large set of learners' oral performance data into a small number of discrete oral proficiency levels. In automated scoring, objectively measurable features such as the frequencies of lexical and…
ter Maat, Mark; Heylen, Dirk K.J.; Vilhjálmsson, Hannes; Kopp, Stefan; Marsella, Stacy; Thórisson, Kristinn
This paper introduces Flipper, a specification language and interpreter for Information State Update rules that can be used for developing spoken dialogue systems and embodied conversational agents. The system uses XML-templates to modify the information state and to select behaviours to perform.
English language learners are often more grammatically accurate in writing than in speaking. As students focus on meaning while speaking, their spoken fluency comes at a cost: their grammatical accuracy decreases. The author wanted to find a way to help her students improve their oral grammar; that is, she wanted them to focus on grammar while…
In spite of the vast numbers of articles devoted to vocabulary acquisition in a foreign language, few studies address the contribution of lexical knowledge to spoken fluency. The present article begins with basic definitions of the temporal characteristics of oral fluency, summarizing L1 research over several decades, and then presents fluency…
This paper sets out to examine the phonological interference in the spoken English performance of the Izon speaker. It emphasizes that the level of interference is not just a result of the systemic differences that exist between the two language systems (Izon and English) but also a result of interlanguage factors such ...
Corpus-based grammars, notably "Cambridge Grammar of English," give explicit information on the forms and use of native-speaker grammar, including spoken grammar. Native-speaker norms as a necessary goal in language teaching are contested by supporters of English as a Lingua Franca (ELF); however, this article argues for the inclusion of selected…
de Jong, Franciska M.G.; Heeren, W.F.L.; van Hessen, Adrianus J.; Ordelman, Roeland J.F.; Nijholt, Antinus; Ruiz Miyares, L.; Alvarez Silva, M.R.
Archival practice is shifting from the analogue to the digital world. A specific subset of heritage collections that impose interesting challenges for the field of language and speech technology are spoken word archives. Given the enormous backlog at audiovisual archives of unannotated materials and
Moran, Catherine; Kirk, Cecilia; Powell, Emma
Purpose: The aim of this study was to examine the performance of adolescents with acquired brain injury (ABI) during a spoken persuasive discourse task. Persuasive discourse is frequently used in social and academic settings and is of importance in the study of adolescent language. Method: Participants included 8 adolescents with ABI and 8 peers…
We investigate modeling strategies for English code-switched words as found in a Swahili spoken term detection system. Code switching, where speakers switch language in a conversation, occurs frequently in multilingual environments, and typically...
Ricketts, Jessie; Dockrell, Julie E; Patel, Nita; Charman, Tony; Lindsay, Geoff
This experiment investigated whether children with specific language impairment (SLI), children with autism spectrum disorders (ASD), and typically developing children benefit from the incidental presence of orthography when learning new oral vocabulary items. Children with SLI, children with ASD, and typically developing children (n=27 per group) between 8 and 13 years of age were matched in triplets for age and nonverbal reasoning. Participants were taught 12 mappings between novel phonological strings and referents; half of these mappings were trained with orthography present and half were trained with orthography absent. Groups did not differ on the ability to learn new oral vocabulary, although there was some indication that children with ASD were slower than controls to identify newly learned items. During training, the ASD, SLI, and typically developing groups benefited from orthography to the same extent. In supplementary analyses, children with SLI were matched in pairs to an additional control group of younger typically developing children for nonword reading. Compared with younger controls, children with SLI showed equivalent oral vocabulary acquisition and benefit from orthography during training. Our findings are consistent with current theoretical accounts of how lexical entries are acquired and replicate previous studies that have shown orthographic facilitation for vocabulary acquisition in typically developing children and children with ASD. We demonstrate this effect in SLI for the first time. The study provides evidence that the presence of orthographic cues can support oral vocabulary acquisition, motivating intervention approaches (as well as standard classroom teaching) that emphasize the orthographic form. Copyright © 2015 Elsevier Inc. All rights reserved.
Code-switched English Pronunciation Modeling for Swahili Spoken Term Detection. Neil... Workshop on Spoken Language Technologies for Under-resourced Languages (SLTU 2016), 9-12 May 2016, Yogyakarta, Indonesia; Procedia Computer Science 81 (2016) 128-135. Our research focuses on pronunciation modeling of English (embedded language) words within
Given the nature of spoken text, the first requirement of an appropriate grammar is its ability to account for stretches of language (including recurring types of text or genres), in addition to clause level patterns. Second, the grammatical model needs to be part of a wider theory of language that recognises the functional nature and educational purposes of spoken text. The model also needs to be designed in a sufficiently comprehensive way so as to account for grammatical forms in speech...
This paper presents a novel methodology for spoken document information retrieval over spontaneous speech corpora and for converting the retrieved documents into the corresponding language text. The proposed work involves three major areas, namely spoken keyword detection, spoken document retrieval, and automatic speech recognition. The keyword spotting exploits the distribution-capturing capability of the Auto Associative Neural Network (AANN) for spoken keyword detection. It involves sliding a frame-based keyword template along the audio documents and using a confidence score derived from the normalized squared error of the AANN to search for a match. This work presents a new spoken keyword spotting algorithm. Based on the matches, the spoken documents are retrieved and clustered together. In the speech recognition step, the retrieved documents are converted into the corresponding language text using the AANN classifier. The experiments were conducted using a Dravidian language database, and the results suggest that the proposed method is promising for retrieving the documents relevant to a spoken query and transforming them into the corresponding language text.
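The sliding-template matching step can be sketched as follows, treating the AANN reconstruction error abstractly as a normalized squared error between the window and the keyword template. The confidence mapping, function names, and toy data are illustrative assumptions, not the paper's implementation:

```python
import math

# Sketch of frame-based keyword spotting: slide a T-frame template
# along an utterance's feature frames and report offsets whose
# confidence (derived from normalized squared error) clears a threshold.

def confidence(error):
    """Map a normalized squared error e >= 0 to a confidence in (0, 1]."""
    return math.exp(-error)

def spot_keyword(features, template, threshold=0.9):
    """Return frame offsets where the template matches above threshold."""
    T = len(template)
    norm = sum(x * x for frame in template for x in frame) or 1e-9
    hits = []
    for start in range(len(features) - T + 1):
        window = features[start:start + T]
        # Normalized squared error between window and template frames.
        err = sum((w - t) ** 2
                  for wf, tf in zip(window, template)
                  for w, t in zip(wf, tf)) / norm
        if confidence(err) >= threshold:
            hits.append(start)
    return hits

# Toy example: a 3-frame "keyword" embedded at offset 5 of a 12-frame utterance.
template = [[1.0] * 4 for _ in range(3)]
features = [[0.0] * 4 for _ in range(12)]
for i in range(5, 8):
    features[i] = [1.0] * 4
print(spot_keyword(features, template))  # → [5]
```

In the paper's pipeline, a trained AANN would supply the reconstruction error in place of the direct template difference used here.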
Yu, Ping; Pan, Yingxin; Li, Chen; Zhang, Zengxiu; Shi, Qin; Chu, Wenpei; Liu, Mingzhuo; Zhu, Zhiting
Oral production is an important part in English learning. Lack of a language environment with efficient instruction and feedback is a big issue for non-native speakers' English spoken skill improvement. A computer-assisted language learning system can provide many potential benefits to language learners. It allows adequate instructions and instant…
Vývoj sociální kognice českých neslyšících dětí — uživatelů českého znakového jazyka a uživatelů mluvené češtiny: adaptace testové baterie : Development of Social Cognition in Czech Deaf Children — Czech Sign Language Users and Czech Spoken Language Users: Adaptation of a Test Battery
The present paper describes the process of adapting a set of tasks for testing theory-of-mind competencies, the Theory of Mind Task Battery, for use with the population of Czech Deaf children — both users of Czech Sign Language and those using spoken Czech.
van Dijk, Rick; Boers, Eveline; Christoffels, Ingrid; Hermans, Daan
The quality of interpretations produced by sign language interpreters was investigated. Twenty-five experienced interpreters were instructed to interpret narratives from (a) spoken Dutch to Sign Language of the Netherlands (SLN), (b) spoken Dutch to Sign Supported Dutch (SSD), and (c) SLN to spoken Dutch. The quality of the interpreted narratives…
Brimo, Danielle; Lund, Emily; Sapp, Alysha
Syntax is a language skill purported to support children's reading comprehension. However, researchers who have examined whether children with average and below-average reading comprehension score significantly differently on spoken-syntax assessments report inconsistent results. The aim was to determine whether differences in how syntax is measured affect whether children with average and below-average reading comprehension score significantly differently on spoken-syntax assessments. Studies that included a group-comparison design, children with average and below-average reading comprehension, and a spoken-syntax assessment were selected for review. Fourteen articles from a total of 1281 reviewed met the inclusion criteria. The 14 articles were coded for the age of the children, score on the reading comprehension assessment, type of spoken-syntax assessment, type of syntax construct measured, and score on the spoken-syntax assessment. A random-effects model was used to analyze the difference between the effect sizes of the types of spoken-syntax assessments and the difference between the effect sizes of the syntax constructs measured. There was a significant difference between children with average and below-average reading comprehension on spoken-syntax assessments. Children with average and below-average reading comprehension scored significantly differently on both norm-referenced and researcher-created assessments. However, when the type of construct was compared, children with average and below-average reading comprehension scored significantly differently on assessments that measured knowledge of spoken syntax, but not on assessments that measured awareness of spoken syntax. The results of this meta-analysis confirmed that the type of spoken-syntax assessment, whether norm-referenced or researcher-created, did not explain why some researchers reported no significant differences between children with average and below
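As a generic illustration of the random-effects model mentioned above, the following sketches a standard DerSimonian-Laird pooled-effect computation. This is one common choice for such meta-analyses; the review's actual model, effect sizes, and variances are not reproduced here, and the values below are purely illustrative:

```python
# Hedged sketch: DerSimonian-Laird random-effects pooling of per-study
# effect sizes. Inputs are effect sizes and their within-study variances.

def random_effects_pool(effects, variances):
    """Return (pooled effect, tau^2) under the DL random-effects model."""
    w = [1.0 / v for v in variances]                 # fixed-effect weights
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    # Cochran's Q heterogeneity statistic.
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                    # between-study variance
    w_re = [1.0 / (v + tau2) for v in variances]     # random-effects weights
    pooled = sum(wi * e for wi, e in zip(w_re, effects)) / sum(w_re)
    return pooled, tau2

# Illustrative data: three hypothetical studies.
pooled, tau2 = random_effects_pool([0.5, 0.7, 0.3], [0.04, 0.09, 0.05])
print(round(pooled, 3), round(tau2, 3))
```

Comparing such pooled effects between assessment types (norm-referenced vs researcher-created) or constructs (knowledge vs awareness) is the kind of contrast the meta-analysis reports.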
Strauß, Antje; Kotz, Sonja A; Scharinger, Mathias; Obleser, Jonas
Slow neural oscillations (~1-15 Hz) are thought to orchestrate the neural processes of spoken language comprehension. However, functional subdivisions within this broad range of frequencies are disputed, with most studies hypothesizing only about single frequency bands. The present study utilizes an established paradigm of spoken word recognition (lexical decision) to test the hypothesis that within the slow neural oscillatory frequency range, distinct functional signatures and cortical networks can be identified at least for theta- (~3-7 Hz) and alpha-frequencies (~8-12 Hz). Listeners performed an auditory lexical decision task on a set of items that formed a word-pseudoword continuum: ranging from (1) real words over (2) ambiguous pseudowords (deviating from real words only in one vowel; comparable to natural mispronunciations in speech) to (3) pseudowords (clearly deviating from real words by randomized syllables). By means of time-frequency analysis and spatial filtering, we observed a dissociation into distinct but simultaneous patterns of alpha power suppression and theta power enhancement. Alpha exhibited a parametric suppression as items increasingly matched real words, in line with lowered functional inhibition in a left-dominant lexical processing network for more word-like input. Simultaneously, theta power in a bilateral fronto-temporal network was selectively enhanced for ambiguous pseudowords only. Thus, enhanced alpha power can neurally 'gate' lexical integration, while enhanced theta power might index functionally more specific ambiguity-resolution processes. To this end, a joint analysis of both frequency bands provides neural evidence for parallel processes in achieving spoken word recognition. Copyright © 2014 Elsevier Inc. All rights reserved.
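The abstract above contrasts power in the theta (~3-7 Hz) and alpha (~8-12 Hz) bands. As a rough, self-contained sketch (not the authors' MEG pipeline, which used time-frequency analysis and spatial filtering), band-limited power can be estimated from a sampled signal with a discrete Fourier transform; the 5 Hz test tone below is a synthetic stand-in for neural data:

```python
import math

def band_power(signal, fs, f_lo, f_hi):
    """Summed squared spectral magnitude of `signal` within [f_lo, f_hi] Hz.

    Naive O(n^2) DFT over the one-sided spectrum; fine for short illustrative signals."""
    n = len(signal)
    power = 0.0
    for k in range(n // 2 + 1):
        freq = k * fs / n  # frequency of DFT bin k in Hz
        if f_lo <= freq <= f_hi:
            re = sum(signal[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
            im = -sum(signal[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
            power += (re * re + im * im) / (n * n)
    return power

fs = 100  # sampling rate in Hz (assumed for the demo)
sig = [math.sin(2 * math.pi * 5 * t / fs) for t in range(fs)]  # 1 s of a 5 Hz tone
theta = band_power(sig, fs, 3, 7)   # captures the 5 Hz component
alpha = band_power(sig, fs, 8, 12)  # near zero for this signal
```

For a pure 5 Hz tone, theta-band power dominates while alpha-band power is negligible, mirroring the kind of band-wise dissociation the study reports.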
In Monitoring Adaptive Spoken Dialog Systems, authors Alexander Schmitt and Wolfgang Minker investigate statistical approaches that allow for the recognition of negative dialog patterns in Spoken Dialog Systems (SDS). The presented stochastic methods allow flexible, portable and accurate use. Beginning with the foundations of machine learning and pattern recognition, this monograph examines how frequently users show negative emotions in spoken dialog systems and develops novel approaches to speech-based emotion recognition, using a hybrid approach to model emotions. The authors make use of statistical methods based on acoustic, linguistic and contextual features to examine the relationship between the interaction flow and the occurrence of emotions, using non-acted recordings of several thousand real users from commercial and non-commercial SDS. Additionally, the authors present novel statistical methods that spot problems within a dialog based on interaction patterns. The approaches enable future SDS to offer m…
Carmichael, Lesley; Wright, Richard; Wassink, Alicia Beckford
We are developing a novel, searchable corpus as a research tool for investigating phonetic and phonological phenomena across various speech styles. Five speech styles have been well studied independently in previous work: reduced (casual), careful (hyperarticulated), citation (reading), Lombard effect (speech in noise), and "motherese" (child-directed speech). Few studies to date have collected a wide range of styles from a single set of speakers, and fewer yet have provided publicly available corpora. The pilot corpus includes recordings of (1) a set of speakers participating in a variety of tasks designed to elicit the five speech styles, and (2) casual peer conversations and wordlists to illustrate regional vowels. The data include high-quality recordings and time-aligned transcriptions linked to text files that can be queried. Initial measures drawn from the database provide comparison across speech styles along the following acoustic dimensions: MLU (changes in unit duration); relative intra-speaker intensity changes (mean and dynamic range); and intra-speaker pitch values (minimum, maximum, mean, range). The corpus design will allow for a variety of analyses requiring control of demographic and style factors, including hyperarticulation variety, disfluencies, intonation, discourse analysis, and detailed spectral measures.
Mastrantuono, Eliana; Saldaña, David; Rodríguez-Ortiz, Isabel R
An eye tracking experiment explored the gaze behavior of deaf individuals when perceiving language in spoken and sign language only, and in sign-supported speech (SSS). Participants were deaf ( n = 25) and hearing ( n = 25) Spanish adolescents. Deaf students were prelingually profoundly deaf individuals with cochlear implants (CIs) used by age 5 or earlier, or prelingually profoundly deaf native signers with deaf parents. The effectiveness of SSS has rarely been tested within the same group of children for discourse-level comprehension. Here, video-recorded texts, including spatial descriptions, were alternately transmitted in spoken language, sign language and SSS. The capacity of these communicative systems to equalize comprehension in deaf participants with that of spoken language in hearing participants was tested. Within-group analyses of deaf participants tested if the bimodal linguistic input of SSS favored discourse comprehension compared to unimodal languages. Deaf participants with CIs achieved equal comprehension to hearing controls in all communicative systems while deaf native signers with no CIs achieved equal comprehension to hearing participants if tested in their native sign language. Comprehension of SSS was not increased compared to spoken language, even when spatial information was communicated. Eye movements of deaf and hearing participants were tracked and data of dwell times spent looking at the face or body area of the sign model were analyzed. Within-group analyses focused on differences between native and non-native signers. Dwell times of hearing participants were equally distributed across upper and lower areas of the face while deaf participants mainly looked at the mouth area; this could enable information to be obtained from mouthings in sign language and from lip-reading in SSS and spoken language. Few fixations were directed toward the signs, although these were more frequent when spatial language was transmitted. Both native and
I Nengah Sudipa
This article investigates the spoken ability of German students using Bahasa Indonesia (BI). They have studied it for six weeks in the IBSN Program at Udayana University, Bali, Indonesia. The data was collected when the students sat for the mid-term oral test and was further analyzed with reference to the standard usage of BI. The result suggests that most students managed to express several concepts related to (1) LOCATION; (2) TIME; (3) TRANSPORT; (4) PURPOSE; (5) TRANSACTION; (6) IMPRESSION; (7) REASON; (8) FOOD AND BEVERAGE; and (9) NUMBER AND PERSON. The only problem a few students might encounter is due to influence from their own language system, called interference, especially in word order.
Vol. 68, No. 2 (2017), pp. 305-315. ISSN 0021-5597. R&D Projects: GA ČR GA15-01116S. Institutional support: RVO:68378092. Keywords: correlative conjunctions * spoken Czech * cohesion. Subject RIV: AI - Linguistics. OECD field: Linguistics. http://www.juls.savba.sk/ediela/jc/2017/2/jc17-02.pdf
Thompson, Robert; Tanimoto, Steven; Abbott, Robert; Nielsen, Kathleen; Lyman, Ruby Dawn; Geselowitz, Kira; Habermann, Katrien; Mickail, Terry; Raskind, Marshall; Peverly, Stephen; Nagy, William; Berninger, Virginia
This study in programmatic research on technology-supported instruction first identified, through pretesting using evidence-based criteria, students with persisting specific learning disabilities (SLDs) in written language during middle childhood (grades 4-6) and early adolescence (grades 7-9). Participants then completed computerized writing instruction and posttesting. The 12 computer lessons varied output modes (letter production by stylus alternating with hunt and peck keyboarding versus by pencil with grooves alternating with touch typing on keyboard), input (read or heard source material), and task (notes or summaries). Posttesting and coded notes and summaries showed the effectiveness of computerized writing instruction on both writing tasks for multiple modes of language input and letter production output for improving letter production and related writing skills.
Kowal, Sabine; O'Connell, Daniel C
The following article presents basic concepts and methods of Ragnar Rommetveit's (born 1924) hermeneutic-dialogical approach to everyday spoken dialogue, with a focus on both shared consciousness and linguistically mediated meaning. He originally developed this approach through his engagement with mainstream linguistic and psycholinguistic research of the 1960s and 1970s. He criticized this research tradition for its individualistic orientation and its adherence to an experimental methodology that could not capture interactively established meaning and understanding in everyday spoken dialogue. As a social psychologist influenced by phenomenological philosophy, Rommetveit opted for an alternative conceptualization of such dialogue as a contextualized, partially private world, temporarily co-established by interlocutors on the basis of shared consciousness. He argued that everyday spoken dialogue should be investigated from within, i.e., from the perspectives of the interlocutors and from a psychology of the second person. Hence, he developed his approach with an emphasis on intersubjectivity, perspectivity and perspectival relativity, the meaning potential of utterances, and the epistemic responsibility of interlocutors. In his methods, he limited himself for the most part to casuistic analyses, i.e., logical analyses of fictitious examples, to argue for the plausibility of his approach. After many years of experimental research on language, he pursued his phenomenologically oriented research on dialogue in English-language publications from the late 1980s up to 2003. During that period, he engaged with psycholinguistic research on spoken dialogue carried out by Anglo-American colleagues only occasionally. Although his work remained unfinished and open to development, it provides both a challenging alternative and a supplement to current Anglo-American research on spoken dialogue, as well as some overlap therewith.
Yoder, Paul J; Woynaroski, Tiffany; Fey, Marc E; Warren, Steven F; Gardner, Elizabeth
In an earlier randomized clinical trial, daily communication and language therapy resulted in more favorable spoken vocabulary outcomes than weekly therapy sessions in a subgroup of initially nonverbal preschoolers with intellectual disabilities that included only children with Down syndrome (DS). In this reanalysis of the dataset involving only the participants with DS, we found that more therapy led to larger spoken vocabularies at posttreatment because it increased children's canonical syllabic communication and receptive vocabulary growth early in the treatment phase.
Caroline M. Whiting
Rapid and automatic processing of grammatical complexity is argued to take place during speech comprehension, engaging a left-lateralised fronto-temporal language network. Here we address how neural activity in these regions is modulated by the grammatical properties of spoken words. We used combined magneto- and electroencephalography (MEG/EEG) to delineate the spatiotemporal patterns of activity that support the recognition of morphologically complex words in English with inflectional (-s) and derivational (-er) affixes (e.g. bakes, baker). The mismatch negativity (MMN), an index of linguistic memory traces elicited in a passive listening paradigm, was used to examine the neural dynamics elicited by morphologically complex words. Results revealed an initial peak 130-180 ms after the deviation point with a major source in left superior temporal cortex. The localisation of this early activation showed a sensitivity to two grammatical properties of the stimuli: (1) the presence of morphological complexity, with affixed words showing increased left-laterality compared to non-affixed words; and (2) the grammatical category, with affixed verbs showing greater left-lateralisation in inferior frontal gyrus compared to affixed nouns (bakes vs. beaks). This automatic brain response was additionally sensitive to semantic coherence (the meaning of the stem vs. the meaning of the whole form) in fronto-temporal regions. These results demonstrate that the spatiotemporal pattern of neural activity in spoken word processing is modulated by the presence of morphological structure, predominantly engaging the left hemisphere's fronto-temporal language network, and does not require focused attention on the linguistic input.
Nakai, Satsuki; Lindsay, Shane; Ota, Mitsuhiko
When both members of a phonemic contrast in L2 (second language) are perceptually mapped to a single phoneme in one's L1 (first language), L2 words containing a member of that contrast can spuriously activate L2 words in spoken-word recognition. For example, upon hearing cattle, Dutch speakers of English are reported to experience activation…
Zhao, Jingjing; Guo, Jingjing; Zhou, Fengying; Shu, Hua
Evidence from event-related potential (ERP) analyses of English spoken words suggests that the time course of English word recognition in monosyllables is cumulative. Different types of phonological competitors (i.e., rhymes and cohorts) modulate the temporal grain of ERP components differentially (Desroches, Newman, & Joanisse, 2009). The time course of Chinese monosyllabic spoken word recognition could differ from that of English due to the differences in syllable structure between the two languages (e.g., lexical tones). The present study investigated the time course of Chinese monosyllabic spoken word recognition using ERPs to record brain responses online while subjects listened to spoken words. During the experiment, participants were asked to compare a target picture with a subsequent picture by judging whether or not the two pictures belonged to the same semantic category. The spoken word was presented between the two pictures, and participants were not required to respond during its presentation. We manipulated phonological competition by presenting spoken words that either matched or mismatched the target picture in one of the following four ways: onset mismatch, rime mismatch, tone mismatch, or syllable mismatch. In contrast to the English findings, our findings showed that the three partial mismatches (onset, rime, and tone mismatches) equally modulated the amplitudes and time courses of the N400 (a negative component that peaks about 400 ms after the spoken word), whereas the syllable-mismatched words elicited an earlier and stronger N400 than the three partial-mismatched words. The results shed light on the important role of syllable-level awareness in Chinese spoken word recognition and also imply that the recognition of Chinese monosyllabic words might rely more on global similarity of the whole syllable structure, or syllable-based holistic processing, rather than on phonemic segment-based processing. We interpret the differences in spoken word…
It is difficult to find the exact number of other languages spoken besides Dutch in the Netherlands. A study showed that a total of 96 other languages are spoken by students attending Dutch primary and secondary schools. The variety of languages spoken shows the growth of linguistic diversity in the…
Adult second language (L2) learners have often been found to produce discourse that manifests limited and non-native-like use of multiword expressions. One explanation for this is that adult L2 learners are relatively unsuccessful (in the absence of pedagogic intervention) at transferring multiword expressions from input texts to their own output resources. The present article reports an exploratory study where ESL learners were asked to re-tell a short story which they had read and listened to twice. The learners' re-tells were subsequently examined for the extent to which they recycled multiword expressions from the original story. To gauge the influence of the input text on these learners' renderings of the story, a control group was asked to tell the story based exclusively on a series of pictures. The results of the experiment suggest that multiword expressions were recycled from the input text to some extent, but this stayed very marginal in real terms, especially in comparison with the recycling of single words. Moreover, when learners did borrow expressions from the input text, their reproductions were often non-target-like.
Introducing Spoken Dialogue Systems into Intelligent Environments outlines the formalisms of a novel knowledge-driven framework for spoken dialogue management and presents the implementation of a model-based Adaptive Spoken Dialogue Manager (ASDM) called OwlSpeak. The authors have identified three stakeholders that potentially influence the behavior of the ASDM: the user, the SDS, and a complex Intelligent Environment (IE) consisting of various devices, services, and task descriptions. The theoretical foundation of a working ontology-based spoken dialogue description framework, the prototype implementation of the ASDM, and the evaluation activities that are presented as part of this book contribute to the ongoing spoken dialogue research by establishing the fertile ground of model-based adaptive spoken dialogue management. This monograph is ideal for advanced undergraduate students, PhD students, and postdocs as well as academic and industrial researchers and developers in speech and multimodal interactive…
The aim of the study was to investigate how raters come to their decisions when judging spoken vocabulary. Segmental rating was introduced to quantify raters' decision-making process. It is hoped that this simulated study brings fresh insight to future methodological considerations with spoken data. Twenty trainee raters assessed five Chinese…
This study addressed the development of and the relationship between foundational metalinguistic skills and word reading skills in Arabic. It compared Arabic-speaking children's phonological awareness (PA), morphological awareness, and voweled and unvoweled word reading skills in spoken and standard language varieties separately in children across five grade levels from childhood to adolescence. Second, it investigated whether skills developed in the spoken variety of Arabic predict reading in the standard variety. Results indicate that although individual differences between students in PA are eliminated toward the end of elementary school in both spoken and standard language varieties, gaps in morphological awareness and in reading skills persisted through the junior and high school years. The results also show that the gap in reading accuracy and fluency between Spoken Arabic (SpA) and Standard Arabic (StA) was evident in both voweled and unvoweled words. Finally, regression analyses showed that morphological awareness in SpA contributed to reading fluency in StA, i.e., children's early morphological awareness in SpA explained variance in children's gains in reading fluency in StA. These findings have important theoretical and practical contributions for Arabic reading theory in general, and they extend previous work regarding the cross-linguistic relevance of foundational metalinguistic skills in the first acquired language to reading in a second language, as in societal bilingualism contexts, or a second language variety, as in diglossic contexts.
Prior research has demonstrated that listeners are sensitive to changes in the indexical (talker-specific) characteristics of speech input, suggesting that these signal-intrinsic features are integrally encoded in memory for spoken words. Given that listeners frequently must contend with concurrent environmental noise, to what extent do they also encode signal-extrinsic details? Native English listeners' explicit memory for spoken English monosyllabic and disyllabic words was assessed as a function of consistency versus variation in the talker's voice (talker condition) and background noise (noise condition) using a delayed recognition memory paradigm. The speech and noise signals were spectrally separated, such that changes in a simultaneously presented non-speech signal (background noise) from exposure to test would not be accompanied by concomitant changes in the target speech signal. The results revealed that listeners can encode both signal-intrinsic talker and signal-extrinsic noise information into integrated cognitive representations, critically even when the two auditory streams are spectrally non-overlapping. However, the extent to which extra-linguistic episodic information is encoded alongside linguistic information appears to be modulated by syllabic characteristics, with specificity effects found only for monosyllabic items. These findings suggest that encoding and retrieval of episodic information during spoken word processing may be modulated by lexical characteristics.
Watanabe, Shigeru; Yamamoto, Erico; Uozumi, Midori
Java sparrows (Padda oryzivora) were trained to discriminate English from Chinese spoken by a bilingual speaker. They could learn discrimination and showed generalization to new sentences spoken by the same speaker and those spoken by a new speaker. Thus, the birds distinguished between English and Chinese. Although auditory cues for the discrimination were not specified, this is the first evidence that non-mammalian species can discriminate human languages.
Of the approximately 7,000 languages spoken today, some 2,500 are generally considered endangered. Here we argue that this consensus figure vastly underestimates the danger of digital language death, in that less than 5% of all languages can still ascend to the digital realm. We present evidence of a massive die-off caused by the digital divide. PMID:24167559
Arndt, Karen Barako; Schuele, C. Melanie
Complex syntax production emerges shortly after the emergence of two-word combinations in oral language and continues to develop through the school-age years. This article defines a framework for the analysis of complex syntax in the spontaneous language of preschool- and early school-age children. The purpose of this article is to provide…
Casey, Laura Baylot; Bicard, David F.
Language development in typically developing children has a very predictable pattern beginning with crying, cooing, babbling, and gestures along with the recognition of spoken words, comprehension of spoken words, and then one word utterances. This predictable pattern breaks down for children with language disorders. This article will discuss…
Nava, Andrea; Pedrazzini, Luciana
We describe an exploratory study carried out within the Department of English at the University of Milan, the aim of which was to analyse features of the spoken English of first-year Modern Languages undergraduates. We compiled a learner corpus, the "Role Play" corpus, which consisted of 69 role-play interactions in English carried out by…
Čermáková, Anna; Komrsková, Zuzana; Kopřivová, Marie; Poukarová, Petra
-, 25.04.2017 (2017), pp. 393-414. ISSN 2509-9507. R&D Projects: GA ČR GA15-01116S. Institutional support: RVO:68378092. Keywords: Causality * Discourse marker * Spoken language * Czech. Subject RIV: AI - Linguistics. OECD field: Linguistics. https://link.springer.com/content/pdf/10.1007%2Fs41701-017-0014-y.pdf
With questions and answer sections throughout, this book helps you to improve your written and spoken English through understanding the structure of the English language. This is a thorough and useful book with all parts of speech and grammar explained. Used by ELT self-study students.
Wang, Zhen; Zechner, Klaus; Sun, Yu
As automated scoring systems for spoken responses are increasingly used in language assessments, testing organizations need to analyze their performance, as compared to human raters, across several dimensions, for example, on individual items or based on subgroups of test takers. In addition, there is a need in testing organizations to establish…
Moreno, Antonio; Limousin, Fanny; Dehaene, Stanislas; Pallier, Christophe
During sentence processing, areas of the left superior temporal sulcus, inferior frontal gyrus and left basal ganglia exhibit a systematic increase in brain activity as a function of constituent size, suggesting their involvement in the computation of syntactic and semantic structures. Here, we asked whether these areas play a universal role in language and therefore contribute to the processing of non-spoken sign language. Congenitally deaf adults who acquired French sign language as a first language and written French as a second language were scanned while watching sequences of signs in which the size of syntactic constituents was manipulated. An effect of constituent size was found in the basal ganglia, including the head of the caudate and the putamen. A smaller effect was also detected in temporal and frontal regions previously shown to be sensitive to constituent size in written language in hearing French subjects (Pallier et al., 2011). When the deaf participants read sentences versus word lists, the same network of language areas was observed. While reading and sign language processing yielded identical effects of linguistic structure in the basal ganglia, the effect of structure was stronger in all cortical language areas for written language relative to sign language. Furthermore, cortical activity was partially modulated by age of acquisition and reading proficiency. Our results stress the important role of the basal ganglia, within the language network, in the representation of the constituent structure of language, regardless of the input modality. Copyright © 2017 Elsevier Inc. All rights reserved.
Pfau, R.; Steinbach, M.; Pfau, R.; Steinbach, M.; Herrmann, A.
Sign language grammars, just like spoken language grammars, generally provide various means to generate different kinds of complex syntactic structures, including subordination of complement clauses, adverbial clauses, or relative clauses. Studies on various sign languages have revealed that sign…
Sibieta, Luke; Kotecha, Mehul; Skipp, Amy
The Nuffield Early Language Intervention is designed to improve the spoken language ability of children during the transition from nursery to primary school. It is targeted at children with relatively poor spoken language skills. Three sessions per week are delivered to groups of two to four children starting in the final term of nursery and…
Kaufman, David R; Sheehan, Barbara; Stetson, Peter; Bhatt, Ashish R; Field, Adele I; Patel, Chirag; Maisel, James Mark
The process of documentation in electronic health records (EHRs) is known to be time consuming, inefficient, and cumbersome. The use of dictation coupled with manual transcription has become an increasingly common practice. In recent years, natural language processing (NLP)-enabled data capture has become a viable alternative for data entry. It enables the clinician to maintain control of the process and potentially reduce the documentation burden. The question remains how this NLP-enabled workflow will impact EHR usability and whether it can meet the structured data and other EHR requirements while enhancing the user's experience. The objective of this study is to evaluate the comparative effectiveness of an NLP-enabled data capture method using dictation and data extraction from transcribed documents (NLP Entry) in terms of documentation time, documentation quality, and usability versus standard EHR keyboard-and-mouse data entry. This formative study investigated the results of using 4 combinations of NLP Entry and Standard Entry methods ("protocols") of EHR data capture. We compared a novel dictation-based protocol using MediSapien NLP (NLP-NLP) for structured data capture against a standard structured data capture protocol (Standard-Standard) as well as 2 novel hybrid protocols (NLP-Standard and Standard-NLP). The 31 participants included neurologists, cardiologists, and nephrologists. Participants generated 4 consultation or admission notes using 4 documentation protocols. We recorded the time on task, documentation quality (using the Physician Documentation Quality Instrument, PDQI-9), and usability of the documentation processes. A total of 118 notes were documented across the 3 subject areas. The NLP-NLP protocol required a median of 5.2 minutes per cardiology note, 7.3 minutes per nephrology note, and 8.5 minutes per neurology note compared with 16.9, 20.7, and 21.2 minutes, respectively, using the Standard-Standard protocol and 13.8, 21.3, and 18.7 minutes…
... on populations and the numbers of people speaking each language. Features include: nearly 600 languages identified as to where they are spoken and the family to which they belong; over 200 languages individually described, with sample passages and English translation; fascinating insights into the history and development of individual languages; a…
Kimmelman, V.; Pfau, R.; Féry, C.; Ishihara, S.
This chapter demonstrates that the Information Structure notions Topic and Focus are relevant for sign languages, just as they are for spoken languages. Data from various sign languages reveal that, across sign languages, Information Structure is encoded by syntactic and prosodic strategies, often
This study examined the effects of preceding contextual stimuli, either auditory or visual, on the identification of spoken target words. Fifty-one participants (29% males, 71% females; mean age = 24.5 years, SD = 8.5) were divided into three groups: no context, auditory context, and visual context. All target stimuli were spoken words masked with white noise. The relationships between the context and target stimuli were as follows: identical word, similar word, and unrelated word. Participants presented with context experienced a sequence of six context stimuli in the form of either spoken words or photographs. Auditory and visual context conditions produced similar results, but the auditory context aided word identification more than the visual context in the similar word relationship. We discuss these results in the light of top-down processing, motor theory, and the phonological system of language.
Sanden, Guro Refsum
Purpose: – The purpose of this paper is to analyse the consequences of globalisation in the area of corporate communication, and investigate how language may be managed as a strategic resource. Design/methodology/approach: – A review of previous studies on the effects of globalisation on corporate communication and the implications of language management initiatives in international business. Findings: – Efficient language management can turn language into a strategic resource. Language needs analyses, i.e. linguistic auditing/language check-ups, can be used to determine the language situation...
Sherwood, Bruce Arne
Explains that reading English among scientists is almost universal; however, there are enormous problems with spoken English. Advocates the use of Esperanto as a viable alternative and as a language requirement for graduate work. (GA)
Laursen, Helle Pia
... and conceptualizations of language and literacy in research on (second) language acquisition. When examining children's first language acquisition, spoken language has been the primary concern in scholarship: a child acquires oral language first and written language follows later, i.e. language precedes literacy. ... On the other hand, many second or foreign language learners learn mostly through written language or learn spoken and written language at the same time. Thus the connections between spoken and written (and visual) modalities, i.e. between language and literacy, are complex in research on language acquisition. ... Moving conceptualizations of language and literacy in SLA: In this colloquium, we aim to problematize the concepts of language and literacy in the field that is termed "second language" research and seek ways to critically connect the terms. When considering current day language use for example...
Interferência da língua falada na escrita de crianças: processos de apagamento da oclusiva dental /d/ e da vibrante final /r/ Interference of the spoken language on children's writing: cancellation processes of the dental occlusive /d/ and final vibrant /r/
Socorro Cláudia Tavares de Sousa
Full Text Available The present study aims to investigate the influence of the spoken language on children's writing in relation to the phenomena of cancellation of the dental /d/ and the final vibrant /r/. We elaborated and applied a research instrument to children from primary school in Fortaleza. We used the software SPSS to analyze the data. The results showed that male sex and words of three or more syllables are factors that partially influence the realization of the dependent variable /no/, and that verbs and level of education are conditioning elements for the cancellation of the final vibrant /r/.
Full Text Available According to the critical period hypothesis, the earlier the acquisition of a second language starts, the better. Owing to the plasticity of the brain, up until a certain age a second language can be acquired successfully according to this view. Early second language learners are commonly said to have an advantage over later ones, especially in phonetic/phonological acquisition. Native-like pronunciation is said to be most likely to be achieved by young learners. However, there is evidence of accent-free speech in second languages learnt after puberty as well. Occasionally, on the other hand, a non-native accent may appear even in early second (or third) language acquisition. Cross-linguistic influences are natural in multilingual development, and we would expect the dominant language to have an impact on the weaker one(s). The dominant language is usually the one that provides the largest amount of input for the child. But is it always the amount that counts? Perhaps sometimes other factors, such as emotions, come into play? In this paper, data obtained from an English-Persian-Hungarian trilingual pair of siblings (aged under 4 and 3, respectively) are analyzed, with a special focus on cross-linguistic influences at the phonetic/phonological levels. It will be shown that beyond the amount of input there are more important factors that trigger interference in multilingual development.
Book review. Neurolinguistics. An Introduction to Spoken Language Processing and its Disorders, John Ingram. Cambridge University Press, Cambridge (Cambridge Textbooks in Linguistics) (2007). xxi + 420 pp., ISBN 978-0-521-79640-8 (pb)
The present textbook is one of the few recent textbooks in the area of neurolinguistics and will be welcomed by teachers of neurolinguistic courses as well as researchers interested in the topic. Neurolinguistics is a huge area, and the boundaries between psycho- and neurolinguistics are not sharp. Often the term neurolinguistics is used to refer to research involving neuropsychological patients suffering from some sort of language disorder or impairment. Also, the term neuro- rather than psy...
González-Alvarez, Julio; Palomar-García, María-Angeles
Research has shown that syllables play a relevant role in lexical access in Spanish, a shallow language with a transparent syllabic structure. Syllable frequency has been shown to have an inhibitory effect on visual word recognition in Spanish. However, no study has examined the syllable frequency effect on spoken word recognition. The present study tested the effect of the frequency of the first syllable on recognition of spoken Spanish words. A sample of 45 young adults (33 women, 12 men; M = 20.4, SD = 2.8; college students) performed an auditory lexical decision on 128 Spanish disyllabic words and 128 disyllabic nonwords. Words were selected so that lexical and first syllable frequency were manipulated in a within-subject 2 × 2 design, and six additional independent variables were controlled: token positional frequency of the second syllable, number of phonemes, position of lexical stress, number of phonological neighbors, number of phonological neighbors that have higher frequencies than the word, and acoustical durations measured in milliseconds. Decision latencies and error rates were submitted to linear mixed models analysis. Results showed a typical facilitatory effect of the lexical frequency and, importantly, an inhibitory effect of the first syllable frequency on reaction times and error rates. © The Author(s) 2016.
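The 2 × 2 within-subject manipulation described above (lexical frequency × first-syllable frequency) can be sketched with hypothetical reaction-time data. The cell labels and latencies below are illustrative placeholders, not the study's data; they are constructed to show a facilitatory word-frequency effect alongside an inhibitory syllable-frequency effect:

```python
from statistics import mean

# Hypothetical decision latencies (ms) for a 2 x 2 within-subject design:
# word frequency (high/low) x first-syllable frequency (high/low).
# Facilitatory word-frequency effect: high-frequency words are faster.
# Inhibitory syllable-frequency effect: frequent first syllables are slower.
rts = {
    ("high_word", "high_syll"): [612, 598, 605],
    ("high_word", "low_syll"):  [580, 575, 590],
    ("low_word",  "high_syll"): [702, 715, 698],
    ("low_word",  "low_syll"):  [660, 672, 655],
}

def cell_mean(word: str, syll: str) -> float:
    return mean(rts[(word, syll)])

def main_effect(factor: str) -> float:
    """Low-frequency marginal mean minus high-frequency marginal mean (ms)."""
    if factor == "word":
        high = mean(rts[("high_word", "high_syll")] + rts[("high_word", "low_syll")])
        low = mean(rts[("low_word", "high_syll")] + rts[("low_word", "low_syll")])
    else:
        high = mean(rts[("high_word", "high_syll")] + rts[("low_word", "high_syll")])
        low = mean(rts[("high_word", "low_syll")] + rts[("low_word", "low_syll")])
    return low - high

print(main_effect("word"))  # positive: high-frequency words recognized faster
print(main_effect("syll"))  # negative: high-frequency first syllables slower
```

The study itself fitted linear mixed models over trial-level latencies and error rates; the marginal-means comparison here only illustrates the direction of the two effects.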
Schuit, J.; Baker, A.; Pfau, R.
Sign language typology is a fairly new research field and typological classifications have yet to be established. For spoken languages, these classifications are generally based on typological parameters; it would thus be desirable to establish these for sign languages. In this paper, different
This article explores the implications of Hegel's theories of language on second language (L2) teaching. Three among the various concepts in Hegel's theories of language are selected. They are the crucial role of intersubjectivity; the primacy of the spoken over the written form; and the importance of the training of form or grammar. Applying…
Cenoz, Jasone; Gorter, Durk
This paper focuses on the linguistic landscape of two streets in two multilingual cities in Friesland (Netherlands) and the Basque Country (Spain) where a minority language is spoken, Basque or Frisian. The paper analyses the use of the minority language (Basque or Frisian), the state language (Spanish or Dutch) and English as an international…
Full Text Available This study investigates international students’ perceptions of the issues they face using English as a second language while attending American higher education institutions. In order to fully understand those challenges involved in learning English as a Second Language, it is necessary to know the extent to which international students have mastered the English language before they start their study in America. Most international students experience an overload of English language input upon arrival in the United States. Cultural differences influence international students’ learning of English in other ways, including international students’ isolation within their communities and America’s lack of teaching listening skills to its own students. Other factors also affect international students’ learning of English, such as the many forms of informal English spoken in the USA, as well as a variety of dialects. Moreover, since most international students have learned English in an environment that precluded much contact with spoken English, they often speak English with an accent that reveals their own language. This study offers informed insight into the complicated process of simultaneously learning the language and culture of another country. Readers will find three main voices in addition to the international students who “speak” (in quotation marks) throughout this article. Hong Li, a Chinese doctoral student in English Education at the University of Missouri-Columbia, authored the “regular” text. Second, Roy F. Fox’s voice appears in italics. Fox is Professor of English Education and Chair of the Department of Learning, Teaching, and Curriculum at the University of Missouri-Columbia. Third, Dario J. Almarza’s voice appears in boldface. Almarza, a native of Venezuela, is an Assistant Professor of Social Studies Education at the same institution.
Full Text Available Recent studies suggest that deaf children perform more poorly on working memory tasks compared to hearing children, but do not say whether this poorer performance arises directly from deafness itself or from deaf children’s reduced language exposure. The issue remains unresolved because findings come from (1) tasks that are verbal as opposed to non-verbal, and (2) deaf children who use spoken communication and therefore may have experienced impoverished input and delayed language acquisition. This is in contrast to deaf children who have been exposed to a sign language since birth from Deaf parents (and who therefore have native language-learning opportunities). A more direct test of how the type and quality of language exposure impacts working memory is to use measures of non-verbal working memory (NVWM) and to compare hearing children with two groups of deaf signing children: those who have had native exposure to a sign language, and those who have experienced delayed acquisition compared to their native-signing peers. In this study we investigated the relationship between NVWM and language in three groups aged 6-11 years: hearing children (n=27), deaf native users of British Sign Language (BSL; n=7), and deaf non-native signers (n=19). We administered a battery of non-verbal reasoning, NVWM, and language tasks. We examined whether the groups differed on NVWM scores, and if language tasks predicted scores on NVWM tasks. For the two NVWM tasks, the non-native signers performed less accurately than the native-signer and hearing groups (who did not differ from one another). Multiple regression analysis revealed that the vocabulary measure predicted scores on NVWM tasks. Our results suggest that whatever the language modality – spoken or signed – rich language experience from birth, and the good language skills that result from this early age of acquisition, play a critical role in the development of NVWM and in performance on NVWM
Jongbloed-Faber, L.; Van de Velde, H.; van der Meer, C.; Klinkenberg, E.L.
This paper explores the use of Frisian, a minority language spoken in the Dutch province of Fryslân, on social media by Frisian teenagers. Frisian is the mother tongue of 54% of the 650,000 inhabitants and is predominantly a spoken language: 64% of the Frisian population can speak it well, while
Callejas, Zoraida; Griol, David; López-Cózar, Ramón
In this paper we propose a method for predicting the user mental state for the development of more efficient and usable spoken dialogue systems. This prediction, carried out for each user turn in the dialogue, makes it possible to adapt the system dynamically to the user needs. The mental state is built on the basis of the emotional state of the user and their intention, and is recognized by means of a module conceived as an intermediate phase between natural language understanding and the dialogue management in the architecture of the systems. We have implemented the method in the UAH system, for which the evaluation results with both simulated and real users show that taking into account the user's mental state improves system performance as well as its perceived quality.
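A minimal sketch of the intermediate module the abstract describes: fusing a recognized emotional state and an inferred intention into a "mental state" that the dialogue manager can adapt to. The labels, rules, and strategies below are hypothetical placeholders, not the UAH system's actual design:

```python
from dataclasses import dataclass

@dataclass
class MentalState:
    # Hypothetical categories; a real system would recognize these from
    # acoustic features and the natural language understanding output.
    emotion: str      # e.g. "neutral", "doubtful", "angry"
    intention: str    # e.g. "request_info", "repeat"

def adapt_strategy(state: MentalState) -> str:
    """Pick a dialogue-management strategy for the next system turn
    based on the user's mental state (illustrative rules only)."""
    if state.emotion == "angry":
        # Frustrated user: apologize and use explicit confirmations.
        return "apologize_and_confirm"
    if state.intention == "repeat":
        # The user did not understand: rephrase rather than repeat verbatim.
        return "rephrase_prompt"
    return "default_prompt"

# One simulated user turn per tuple: (emotion, intention) -> strategy.
turns = [("neutral", "request_info"), ("doubtful", "repeat"), ("angry", "repeat")]
strategies = [adapt_strategy(MentalState(e, i)) for e, i in turns]
print(strategies)
```

Placing this decision between understanding and dialogue management, as the abstract describes, lets the same dialogue flow behave differently per turn without changing the understanding component.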
Glenn-Applegate, Katherine; Breit-Smith, Allison; Justice, Laura M.; Piasta, Shayne B.
Research Findings: Artfulness is rarely considered as an indicator of quality in young children's spoken narratives. Although some studies have examined artfulness in the narratives of children 5 and older, no studies to date have focused on the artfulness of preschoolers' oral narratives. This study examined the artfulness of fictional spoken…
Iyiola Amos Damilare
Full Text Available Substitution is a phonological process in language. Existing studies have examined deletion in several languages and dialects, with less attention paid to the spoken French of Ijebu undergraduates. This article therefore examined substitution as a dominant phenomenon in the spoken French of thirty-four Ijebu undergraduate French learners (IUFLs) in selected universities in South-West Nigeria, with a view to establishing the dominance of substitution in the spoken French of IUFLs. The data collection was through tape-recording of participants’ production of 30 sentences containing both French vowel and consonant sounds. The results revealed inappropriate replacement of vowels and consonants in the medial and final positions in the spoken French of IUFLs.
Mst. Moriam, Quadir
This study discusses motivation and strategy use of university students to learn spoken English in Bangladesh. A group of 355 (187 males and 168 females) university students participated in this investigation. To measure learners' degree of motivation a modified version of questionnaire used by Schmidt et al. (1996) was administered. Participants reported their strategy use on a modified version of SILL, the Strategy Inventory for Language Learning, version 7.0 (Oxford, 1990). In order to fin...
Percy-Smith, Lone; Cayé-Thomasen, Per; Breinegaard, Nina
The present study demonstrates a very strong effect of the parental communication mode on the auditory capabilities and speech/language outcome for cochlear implanted children. The children exposed to spoken language had higher odds of scoring high in all tests applied, and the findings suggest a very clear benefit of spoken language communication with a cochlear implanted child.
Metsala, Jamie L.; Stavrinos, Despina; Walley, Amanda C.
This study examined effects of lexical factors on children's spoken word recognition across a 1-year time span, and contributions to phonological awareness and nonword repetition. Across the year, children identified words based on less input on a speech-gating task. For word repetition, older children improved for the most familiar words. There…
van Loon, E.; Pfau, R.; Steinbach, M.; Müller, C.; Cienki, A.; Fricke, E.; Ladewig, S.H.; McNeill, D.; Bressem, J.
Recent studies on grammaticalization in sign languages have shown that, for the most part, the grammaticalization paths identified in sign languages parallel those previously described for spoken languages. Hence, the general principles of grammaticalization do not depend on the modality of language
Zhang, Qingfang; Wang, Cheng
The effects of word frequency (WF) and syllable frequency (SF) are well-established phenomena in domains such as spoken production in alphabetic languages. Chinese, as a non-alphabetic language, presents unique lexical and phonological properties in speech production. For example, the proximate unit of phonological encoding is the syllable in Chinese but segments in Dutch, French or English. The present study investigated the effects of WF and SF, and their interaction, in Chinese written and spoken production. Significant facilitatory WF and SF effects were observed in spoken as well as in written production. The SF effect in writing indicated that phonological properties (i.e., syllabic frequency) constrain orthographic output via a lexical route, at least in Chinese written production. However, the SF effect over repetitions was divergent in the two modalities: it was significant in the first two repetitions in spoken production, whereas it was significant only in the second repetition in written production. Given the fragility of the SF effect in writing, we suggest that the phonological influence in handwritten production is not mandatory and universal, and that it is modulated by experimental manipulations. This provides evidence for the orthographic autonomy hypothesis, rather than the phonological mediation hypothesis. The absence of an interaction between WF and SF showed that the SF effect is independent of the WF effect in spoken and written output modalities. The implications of these results for written production models are discussed.
Wiseheart, Rebecca; Altmann, Lori J P
Individuals with dyslexia demonstrate syntactic difficulties on tasks of language comprehension, yet little is known about spoken language production in this population. The aims were to investigate whether spoken sentence production in college students with dyslexia is less proficient than in typical readers, and to determine whether group differences can be attributed to cognitive differences between groups. Fifty-one college students with and without dyslexia were asked to produce sentences from stimuli comprising a verb and two nouns. Verb types varied in argument structure and morphological form and nouns varied in animacy. Outcome measures were precision (measured by fluency, grammaticality and completeness) and efficiency (measured by response times). Vocabulary and working memory tests were also administered and used as predictors of sentence production performance. Relative to non-dyslexic peers, students with dyslexia responded significantly slower and produced sentences that were significantly less precise in terms of fluency, grammaticality and completeness. The primary predictors of precision and efficiency were working memory, which differed between groups, and vocabulary, which did not. College students with dyslexia were significantly less facile and flexible on this spoken sentence-production task than typical readers, which is consistent with previous studies of school-age children with dyslexia. Group differences in performance were traced primarily to limited working memory, and were somewhat mitigated by strong vocabulary. © 2017 Royal College of Speech and Language Therapists.
Jones, -A C; Toscano, E; Botting, N; Marshall, C-R; Atkinson, J R; Denmark, T; Herman, -R; Morgan, G
Previous research has highlighted that deaf children acquiring spoken English have difficulties in narrative development relative to their hearing peers both in terms of macro-structure and with micro-structural devices. The majority of previous research focused on narrative tasks designed for hearing children that depend on good receptive language skills. The current study compared narratives of 6- to 11-year-old deaf children who use spoken English (N=59) with hearing peers matched for age and non-verbal intelligence. To examine the role of general language abilities, single word vocabulary was also assessed. Narratives were elicited by the retelling of a story presented non-verbally in video format. Results showed that deaf and hearing children had equivalent macro-structure skills, but the deaf group showed poorer performance on micro-structural components. Furthermore, the deaf group gave less detailed responses to inferencing probe questions, indicating poorer understanding of the story's underlying message. For deaf children, micro-level devices most strongly correlated with the vocabulary measure. These findings suggest that deaf children, despite spoken language delays, are able to convey the main elements of content and structure in narrative but have greater difficulty in using grammatical devices more dependent on finer linguistic and pragmatic skills. Crown Copyright © 2016. Published by Elsevier Ltd. All rights reserved.
Kharlamov, Viktor; Campbell, Kenneth; Kazanina, Nina
Speech sounds are not always perceived in accordance with their acoustic-phonetic content. For example, an early and automatic process of perceptual repair, which ensures conformity of speech inputs to the listener's native language phonology, applies to individual input segments that do not exist in the native inventory or to sound sequences that are illicit according to the native phonotactic restrictions on sound co-occurrences. The present study with Russian and Canadian English speakers shows that listeners may perceive phonetically distinct and licit sound sequences as equivalent when the native language system provides robust evidence for mapping multiple phonetic forms onto a single phonological representation. In Russian, due to an optional but productive t-deletion process that affects /stn/ clusters, the surface forms [sn] and [stn] may be phonologically equivalent and map to a single phonological form /stn/. In contrast, [sn] and [stn] clusters are usually phonologically distinct in (Canadian) English. Behavioral data from identification and discrimination tasks indicated that [sn] and [stn] clusters were more confusable for Russian than for English speakers. The EEG experiment employed an oddball paradigm with nonwords [asna] and [astna] used as the standard and deviant stimuli. A reliable mismatch negativity response was elicited approximately 100 msec postchange in the English group but not in the Russian group. These findings point to a perceptual repair mechanism that is engaged automatically at a prelexical level to ensure immediate encoding of speech inputs in phonological terms, which in turn enables efficient access to the meaning of a spoken utterance.
Dehaene, Stanislas; Pegado, Felipe; Braga, Lucia W; Ventura, Paulo; Nunes Filho, Gilberto; Jobert, Antoinette; Dehaene-Lambertz, Ghislaine; Kolinsky, Régine; Morais, José; Cohen, Laurent
Does literacy improve brain function? Does it also entail losses? Using functional magnetic resonance imaging, we measured brain responses to spoken and written language, visual faces, houses, tools, and checkers in adults of variable literacy (10 were illiterate, 22 became literate as adults, and 31 were literate in childhood). As literacy enhanced the left fusiform activation evoked by writing, it induced a small competition with faces at this location, but also broadly enhanced visual responses in fusiform and occipital cortex, extending to area V1. Literacy also enhanced phonological activation to speech in the planum temporale and afforded a top-down activation of orthography from spoken inputs. Most changes occurred even when literacy was acquired in adulthood, emphasizing that both childhood and adult education can profoundly refine cortical organization.
Rodriguez, Astrid Sussette
Developing literacy and language proficiency in English is essential to thrive in school and in the workforce in American society. Research on cross-linguistic influences on text-level skills is scant, especially studies investigating reading comprehension among language-minority adults. The present study investigated the effects of…
Rosset, Sophie; Garnier-Rizet, Martine; Devillers, Laurence
Natural Interaction with Robots, Knowbots and Smartphones: Putting Spoken Dialog Systems into Practice
These proceedings present the state-of-the-art in spoken dialog systems with applications in robotics, knowledge access and communication. They address specifically: 1. Dialog for interacting with smartphones; 2. Dialog for Open Domain knowledge access; 3. Dialog for robot interaction; 4. Mediated dialog (including crosslingual dialog involving Speech Translation); and 5. Dialog quality evaluation. These articles were presented at the IWSDS 2012 workshop.
Kyle Tran Myhre
Full Text Available In "Dust," spoken word poet Kyle "Guante" Tran Myhre crafts a multi-vocal exploration of the connections between the internment of Japanese Americans during World War II and the current struggles against xenophobia in general and Islamophobia specifically. Weaving together personal narrative, quotes from multiple voices, and "verse journalism" (a term coined by Gwendolyn Brooks), the poem seeks to bridge past and present in order to inform a more just future.
Marshall, Chloë; Mason, Kathryn; Rowley, Katherine; Herman, Rosalind; Atkinson, Joanna; Woll, Bencie; Morgan, Gary
Children with specific language impairment (SLI) perform poorly on sentence repetition tasks across different spoken languages, but until now, this methodology has not been investigated in children who have SLI in a signed language. Users of a natural sign language encode different sentence meanings through their choice of signs and by altering…
This article reports on a survey with 170 school-age children growing up with two or more languages in the Canadian province of Ontario where English is the majority language, French is a minority language, and numerous other minority languages may be spoken by immigrant or Indigenous residents. Within this context the study focuses on minority…
Alimi, Modupe M.
Many African countries exhibit complex patterns of language use because of linguistic pluralism. The situation is often compounded by the presence of at least one foreign language that is either the official or second language. The language situation in Botswana depicts this complex pattern. Out of the 26 languages spoken in the country, including…
Corina, David P; Lawyer, Laurel A; Cates, Deborah
Studies of deaf individuals who are users of signed languages have provided profound insight into the neural representation of human language. Case studies of deaf signers who have incurred left- and right-hemisphere damage have shown that left-hemisphere resources are a necessary component of sign language processing. These data suggest that, despite frank differences in the input and output modality of language, core left perisylvian regions universally serve linguistic function. Neuroimaging studies of deaf signers have generally provided support for this claim. However, more fine-tuned studies of linguistic processing in deaf signers are beginning to show evidence of important differences in the representation of signed and spoken languages. In this paper, we provide a critical review of this literature and present compelling evidence for language-specific cortical representations in deaf signers. These data lend support to the claim that the neural representation of language may show substantive cross-linguistic differences. We discuss the theoretical implications of these findings with respect to an emerging understanding of the neurobiology of language.
Williams, Colin H.
The Welsh language, which is indigenous to Wales, is one of six Celtic languages. It is spoken by 562,000 speakers, 19% of the population of Wales, according to the 2011 U.K. Census, and it is estimated that it is spoken by a further 200,000 residents elsewhere in the United Kingdom. No exact figures exist for the undoubted thousands of other…
Van Engen, Kristin J; Chandrasekaran, Bharath; Smiljanic, Rajka
Extensive research shows that inter-talker variability (i.e., changing the talker) affects recognition memory for speech signals. However, relatively little is known about the consequences of intra-talker variability (i.e. changes in speaking style within a talker) on the encoding of speech signals in memory. It is well established that speakers can modulate the characteristics of their own speech and produce a listener-oriented, intelligibility-enhancing speaking style in response to communication demands (e.g., when speaking to listeners with hearing impairment or non-native speakers of the language). Here we conducted two experiments to examine the role of speaking style variation in spoken language processing. First, we examined the extent to which clear speech provided benefits in challenging listening environments (i.e. speech-in-noise). Second, we compared recognition memory for sentences produced in conversational and clear speaking styles. In both experiments, semantically normal and anomalous sentences were included to investigate the role of higher-level linguistic information in the processing of speaking style variability. The results show that acoustic-phonetic modifications implemented in listener-oriented speech lead to improved speech recognition in challenging listening conditions and, crucially, to a substantial enhancement in recognition memory for sentences.
Spoken dialog systems have the potential to offer highly intuitive user interfaces, as they allow systems to be controlled using natural language. However, the complexity inherent in natural language dialogs means that careful testing of the system must be carried out from the very beginning of the design process. This book examines how user models can be used to support such early evaluations in two ways: by running simulations of dialogs, and by estimating the quality judgments of users. First, a design environment supporting the creation of dialog flows, the simulation of dialogs, and the analysis of the simulated data is proposed. How the quality of user simulations may be quantified with respect to their suitability for both formative and summative evaluation is then discussed. The remainder of the book is dedicated to the problem of predicting quality judgments of users based on interaction data. New modeling approaches are presented, which process the dialogs as sequences, and which allow knowl...
The influence of formal literacy on spoken language-mediated visual orienting was investigated using a simple look-and-listen task (cf. Huettig & Altmann, 2005) which resembles everyday behavior. In Experiment 1, high and low literates listened to spoken sentences containing a target word (e.g., 'magar', crocodile) while at the same time looking at a visual display of four objects (a phonological competitor of the target word, e.g., 'matar', peas; a semantic competitor, e.g., 'kachuwa', turtle; and two unrelated distractors). In Experiment 2 the semantic competitor was replaced with another unrelated distractor. Both groups of participants shifted their eye gaze to the semantic competitors (Experiment 1). In both experiments high literates shifted their eye gaze towards phonological competitors as soon as phonological information became available and moved their eyes away as soon as the acoustic information mismatched. Low literates, in contrast, only used phonological information when semantic matches between spoken word and visual referent were impossible (Experiment 2), but in contrast to high literates these phonologically-mediated shifts in eye gaze were not closely time-locked to the speech input. We conclude that in high literates language-mediated shifts in overt attention are co-determined by the type of information in the visual environment, the timing of cascaded processing in the word- and object-recognition systems, and the temporal unfolding of the spoken language. Our findings indicate that low literates exhibit similar cognitive behavior but, instead of participating in a tug-of-war among multiple types of cognitive representations, word-object mapping is achieved primarily at the semantic level. If forced, for instance by a situation in which semantic matches are not present (Experiment 2), low literates may on occasion have to rely on phonological information, but do so in a much less proficient manner than their highly literate counterparts.
Stoop, Ruedi; Nüesch, Patrick; Stoop, Ralph Lukas; Bunimovich, Leonid A
Using symbolic dynamics and a surrogate data approach, we show that the language exhibited by common fruit flies Drosophila ('D.') during courtship is as grammatically complex as the most complex human-spoken modern languages. This finding emerges from the study of fifty high-speed courtship videos (generally of several minutes' duration) that were visually dissected frame by frame into 37 fundamental behavioral elements. From the symbolic dynamics of these elements, the courtship-generating language was determined with extreme confidence (significance level > 0.95). The language's categorization in terms of its position in Chomsky's hierarchical language classification allows us to compare Drosophila's body language not only with computer compiler languages, but also with human-spoken languages. Drosophila's body language emerges to be at least as powerful as the languages spoken by humans.
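The surrogate-data logic described above can be illustrated in a few lines: shuffle the symbol sequence to destroy its sequential (grammatical) structure while preserving element frequencies, then test whether an order-sensitive statistic of the real sequence is extreme relative to the shuffled ensemble. This is a minimal sketch of the general technique under invented data, not the authors' actual statistic or their Chomsky-hierarchy classification; the behavioral elements and the entropy measure are illustrative.

```python
import math
import random
from collections import Counter

def bigram_entropy(seq):
    """Shannon entropy (bits) of the bigram distribution: lower = more ordered."""
    counts = Counter(zip(seq, seq[1:]))
    total = sum(counts.values())
    return -sum(c / total * math.log2(c / total) for c in counts.values())

def surrogate_test(seq, n_surrogates=999, seed=0):
    """One-sided surrogate-data test: shuffling preserves element frequencies
    but destroys sequential order, so a small p-value means the observed
    order is more structured than frequency-matched chance."""
    rng = random.Random(seed)
    observed = bigram_entropy(seq)
    hits = sum(
        bigram_entropy(rng.sample(seq, len(seq))) <= observed
        for _ in range(n_surrogates)
    )
    return (hits + 1) / (n_surrogates + 1)

# Toy "courtship" stream over four behavioral elements with strict ordering
p = surrogate_test(list("ABCD" * 50))
# p is small: far more sequential structure than the shuffled surrogates
```

The same scheme extends to any order-sensitive statistic (e.g., grammar-complexity measures) in place of bigram entropy.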
A brief review of Indian education focuses on special problems caused by overcrowded schools, insufficient funding, and the status of education itself in the Indian social structure. Language instruction in India, a complex issue due largely to the numerous official languages currently spoken, is commented on with special reference to the problem…
Workshop at Arden House, February 23-26, 1992. Francis Kubala et al., "BBN BYBLOS and HARC February 1992 ATIS Benchmark Results", 5th DARPA Speech... presented at ICASSP, 1992. Richard Schwartz, Steve Austin, Francis Kubala, John Makhoul, Long Nguyen, Paul Placeway, George Zavaliagkos, Northeastern... of the DARPA Common Lexicon Working Group at the 5th DARPA Speech & NL Workshop at Arden House, February 23-26, 1992. Francis Kubala is chairing the
The development of Automatic Speech Recognition (ASR) systems in the developing world is severely inhibited. Given that few task-specific corpora exist and speech technology systems perform poorly when deployed in a new environment, we investigate the use of acoustic model adaptation...
stutters, false starts, repairs, hesitations, filled pauses, and various other non-lexical acoustic events. Under these circumstances, it is not... sensible choice from a software engineering perspective. The case for separating out various task-independent aspects of the conversation has in fact been... in behavior both within and across systems. It also represents a more sensible solution from a software engineering... The RavenClaw error handling
Gloria Avendaño de Barón
This article presents the results of a research project whose aims were the following: to determine the frequency of use of the polite pronominal forms sumercé, usted and tú, according to differences in gender, age and level of education, among speakers in Tunja; to describe the sociodiscursive variations; and to explain the relationship between usage and courtesy. The methodology of the Project for the Sociolinguistic Study of Spanish in Spain and Latin America (PRESEEA) was used, and a sample of 54 speakers was taken. The results indicate that the most frequently used pronoun in Tunja to express friendliness and affection is sumercé, followed by usted and tú; women and men of different generations and levels of education alternate the use of these three forms in the context of narrative, descriptive, argumentative and explanatory speech.
André, Elisabeth; Rehm, Matthias; Minker, Wolfgang
While most dialogue systems restrict themselves to the adjustment of the propositional content, our work concentrates on the generation of stylistic variations in order to improve the user's perception of the interaction. To accomplish this goal, our approach integrates a social theory of politeness with a cognitive theory of emotions. We propose a hierarchical selection process for politeness behaviors in order to enable the refinement of decisions in case additional context information becomes available.
The only book on the market to specifically address its audience, Recording Voiceover is the comprehensive guide for engineers looking to understand the aspects of capturing the spoken word. Discussing all phases of the recording session, Recording Voiceover addresses everything from microphone recommendations for voice recording to pre-production considerations, including setting up the studio, working with and directing the voice talent, and strategies for reducing or eliminating distracting noise elements found in human speech. Recording Voiceover features in-depth, specific recommendations…
Brøndsted, Tom; Larsen, Henrik Legind; Larsen, Lars Bo
This paper addresses the problem of information and service accessibility in mobile devices with limited resources. A solution is developed and tested through a prototype that applies state-of-the-art Distributed Speech Recognition (DSR) and knowledge-based Information Retrieval (IR) processing for spoken query answering. For the DSR part, a configurable DSR system is implemented on the basis of the ETSI-DSR advanced front-end and the SPHINX IV recognizer. For the knowledge-based IR part, a distributed system solution is developed for fast retrieval of the most relevant documents, with a text…
Rankin, Tom; Unsworth, Sharon
A generative approach to language acquisition is no different from any other in assuming that target language input is crucial for language acquisition. This discussion note addresses the place of input in generative second language acquisition (SLA) research and the perception in the wider field of SLA research that generative SLA…
Campfield, Dorota E.; Murphy, Victoria A.
This paper reports on an intervention study with young Polish beginners (mean age: 8 years, 3 months) learning English at school. It seeks to identify whether exposure to rhythmic input improves knowledge of word order and function words. The "prosodic bootstrapping hypothesis", relevant in developmental psycholinguistics, provided the…
Han's (2009, 2013) selective fossilization hypothesis (SFH) claims that L1 markedness and L2 input robustness determine the fossilizability (and learnability) of an L2 feature. To test the validity of the model, a pseudo-longitudinal study was designed in which the errors in the argumentative essays of 52 Iranian EFL learners were identified and…
Ramos, Teresita V.; de Guzman, Videa
This language textbook is designed for beginning students of Tagalog, the principal language spoken on the island of Luzon in the Philippines. The introduction discusses the history of Tagalog and certain features of the language. An explanation of the text is given, along with notes for the teacher. The text itself is divided into nine sections:…
Shahan Nercessian, Pedro Torres-Carrasquillo, and Gabriel Martínez-Montes, "Approaches for Language Identification in Mismatched Environments." Keywords: domain adaptation, unsupervised learning, deep neural networks, bottleneck features. Spoken language identification (LID) is... We consider the task of language identification in the context of mismatch conditions. Specifically, we address the issue of using unlabeled data in the
There is a system of English mouthing during interpretation that appears to be the result of language contact between spoken language and signed language. English mouthing is a voiceless visual representation of words on a signer's lips produced concurrently with manual signs. It is a type of borrowing prevalent among English-dominant…
Van Houten, Lori J.
A longitudinal study of the language development of children of adolescent mothers followed 20 adolescent and 20 older mothers from their children's birth through three years of age. This report is based on data collected from a subsample of 20 mothers. Mother-child interactions in feeding, teaching, and play at eight months and two years were…
Sagarra, Nuria; Abbuhl, Rebekha
This study investigates whether practice with computer-administered feedback in the absence of meaning-focused interaction can help second language learners notice the corrective intent of recasts and develop linguistic accuracy. A group of 218 beginning Anglophone learners of Spanish received 1 of 4 types of automated feedback (no feedback,…
Havas, Viktória; Taylor, Jsh; Vaquero, Lucía; de Diego-Balaguer, Ruth; Rodríguez-Fornells, Antoni; Davis, Matthew H
We studied the initial acquisition and overnight consolidation of new spoken words that resemble words in the native language (L1) or in an unfamiliar, non-native language (L2). Spanish-speaking participants learned the spoken forms of novel words in their native language (Spanish) or in a different language (Hungarian), which were paired with pictures of familiar or unfamiliar objects, or no picture. We thereby assessed, in a factorial way, the impact of existing knowledge (schema) on word learning by manipulating both semantic (familiar vs unfamiliar objects) and phonological (L1- vs L2-like novel words) familiarity. Participants were trained and tested with a 12-hr intervening period that included overnight sleep or daytime awake. Our results showed (1) benefits of sleep to recognition memory that were greater for words with L2-like phonology and (2) that learned associations with familiar but not unfamiliar pictures enhanced recognition memory for novel words. Implications for complementary systems accounts of word learning are discussed.
The goal of this project in Estonia was to determine what languages are spoken at home by students from the 2nd to the 5th year of basic school in Tallinn, the capital of Estonia. The same problem was also studied in other segregated regions of Estonia: Kohtla-Järve and Maardu. According to the database of the population census from the year 2000 (Estonian Statistics Executive Office's census 2000), there are representatives of 142 ethnic groups living in Estonia, speaking a total of 109 native languages. At the same time, the database does not state which languages are spoken at home. The material presented in this article belongs to the research topic "Home Language of Basic School Students in Tallinn" from the years 2007–2008, specifically financed and ordered by the Estonian Ministry of Education and Research (grant No. ETF 7065) in the framework of an international study called "Multilingual Project". It was determined which language dominates in everyday use, what the factors are for choosing the language of communication, and what the preferred languages and language skills are. This study reflects the actual trends of the language situation in these cities.
Marshall, Chloë; Jones, Anna; Denmark, Tanya; Mason, Kathryn; Atkinson, Joanna; Botting, Nicola; Morgan, Gary
Several recent studies have suggested that deaf children perform more poorly on working memory tasks compared to hearing children, but these studies have not been able to determine whether this poorer performance arises directly from deafness itself or from deaf children's reduced language exposure. The issue remains unresolved because findings come mostly from (1) tasks that are verbal as opposed to non-verbal, and (2) involve deaf children who use spoken communication and therefore may have experienced impoverished input and delayed language acquisition. This is in contrast to deaf children who have been exposed to a sign language since birth from Deaf parents (and who therefore have native language-learning opportunities within a normal developmental timeframe for language acquisition). A more direct, and therefore stronger, test of the hypothesis that the type and quality of language exposure impact working memory is to use measures of non-verbal working memory (NVWM) and to compare hearing children with two groups of deaf signing children: those who have had native exposure to a sign language, and those who have experienced delayed acquisition and reduced quality of language input compared to their native-signing peers. In this study we investigated the relationship between NVWM and language in three groups aged 6–11 years: hearing children (n = 28), deaf children who were native users of British Sign Language (BSL; n = 8), and deaf children who used BSL but who were not native signers (n = 19). We administered a battery of non-verbal reasoning, NVWM, and language tasks. We examined whether the groups differed on NVWM scores, and whether scores on language tasks predicted scores on NVWM tasks. For the two executive-loaded NVWM tasks included in our battery, the non-native signers performed less accurately than the native signer and hearing groups (who did not differ from one another). Multiple regression analysis revealed that scores on the vocabulary
Lobel, Jason William; Paputungan, Ade Tatak
This paper consists of a short multimedia introduction to Lolak, a near-extinct Greater Central Philippine language traditionally spoken in three small communities on the island of Sulawesi in Indonesia. In addition to being one of the most underdocumented languages in the area, it is also spoken by one of the smallest native speaker populations…
This paper looks at the degree and way in which lesser-used languages are used as expressions of identity, focusing specifically on two of Europe's lesser-used languages. The first is Irish, spoken in the Republic of Ireland and the second is Galician, spoken in the Autonomous Community of Galicia in the North-western part of Spain. The paper…
Dekker, Diane; Young, Catherine
There are more than 6000 languages spoken by the 6 billion people in the world today--however, those languages are not evenly divided among the world's population--over 90% of people globally speak only about 300 majority languages--the remaining 5700 languages being termed "minority languages". These languages represent the…
Roy-Campbell, Zaline M.
English is spoken in five countries as the native language and in numerous other countries as an official language and the language of instruction. In countries where English is the native language, it is taught to speakers of other languages as an additional language to enable them to participate in all domains of life of that country. In many…
This work contains the first comprehensive description of Abui, a language of the Trans New Guinea family spoken by approximately 16,000 speakers in the central part of Alor Island in Eastern Indonesia. The description focuses on the northern dialect of Abui as spoken in the village
Williams, Joshua T; Darcy, Isabelle; Newman, Sharlene D
The present study tracked activation pattern differences in response to sign language processing by late hearing second language learners of American Sign Language. Learners were scanned before the start of their language courses. They were scanned again after their first semester of instruction and their second, for a total of 10 months of instruction. The study aimed to characterize modality-specific to modality-general processing throughout the acquisition of sign language. Results indicated that before the acquisition of sign language, neural substrates related to modality-specific processing were present. After approximately 45 h of instruction, the learners transitioned into processing signs on a phonological basis (e.g., supramarginal gyrus, putamen). After one more semester of input, learners transitioned once more to a lexico-semantic processing stage (e.g., left inferior frontal gyrus) at which language control mechanisms (e.g., left caudate, cingulate gyrus) were activated. During these transitional steps right hemispheric recruitment was observed, with increasing left-lateralization, which is similar to other native signers and L2 learners of spoken language; however, specialization for sign language processing with activation in the inferior parietal lobule (i.e., angular gyrus), even for late learners, was observed. As such, the present study is the first to track L2 acquisition of sign language learners in order to characterize modality-independent and modality-specific mechanisms for bilingual language processing. Copyright © 2015 Elsevier Ltd. All rights reserved.
This article reviews chronometric and neuroimaging evidence on attention to spoken word planning, using the WEAVER++ model as theoretical framework. First, chronometric studies on the time to initiate vocal responding and gaze shifting suggest that spoken word planning may require some attention,
Carter, Ronald; McCarthy, Michael
This article synthesises progress made in the description of spoken (especially conversational) grammar over the 20 years since the authors published a paper in this journal arguing for a re-thinking of grammatical description and pedagogy based on spoken corpus evidence. We begin with a glance back at the 16th century and the teaching of Latin…
Inegbeboh, Bridget O.
Female students have been discriminated against right from birth in their various cultures and this affects the way they perform in Spoken English class, and how they rate themselves. They have been conditioned to believe that the male gender is superior to the female gender, so they leave the male students to excel in spoken English, while they…
Video is becoming a prevalent medium for e-learning. Lecture videos contain text information in both the presentation slides and lecturer's speech. This paper examines the relative utility of automatically recovered text from these sources for lecture video retrieval. To extract the visual information, we automatically detect slides within the videos and apply optical character recognition to obtain their text. Automatic speech recognition is used similarly to extract spoken text from the recorded audio. We perform controlled experiments with manually created ground truth for both the slide and spoken text from more than 60 hours of lecture video. We compare the automatically extracted slide and spoken text in terms of accuracy relative to ground truth, overlap with one another, and utility for video retrieval. Results reveal that automatically recovered slide text and spoken text contain different content with varying error profiles. Experiments demonstrate that automatically extracted slide text enables higher precision video retrieval than automatically recovered spoken text.
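The accuracy-versus-ground-truth comparison described above can be sketched as token-level precision and recall of each automatically recovered text stream. The sentences and error patterns below are invented for illustration, not taken from the paper's data.

```python
def token_set(text):
    """Lowercased bag of unique tokens for a text snippet."""
    return set(text.lower().split())

def precision_recall(extracted, ground_truth):
    """Token-level precision/recall of an automatically extracted text
    relative to its manually created ground truth."""
    ext, gt = token_set(extracted), token_set(ground_truth)
    tp = len(ext & gt)
    return tp / len(ext), tp / len(gt)

# Hypothetical slide OCR output with one character-recognition error
slide_gt  = "gradient descent converges for convex loss functions"
slide_ocr = "gradient descent converges for convex loss funct1ons"

# Hypothetical ASR transcript with two substitution errors
speech_gt  = "gradient descent converges for convex loss functions"
speech_asr = "gradient dissent converges for convex laws functions"

p_slide, r_slide = precision_recall(slide_ocr, slide_gt)    # 6/7 each
p_asr, r_asr     = precision_recall(speech_asr, speech_gt)  # 5/7 each
```

Averaged over a corpus, differing error profiles of OCR'd slide text and ASR transcripts show up as different precision/recall figures of this kind.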
The study of language knowledge guided by a purely biological perspective prioritizes the study of syntax. The essential process of syntax is recursion--the ability to generate an infinite array of expressions from a limited set of elements. Researchers working within the biological perspective argue that this ability is possible only because of an innately specified genetic makeup that is specific to human beings. Such a view of language knowledge may be fully justified in discussions on biolinguistics, and in evolutionary biology. However, it is grossly inadequate in understanding language-learning problems, particularly those experienced by children with neurodevelopmental disorders such as developmental dyslexia, Williams syndrome, specific language impairment and autism spectrum disorders. Specifically, syntax-centered definitions of language knowledge completely ignore certain crucial aspects of language learning and use, namely, that language is embedded in a social context; that the role of environmental triggering as a learning mechanism is grossly underestimated; that a considerable extent of visuo-spatial information accompanies speech in day-to-day communication; that the developmental process itself lies at the heart of knowledge acquisition; and that there is a tremendous variation in the orthographic systems associated with different languages. All these (socio-cultural) factors can influence the rate and quality of spoken and written language acquisition, resulting in much variation in phenotypes associated with disorders known to have a genetic component. Delineation of such phenotypic variability requires inputs from varied disciplines such as neurobiology, neuropsychology, linguistics and communication disorders. In this paper, I discuss published research that questions cognitive modularity and emphasises the role of the environment for understanding linguistic capabilities of children with neuro-developmental disorders. The discussion pertains
de Groot, A.M.B.; Filipović, L.; Pütz, M.
The linguistic expressions of the majority of bilinguals exhibit deviations from the corresponding expressions of monolinguals in phonology, grammar, and semantics, and in both languages. In addition, bilinguals may process spoken and written language differently from monolinguals. Two possible
Pfau, R.; Steinbach, M.
Studies on sign language grammaticalization have demonstrated that most of the attested diachronic changes from lexical to functional element parallel those previously described for spoken languages. To date, most of these studies are either descriptive in nature or embedded within
Lutfi, Syaheerah Lebai; Fernández-Martínez, Fernando; Lorenzo-Trueba, Jaime; Barra-Chicote, Roberto; Montero, Juan Manuel
We describe work on the infusion of emotion into a limited-task autonomous spoken conversational agent situated in the domestic environment, using a need-inspired task-independent emotion model (NEMO). In order to demonstrate the generation of affect through the use of the model, we describe the work of integrating it with a natural-language mixed-initiative HiFi-control spoken conversational agent (SCA). NEMO and the host system communicate externally, removing the need for the Dialog Manager to be modified, as is done in most existing dialog systems, in order to be adaptive. The first part of the paper concerns the integration between NEMO and the host agent. The second part summarizes the work on automatic affect prediction, namely of frustration and contentment, from dialog features, a non-conventional source, in an attempt to move towards a more user-centric approach. The final part reports the evaluation results obtained from a user study in which both versions of the agent (non-adaptive and emotionally adaptive) were compared. The results provide substantial evidence of the benefits of adding emotion to a spoken conversational agent, especially in mitigating users' frustration and, ultimately, improving their satisfaction.
Lestari, Dessi Puji; Furui, Sadaoki
Recognition errors of proper nouns and foreign words significantly decrease the performance of ASR-based speech applications such as voice dialing systems, speech summarization, spoken document retrieval, and spoken query-based information retrieval (IR). The reason is that proper nouns and words that come from other languages are usually the most important key words. The loss of such words due to misrecognition in turn leads to a loss of significant information from the speech source. This paper focuses on how to improve the performance of Indonesian ASR by alleviating the problem of pronunciation variation of proper nouns and foreign words (English words in particular). To improve the proper noun recognition accuracy, proper-noun specific acoustic models are created by supervised adaptation using maximum likelihood linear regression (MLLR). To improve English word recognition, the pronunciation of English words contained in the lexicon is fixed by using rule-based English-to-Indonesian phoneme mapping. The effectiveness of the proposed method was confirmed through spoken query based Indonesian IR. We used Inference Network-based (IN-based) IR and compared its results with those of the classical Vector Space Model (VSM) IR, both using a tf-idf weighting schema. Experimental results show that IN-based IR outperforms VSM IR.
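The classical Vector Space Model baseline with tf-idf weighting mentioned above can be sketched as follows. This is a generic textbook implementation, not the authors' system, and the Indonesian-like example documents are invented for illustration.

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """tf-idf vectors for a document collection (tf * log(N/df) weighting)."""
    N = len(docs)
    tokenized = [d.lower().split() for d in docs]
    df = Counter(t for toks in tokenized for t in set(toks))
    idf = {t: math.log(N / df[t]) for t in df}
    vecs = [{t: c * idf[t] for t, c in Counter(toks).items()} for toks in tokenized]
    return vecs, idf

def query_vector(query, idf):
    """Weight query terms with the collection's idf; unseen terms get weight 0."""
    return {t: c * idf.get(t, 0.0) for t, c in Counter(query.lower().split()).items()}

def cosine(u, v):
    """Cosine similarity between two sparse tf-idf vectors."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

docs = [
    "informasi jadwal kereta jakarta",  # invented example documents
    "resep masakan ayam goreng",
    "tiket kereta jakarta murah",
]
vecs, idf = tfidf_vectors(docs)
q = query_vector("kereta jakarta", idf)
scores = [cosine(q, v) for v in vecs]
# Documents 0 and 2 mention the query terms; document 1 scores 0.
```

An inference-network model, by contrast, combines term evidence through a probabilistic network rather than a single cosine score, which is the comparison the paper reports.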
Kusters, Annelies; Spotti, Massimiliano; Swanwick, Ruth; Tapio, Elina
This paper presents a critical examination of key concepts in the study of (signed and spoken) language and multimodality. It shows how shifts in conceptual understandings of language use, moving from bilingualism to multilingualism and (trans)languaging, have resulted in the revitalisation of the concept of language repertoires. We discuss key…
Pfau, R.; Steinbach, M.; Herrmann, A.
Since natural languages exist in two different modalities - the visual-gestural modality of sign languages and the auditory-oral modality of spoken languages - it is obvious that all fields of research in modern linguistics will benefit from research on sign languages. Although previous studies have
Poetry in a sign language can make use of literary devices just as poetry in a spoken language can. The study of literary expression in sign languages has increased over the last twenty years and for South African Sign Language (SASL) such literary texts have also become more available. This article gives a brief overview ...
Language policy and speech practice in Cape Town: An exploratory public health sector study. Michellene Williams, Simon Bekker. Abstract. Public language policy in South Africa recognises 11 official spoken languages. In Cape Town, and in the Western Cape, three of these eleven languages have been selected for ...
Bédard, Pascale; Audet, Anne-Marie; Drouin, Patrick; Roy, Johanna-Pascale; Rivard, Julie; Tremblay, Pascale
Sublexical phonotactic regularities in language have a major impact on language development, as well as on speech processing and production throughout the entire lifespan. To understand the impact of phonotactic regularities on speech and language functions at the behavioral and neural levels, it is essential to have access to oral language corpora to study these complex phenomena in different languages. Yet, probably because of their complexity, oral language corpora remain less common than written language corpora. This article presents the first corpus and database of spoken Quebec French syllables and phones: SyllabO+. This corpus contains phonetic transcriptions of over 300,000 syllables (over 690,000 phones) extracted from recordings of 184 healthy adult native Quebec French speakers, ranging in age from 20 to 97 years. To ensure the representativeness of the corpus, these recordings were made in both formal and familiar communication contexts. Phonotactic distributional statistics (e.g., syllable and co-occurrence frequencies, percentages, percentile ranks, transition probabilities, and pointwise mutual information) were computed from the corpus. An open-access online application to search the database was developed, and is available at www.speechneurolab.ca/syllabo. In this article, we present a brief overview of the corpus, as well as the syllable and phone databases, and we discuss their practical applications in various fields of research, including cognitive neuroscience, psycholinguistics, neurolinguistics, experimental psychology, phonetics, and phonology. Nonacademic practical applications are also discussed, including uses in speech-language pathology.
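The distributional statistics listed above (co-occurrence counts, transition probabilities, pointwise mutual information) have standard definitions. A small sketch of how they can be computed from a flat sequence of transcribed syllables, as an illustration of the measures rather than SyllabO+'s actual implementation:

```python
import math
from collections import Counter

def bigram_stats(syllables):
    """Co-occurrence statistics for adjacent syllable pairs:
    counts, forward transition probabilities P(b | a), and
    pointwise mutual information log2(P(a,b) / (P(a) P(b))).
    `syllables` is a flat list of syllable strings."""
    unigrams = Counter(syllables)
    bigrams = Counter(zip(syllables, syllables[1:]))
    n_uni = sum(unigrams.values())
    n_bi = sum(bigrams.values())
    # totals of bigrams starting with each syllable, for P(b | a)
    first_totals = Counter()
    for (a, _), c in bigrams.items():
        first_totals[a] += c
    stats = {}
    for (a, b), c in bigrams.items():
        p_ab = c / n_bi
        p_a, p_b = unigrams[a] / n_uni, unigrams[b] / n_uni
        stats[(a, b)] = {
            "count": c,
            "transition_prob": c / first_totals[a],
            "pmi": math.log2(p_ab / (p_a * p_b)),
        }
    return stats
```

Positive PMI marks syllable pairs that co-occur more often than their individual frequencies predict; the corpus's percentile ranks are then just rankings of these values across the database.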
In recent years, there has been a growing debate in the United States, Europe, and Australia about the nature of the Deaf community as a cultural community, and the recognition of signed languages as “real” or “legitimate” languages comparable in all meaningful ways to spoken languages. An important element of this ...
Larsen, Lars Bo
This work is centred on the methods and problems associated with defining and measuring the usability of Spoken Dialogue Systems (SDS). The starting point is the fact that speech-based interfaces have several times during the last 20 years fallen short of the high expectations and predictions held by industry, researchers and analysts. Several studies in the SDS literature indicate that this can be ascribed to a lack of attention from the speech technology community towards the usability of such systems. The experimental results presented in this work are based on a field trial with the OVID home... model roughly explains 50% of the observed variance in user satisfaction based on measures of task success and speech recognition accuracy, a result similar to those obtained at AT&T. The applied methods are discussed and evaluated critically.
Corneli, Joseph; Corneli, Miriam
"Natural Language," whether spoken and attended to by humans, or processed and generated by computers, requires networked structures that reflect creative processes in semantic, syntactic, phonetic, linguistic, social, emotional, and cultural modules. Being able to produce novel and useful behavior following repeated practice gets to the root of both artificial intelligence and human language. This paper investigates the modalities involved in language-like applications that computers -- and ...
Evers, A.; van Kampen, N.J.
The language acquisition procedure identifies certain properties of the target grammar before others. The evidence from the input is processed in a stepwise order. Section 1 equates that order and its typical effects with an order of parameter setting. The question is how the acquisition procedure
Trudeau, Natacha; Sutton, Ann; Morford, Jill P
While research on spoken language has a long tradition of studying and contrasting language production and comprehension, the study of graphic symbol communication has focused more on production than comprehension. As a result, the relationships between the ability to construct and to interpret graphic symbol sequences are not well understood. This study explored the use of graphic symbol sequences in children without disabilities aged 3;0 to 6;11 (years; months) (n=111). Children took part in nine tasks that systematically varied input and output modalities (speech, action, and graphic symbols). Results show that in 3- and 4-year-olds, attributing meaning to a sequence of symbols was particularly difficult even when the children knew the meaning of each symbol in the sequence. Similarly, while even 3- and 4-year-olds could produce a graphic symbol sequence following a model, transposing a spoken sentence into a graphic sequence was more difficult for them. Representing an action with graphic symbols was difficult even for 5-year-olds. Finally, the ability to comprehend graphic-symbol sequences preceded the ability to produce them. These developmental patterns, as well as memory-related variables, should be taken into account in choosing intervention strategies with young children who use AAC.
The article gives new evidence about the adverb as a part of the grammatical system of the Ukrainian steppe dialect spread in the area between the Danube and the Dniester rivers. The author proves that the grammatical system of the dialect spoken in the v. Shevchenkove, Kiliya district, Odessa region is determined by the historical development of the Ukrainian language rather than the influence of neighboring dialects.
Vigliocco, Gabriella; Perniss, Pamela; Vinson, David
Our understanding of the cognitive and neural underpinnings of language has traditionally been firmly based on spoken Indo-European languages and on language studied as speech or text. However, in face-to-face communication, language is multimodal: speech signals are invariably accompanied by visual information on the face and in manual gestures, and sign languages deploy multiple channels (hands, face and body) in utterance construction. Moreover, the narrow focus on spoken Indo-European languages has entrenched the assumption that language is composed wholly of an arbitrary system of symbols and rules. However, iconicity (i.e. resemblance between aspects of communicative form and meaning) is also present: speakers use iconic gestures when they speak; many non-Indo-European spoken languages exhibit a substantial amount of iconicity in word forms and, finally, iconicity is the norm, rather than the exception, in sign languages. This introduction provides the motivation for taking a multimodal approach to the study of language learning, processing and evolution, and discusses the broad implications of shifting our current dominant approaches and assumptions to encompass multimodal expression in both signed and spoken languages.
Yusa, Noriaki; Kim, Jungho; Koizumi, Masatoshi; Sugiura, Motoaki; Kawashima, Ryuta
Children naturally acquire a language in social contexts where they interact with their caregivers. Indeed, research shows that social interaction facilitates lexical and phonological development at the early stages of child language acquisition. It is not clear, however, whether the relationship between social interaction and learning applies to adult second language acquisition of syntactic rules. Does learning second language syntactic rules through social interactions with a native speaker or without such interactions impact behavior and the brain? The current study aims to answer this question. Adult Japanese participants learned a new foreign language, Japanese sign language (JSL), either through a native deaf signer or via DVDs. Neural correlates of acquiring new linguistic knowledge were investigated using functional magnetic resonance imaging (fMRI). The participants in each group were indistinguishable in terms of their behavioral data after the instruction. The fMRI data, however, revealed significant differences in the neural activities between two groups. Significant activations in the left inferior frontal gyrus (IFG) were found for the participants who learned JSL through interactions with the native signer. In contrast, no cortical activation change in the left IFG was found for the group who experienced the same visual input for the same duration via the DVD presentation. Given that the left IFG is involved in the syntactic processing of language, spoken or signed, learning through social interactions resulted in an fMRI signature typical of native speakers: activation of the left IFG. Thus, broadly speaking, availability of communicative interaction is necessary for second language acquisition and this results in observed changes in the brain.
Lauren B. Collister
Twenty listeners were exposed to spoken and sung passages in English produced by three trained vocalists. Passages included representative words extracted from a large database of vocal lyrics, including both popular and classical repertoires. Target words were set within spoken or sung carrier phrases. Sung carrier phrases were selected from classical vocal melodies. Roughly a quarter of all words sung by an unaccompanied soloist were misheard. Sung passages showed a seven-fold decrease in intelligibility compared with their spoken counterparts. The perceptual mistakes occurring with vowels replicate previous studies showing the centralization of vowels. Significant confusions are also evident for consonants, especially voiced stops and nasals.
Van Heerden, C
...speech recognisers for a diverse multitude of languages. The paper investigates the feasibility of developing small-vocabulary speaker-independent ASR systems designed for use in a telephone-based information system, using ten resource-scarce languages...
Lillo-Martin, Diane C; Gajewski, Jon
Linguistic research has identified abstract properties that seem to be shared by all languages; such properties may be considered defining characteristics. In recent decades, the recognition that human language is found not only in the spoken modality but also in the form of sign languages has led to a reconsideration of some of these potential linguistic universals. In large part, the linguistic analysis of sign languages has led to the conclusion that universal characteristics of language can be stated at an abstract enough level to include languages in both spoken and signed modalities. For example, languages in both modalities display hierarchical structure at sub-lexical and phrasal level, and recursive rule application. However, this does not mean that modality-based differences between signed and spoken languages are trivial. In this article, we consider several candidate domains for modality effects, in light of the overarching question: are signed and spoken languages subject to the same abstract grammatical constraints, or is a substantially different conception of grammar needed for the sign language case? We look at differences between language types based on the use of space, iconicity, and the possibility for simultaneity in linguistic expression. The inclusion of sign languages does support some broadening of the conception of human language, in ways that are applicable for spoken languages as well. Still, the overall conclusion is that one grammar applies for human language, no matter the modality of expression. WIREs Cogn Sci 2014, 5:387-401. doi: 10.1002/wcs.1297
Freed, Jenny; Adams, Catherine; Lockton, Elaine
Children who have pragmatic language impairment (CwPLI) have difficulties with the use of language in social contexts and show impairments in above-sentence level language tasks. Previous studies have found that typically developing children's reading comprehension (RC) is predicted by reading accuracy and spoken sentence level comprehension (SLC). This study explores the predictive ability of these factors and above-sentence level comprehension (ASLC) on RC skills in a group of CwPLI. Sixty-nine primary school-aged CwPLI completed a measure of RC along with measures of reading accuracy, spoken SLC and both visual (pictorially presented) and spoken ASLC tasks. Regression analyses showed that reading accuracy was the strongest predictor of RC. Visual ASLC did not explain unique variance in RC on top of spoken SLC. In contrast, a measure of spoken ASLC explained unique variance in RC, independent from that explained by spoken SLC. A regression model with nonverbal intelligence, reading accuracy, spoken SLC and spoken ASLC as predictors explained 74.2% of the variance in RC. Findings suggest that spoken ASLC may measure additional factors that are important for RC success in CwPLI and should be included in routine assessments for language and literacy learning in this group.
Gruhl, Jonathan C.; Erosheva, Elena A.; Gibbons, Laura E.; McCurry, Susan M.; Rhoads, Kristoffer; Nguyen, Viet; Arani, Keerthi; Masaki, Kamal; White, Lon
Objectives. Spoken bilingualism may be associated with cognitive reserve. Mastering a complicated written language may be associated with additional reserve. We sought to determine if midlife use of spoken and written Japanese was associated with lower rates of late life cognitive decline. Methods. Participants were second-generation Japanese-American men from the Hawaiian island of Oahu, born 1900–1919, free of dementia in 1991, and categorized based on midlife self-reported use of spoken and written Japanese (total n included in primary analysis = 2,520). Cognitive functioning was measured with the Cognitive Abilities Screening Instrument scored using item response theory. We used mixed effects models, controlling for age, income, education, smoking status, apolipoprotein E e4 alleles, and number of study visits. Results. Rates of cognitive decline were not related to use of spoken or written Japanese. This finding was consistent across numerous sensitivity analyses. Discussion. We did not find evidence to support the hypothesis that multilingualism is associated with cognitive reserve. PMID:20639282
Caselli, Naomi K; Pyers, Jennie E
Iconic mappings between words and their meanings are far more prevalent than once estimated and seem to support children's acquisition of new words, spoken or signed. We asked whether iconicity's prevalence in sign language overshadows two other factors known to support the acquisition of spoken vocabulary: neighborhood density (the number of lexical items phonologically similar to the target) and lexical frequency. Using mixed-effects logistic regressions, we reanalyzed 58 parental reports of native-signing deaf children's productive acquisition of 332 signs in American Sign Language (ASL; Anderson & Reilly, 2002) and found that iconicity, neighborhood density, and lexical frequency independently facilitated vocabulary acquisition. Despite differences in iconicity and phonological structure between signed and spoken language, signing children, like children learning a spoken language, track statistical information about lexical items and their phonological properties and leverage this information to expand their vocabulary.
Ingvalson, Erin M.; Wong, Patrick C. M.
Cochlear implants (CI) have brought with them hearing ability for many prelingually deafened children. Advances in CI technology have brought not only hearing ability but speech perception to these same children. Concurrent with the development of speech perception has come spoken language development, and one goal now is that prelingually deafened CI recipient children will develop spoken language capabilities on par with those of normal hearing (NH) children. This goal has not been met pure...
Johannessen, Janne Bondi; Salmons, Joseph C.; Westergaard, Marit; Anderssen, Merete; Arnbjörnsdóttir, Birna; Allen, Brent; Pierce, Marc; Boas, Hans C.; Roesch, Karen; Brown, Joshua R.; Putnam, Michael; Åfarli, Tor A.; Newman, Zelda Kahan; Annear, Lucas; Speth, Kristin
This book presents new empirical findings about Germanic heritage varieties spoken in North America: Dutch, German, Pennsylvania Dutch, Icelandic, Norwegian, Swedish, West Frisian and Yiddish, and varieties of English spoken both by heritage speakers and in communities after language shift. The volume focuses on three critical issues underlying the notion of ‘heritage language’: acquisition, attrition and change. The book offers theoretically-informed discussions of heritage language processe...
Thompson, Robin L; Vinson, David P; Woll, Bencie; Vigliocco, Gabriella
An arbitrary link between linguistic form and meaning is generally considered a universal feature of language. However, iconic (i.e., nonarbitrary) mappings between properties of meaning and features of linguistic form are also widely present across languages, especially signed languages. Although recent research has shown a role for sign iconicity in language processing, research on the role of iconicity in sign-language development has been mixed. In this article, we present clear evidence that iconicity plays a role in sign-language acquisition for both the comprehension and production of signs. Signed languages were taken as a starting point because they tend to encode a higher degree of iconic form-meaning mappings in their lexicons than spoken languages do, but our findings are more broadly applicable: Specifically, we hypothesize that iconicity is fundamental to all languages (signed and spoken) and that it serves to bridge the gap between linguistic form and human experience.
As an experienced teacher of advanced learners of English I am deeply aware of recurrent problems which these learners experience as regards grammatical accuracy. In this paper, I focus on researching inaccuracies in the use of verbal categories. I draw the data from the spoken learner corpus LINDSEI_CZ and analyze the performance of 50 advanced (C1–C2) learners of English whose mother tongue is Czech. The main method used is Computer-aided Error Analysis within the larger framework of Learner Corpus Research. The results reveal that the key area of difficulty is the use of tenses and tense agreements, and especially the use of the present perfect. Other error-prone aspects are also described. The study also identifies a number of triggers which may lie at the root of the problems. The identification of these triggers reveals deficiencies in the teaching of grammar, mainly too much focus on decontextualized practice, use of potentially confusing rules, and the lack of attempt to deal with broader notions such as continuity and perfectiveness. Whilst the study is useful for the teachers of advanced learners, its pedagogical implications stretch to lower levels of proficiency as well.
Magnuson, James S.; Mirman, Daniel; Luthra, Sahil; Strauss, Ted; Harris, Harlan D.
Human perception, cognition, and action require fast integration of bottom-up signals with top-down knowledge and context. A key theoretical perspective in cognitive science is the interactive activation hypothesis: forward and backward flow in bidirectionally connected neural networks allows humans and other biological systems to approximate optimal integration of bottom-up and top-down information under real-world constraints. An alternative view is that online feedback is neither necessary nor helpful; purely feed-forward alternatives can be constructed for any feedback system, and online feedback could not improve processing and would preclude veridical perception. In the domain of spoken word recognition, the latter view was apparently supported by simulations using the interactive activation model, TRACE, with and without feedback: as many words were recognized more quickly without feedback as were recognized faster with feedback. However, these simulations used only a small set of words and did not address a primary motivation for interaction: making a model robust in noise. We conducted simulations using hundreds of words, and found that the majority were recognized more quickly with feedback than without. More importantly, as we added noise to inputs, accuracy and recognition times were better with feedback than without. We follow these simulations with a critical review of recent arguments that online feedback in interactive activation models like TRACE is distinct from other potentially helpful forms of feedback. We conclude that in addition to providing the benefits demonstrated in our simulations, online feedback provides a plausible means of implementing putatively distinct forms of feedback, supporting the interactive activation hypothesis. PMID:29666593
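The interactive activation mechanism at issue can be caricatured in a few lines: noisy bottom-up evidence activates word units, and (when feedback is on) word units reinforce the phoneme units consistent with them. The lexicon, parameters, and update rule below are invented for illustration and are far simpler than TRACE itself:

```python
import random

# Toy lexicon of phoneme strings (hypothetical; not TRACE's real lexicon)
LEXICON = ["cat", "cap", "dog"]
PHONES = sorted({p for w in LEXICON for p in w})

def recognize(input_word, noise=0.3, feedback=0.5, steps=20, seed=0):
    """Minimal interactive-activation sketch: noisy bottom-up phoneme
    evidence drives word units; word units feed activation back to
    their phonemes. feedback=0 gives the purely feed-forward variant."""
    rng = random.Random(seed)
    # bottom-up evidence per position: 1 for the heard phone, plus noise
    evidence = [{p: (1.0 if p == ph else 0.0) + rng.gauss(0, noise)
                 for p in PHONES} for ph in input_word]
    phone_act = [dict(e) for e in evidence]
    word_act = {w: 0.0 for w in LEXICON}
    for _ in range(steps):
        # word units integrate support from their phonemes
        for w in LEXICON:
            word_act[w] = sum(phone_act[i].get(p, 0.0)
                              for i, p in enumerate(w)) / len(w)
        # feedback: word units reinforce the phonemes they contain
        for i in range(len(input_word)):
            for p in PHONES:
                top_down = sum(a for w, a in word_act.items()
                               if i < len(w) and w[i] == p)
                phone_act[i][p] = evidence[i][p] + feedback * top_down
    return max(word_act, key=word_act.get)
```

Running such a sketch with many words, varying `noise` and comparing `feedback=0` against `feedback>0`, mirrors the logic of the simulations described above, though obviously not their scale or their actual parameter settings.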
There are conflicting claims among scholars on whether the structural outputs of the types of English spoken in countries where English is used as a second language give such speech forms the status of varieties of English. This study examined those morphological features considered to be marked features of the variety spoken in Nigeria according to Kirkpatrick (2011) and the variety spoken in Malaysia by considering the claims of the Missing Surface Inflection Hypothesis (MSIH), a Second Lan...
South Africa have been affected by the policies of apartheid, and its educational and linguistic consequences, in a .... teaching strategies, and more recently of the perception that a signed language is a manual form ... of color and on the basis of the (former) official spoken languages designated by the apartheid education ...
In Africa there are a number of languages spoken, some of which have their own indigenous scripts that are used for writing. In this paper we assess these languages and present an in-depth script analysis for the Amharic writing system, one of the well-known indigenous scripts of Africa. Amharic is the official and working ...
THE PRESENT STUDY IS A TRANSLATION OF THE WORK "STROI ARABSKOGO YAZYKA" BY THE EMINENT RUSSIAN LINGUIST AND SEMITICS SCHOLAR, N.Y. YUSHMANOV. IT DEALS CONCISELY WITH THE POSITION OF ARABIC AMONG THE SEMITIC LANGUAGES AND THE RELATION OF THE LITERARY (CLASSICAL) LANGUAGE TO THE VARIOUS MODERN SPOKEN DIALECTS, AND PRESENTS A CONDENSED BUT…
bilingualism and extensive translation. May it be noted that all the languages of the world (7,000 in number; cf. Akinlabi and Connell, 2007) cannot be spoken even skeletally by any individual. Therefore the multilingualism proposed by Crystal will have to favor only a few world languages. Toolan's extensive bilingualism is ...
This paper surveys some of the changes in teaching the four language skills in the past 15 years. It focuses on two main changes for each skill: understanding spoken language and willingness to communicate for speaking; product, process, and genre approaches and a focus on feedback for writing; extensive reading and literature for reading; and…
Alotaibi, Yousef Ajami; Hussain, Amir
Arabic is one of the world's oldest languages and is currently the second most spoken language in terms of number of speakers. However, it has not received much attention from the traditional speech processing research community. This study is specifically concerned with the analysis of vowels in the Modern Standard Arabic dialect. The first and second formant values in these vowels are investigated and the differences and similarities between the vowels are explored using consonant-vowel-consonant (CVC) utterances. For this purpose, an HMM-based recognizer was built to classify the vowels and the performance of the recognizer analyzed to help understand the similarities and dissimilarities between the phonetic features of vowels. The vowels are also analyzed in both time and frequency domains, and the consistent findings of the analysis are expected to facilitate future Arabic speech processing tasks such as vowel and speech recognition and classification.
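A drastically simplified stand-in for formant-based vowel classification is nearest-centroid assignment in the F1-F2 plane. The centroid values below are illustrative placeholders, not measurements from the study, and the study's actual classifier was an HMM:

```python
import math

# Illustrative (not measured) F1/F2 centroids in Hz for three long vowels;
# real values would come from formant analysis of the CVC recordings.
CENTROIDS = {
    "/a:/": (700, 1300),
    "/i:/": (300, 2200),
    "/u:/": (350, 800),
}

def classify_vowel(f1, f2):
    """Assign a vowel label by the nearest formant centroid
    (Euclidean distance in the F1-F2 plane)."""
    return min(CENTROIDS, key=lambda v: math.hypot(f1 - CENTROIDS[v][0],
                                                   f2 - CENTROIDS[v][1]))
```

Vowels whose centroids lie close together in this plane are exactly the ones a recognizer confuses most often, which is the kind of similarity analysis the abstract describes.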
Nomikou, Iris; Koke, Monique; Rohlfing, Katharina J.
In embodied theories on language, it is widely accepted that experience in acting generates an expectation of this action when hearing the word for it. However, how this expectation emerges during language acquisition is still not well understood. Assuming that the intermodal presentation of information facilitates perception, prior research had suggested that early in infancy, mothers perform their actions in temporal synchrony with language. Further research revealed that this synchrony is a form of multimodal responsive behavior related to the child’s later language development. Expanding on these findings, this article explores the relationship between action–language synchrony and the acquisition of verbs. Using qualitative and quantitative methods, we analyzed the coordination of verbs and action in mothers’ input to six-month-old infants and related these maternal strategies to the infants’ later production of verbs. We found that the verbs used by mothers in these early interactions were tightly coordinated with the ongoing action and very frequently responsive to infant actions. It is concluded that use of these multimodal strategies could significantly predict the number of spoken verbs in infants’ vocabulary at 24 months. PMID:28468265
This study examines the effect of input modality (video, audio, and captions, i.e., on-screen text in the same language as audio) on (a) the learning of written and aural word forms, (b) overall vocabulary gains, (c) attention to input, and (d) vocabulary learning strategies of beginning L2 learners. Twenty-six second-semester learners of Russian…
Schillingmann, Lars; Ernst, Jessica; Keite, Verena; Wrede, Britta; Meyer, Antje S; Belke, Eva
In language production research, the latency with which speakers produce a spoken response to a stimulus and the onset and offset times of words in longer utterances are key dependent variables. Measuring these variables automatically often yields partially incorrect results. However, exact measurements through the visual inspection of the recordings are extremely time-consuming. We present AlignTool, an open-source alignment tool that preliminarily establishes the onset and offset times of words and phonemes in spoken utterances using Praat, and subsequently performs a forced alignment of the spoken utterances and their orthographic transcriptions in the automatic speech recognition system MAUS. AlignTool creates a Praat TextGrid file for inspection and manual correction by the user, if necessary. We evaluated AlignTool's performance with recordings of single-word and four-word utterances as well as semi-spontaneous speech. AlignTool performs well with audio signals with an excellent signal-to-noise ratio, requiring virtually no corrections. For audio signals of lesser quality, AlignTool is still highly functional but its results may require more frequent manual corrections. We also found that audio recordings including long silent intervals tended to pose greater difficulties for AlignTool than recordings filled with speech, which AlignTool analyzed well overall. We expect that by semi-automatizing the temporal analysis of complex utterances, AlignTool will open new avenues in language production research.
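The preliminary boundary estimation that precedes forced alignment can be illustrated with a crude energy-threshold onset detector. This is not AlignTool's Praat-based method, just a sketch of the idea on a raw sample array, and it also shows why long silent stretches and low signal-to-noise ratios make the job harder (the threshold must separate silence from speech):

```python
def estimate_onset(samples, rate, frame_ms=10, threshold=0.02):
    """Crude speech-onset estimate: the start time of the first frame
    whose RMS energy exceeds a fixed threshold. Returns the onset time
    in seconds, or None if no frame is above threshold."""
    frame_len = max(1, int(rate * frame_ms / 1000))
    for start in range(0, len(samples) - frame_len + 1, frame_len):
        frame = samples[start:start + frame_len]
        rms = (sum(x * x for x in frame) / len(frame)) ** 0.5
        if rms > threshold:
            return start / rate
    return None
```

A forced aligner then refines such rough boundaries by matching the audio against the phone sequence derived from the orthographic transcription.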
Shuai, Lan; Malins, Jeffrey G
Despite its prevalence as one of the most highly influential models of spoken word recognition, the TRACE model has yet to be extended to tonal languages such as Mandarin Chinese. A key reason is that the model in its current state does not encode lexical tone. In this report, we present a modified version of the jTRACE model in which we built on its existing architecture to code for Mandarin phonemes and tones. Units are coded in a way that is meant to capture the similarity in timing of access to vowel and tone information that has been observed in previous studies of Mandarin spoken word recognition. We validated the model by first simulating a recent experiment that used the visual world paradigm to investigate how native Mandarin speakers process monosyllabic Mandarin words (Malins & Joanisse, 2010). We then simulated two psycholinguistic phenomena: (1) differences in the timing of resolution of tonal contrast pairs, and (2) the interaction between syllable frequency and tonal probability. In all cases, the model gave rise to results comparable to those of published data with human subjects, suggesting that it is a viable working model of spoken word recognition in Mandarin. It is our hope that this tool will be of use to practitioners studying the psycholinguistics of Mandarin Chinese and will help inspire similar models for other tonal languages, such as Cantonese and Thai.
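The modeling idea (candidate words accumulate activation as phoneme and tone units arrive, so that tonal minimal pairs such as ma1/ma3 stay tied until tone information comes in) can be caricatured in a few lines. This is a toy activation update loosely inspired by TRACE-style dynamics, not the jTRACE implementation; the three-word lexicon and the decay/gain parameters are invented for illustration.

```python
# Toy lexicon: each word is a sequence of input units, with the
# lexical tone (T1, T3, ...) treated as a unit alongside phonemes.
lexicon = {"ma1": ["m", "a", "T1"],
           "ma3": ["m", "a", "T3"],
           "pa1": ["p", "a", "T1"]}

def activate(input_units, decay=0.1, gain=0.5):
    """Incrementally activate lexical candidates: each time step,
    activation decays, and words whose unit matches the input gain."""
    act = {w: 0.0 for w in lexicon}
    history = []
    for t, unit in enumerate(input_units):
        for w, units in lexicon.items():
            match = t < len(units) and units[t] == unit
            act[w] = act[w] * (1 - decay) + (gain if match else 0.0)
        history.append(dict(act))
    return history

final = activate(["m", "a", "T1"])[-1]
print(max(final, key=final.get))  # ma1 wins once the tone arrives
```

Note that after the input "m", "a", the candidates ma1 and ma3 are exactly tied; only the tone unit separates them, mirroring the timing-of-resolution effect for tonal contrast pairs described above.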
Stern, Alissa Joy
For a language to survive, it must be spoken and passed down to the next generation. But how can we engage teenagers--so crucial for language transmission--to use and value their local tongue when they are bombarded by pressures from outside and from within their society to only speak national and international languages? This paper analyses the…
Many Australian Aboriginal people use a sign language ("hand talk") that mirrors their local spoken language and is used both in culturally appropriate settings when speech is taboo or counterindicated and for community communication. The characteristics of these languages are described, and early European settlers' reports of deaf…
Ling, Bernadette; Kettle, Margaret
In second language classrooms, listening is gaining recognition as an active element in the processes of learning and using a second language. Currently, however, much of the teaching of listening prioritises comprehension without sufficient emphasis on the skills and strategies that enhance learners' understanding of spoken language. This paper…
In order to preserve their distinctive cultures, people work to devise writing systems for their languages as recording tools. Mandarin, Taiwanese and Hakka are the three major and most popular Han dialects spoken in Chinese society. Their writing systems are all in Han characters. Various and independent phonetic…
Stamp, Rose; Schembri, Adam; Evans, Bronwen G.; Cormier, Kearsy
Short-term linguistic accommodation has been observed in a number of spoken language studies. The first of its kind in sign language research, this study aims to investigate the effects of regional varieties in contact and lexical accommodation in British Sign Language (BSL). Twenty-five participants were recruited from Belfast, Glasgow,…
Bird, Sonya; Kell, Sarah
Most Indigenous language revitalization programs in Canada currently emphasize spoken language. However, virtually no research has been done on the role of pronunciation in the context of language revitalization. This study set out to gain an understanding of attitudes around pronunciation in the SENCOTEN-speaking community, in order to determine…
Puskás, Tünde; Björk-Willén, Polly
This article explores dilemmatic aspects of language policies in a preschool group in which three languages (Swedish, Romani and Arabic) are spoken on an everyday basis. The article highlights the interplay between policy decisions on the societal level, the teachers' interpretations of these policies, as well as language practices on the micro…
a case for the existence of a Kiswahili sign language since KSL is a natural language with its own autonomous grammar distinct from that of any spoken language. In this paper, we shall argue that the Kiswahili mouthed KSL signs are an outcome of contact between KSL – Kiswahili bilinguals and their hearing Kiswahili ...
Byram, Michael; Wagner, Manuela
Language teaching has long been associated with teaching in a country or countries where a target language is spoken, but this approach is inadequate. In the contemporary world, language teaching has a responsibility to prepare learners for interaction with people of other cultural backgrounds, teaching them skills and attitudes as well as…
The Ndebele language corpus described here is that compiled by the ALLEX Project (now ALRI) at the University of Zimbabwe. It is intended to reflect as much as possible the Ndebele language as spoken in Zimbabwe. The Ndebele language corpus was built in order to provide much-needed material for the study of the ...
A proposal to transform Spanish into a universal language because it possesses the prerequisites: it is a living language, spoken in several countries; it is a natural language; and it uses the ordinary alphabet. Details on simplification and standardization are given. (Text is in Spanish.) (AMH)
Newman, Aaron J.; Supalla, Ted; Hauser, Peter; Newport, Elissa; Bavelier, Daphne
Signed languages such as American Sign Language (ASL) are natural human languages that share all of the core properties of spoken human languages, but differ in the modality through which they are communicated. Neuroimaging and patient studies have suggested similar left hemisphere (LH)-dominant patterns of brain organization for signed and spoken languages, indicating that the linguistic nature of the information, rather than its modality, drives brain organization for language. However, the role of the right hemisphere (RH) in sign language has been less explored. In spoken languages, the RH supports the processing of numerous types of narrative-level information, including prosody, affect, facial expression, and discourse structure. In the present fMRI study, we contrasted the processing of ASL sentences that contained these types of narrative information with similar sentences without marked narrative cues. For all sentences, Deaf native signers showed robust bilateral activation of perisylvian language cortices, as well as the basal ganglia and medial frontal and medial temporal regions. However, RH activation in the inferior frontal gyrus and superior temporal sulcus was greater for sentences containing narrative devices, including areas involved in processing narrative content in spoken languages. These results provide additional support for the claim that all natural human languages rely on a core set of LH brain regions, and extend our knowledge to show that narrative linguistic functions typically associated with the RH in spoken languages are similarly organized in signed languages. PMID:20347996
Shtyrov, Yury; Smith, Marie L; Horner, Aidan J; Henson, Richard; Nathan, Pradeep J; Bullmore, Edward T; Pulvermüller, Friedemann
Previous research indicates that, under explicit instructions to listen to spoken stimuli or in speech-oriented behavioural tasks, the brain's responses to senseless pseudowords are larger than those to meaningful words; the reverse is true in non-attended conditions. These differential responses could be used as a tool to trace linguistic processes in the brain and their interaction with attention. However, as previous studies relied on explicit instructions to attend to or ignore the stimuli, a technique for automatic attention modulation (i.e., not dependent on explicit instruction) would be more advantageous, especially when cooperation with instructions cannot be guaranteed (e.g., neurological patients, children, etc.). Here we present a novel paradigm in which the stimulus context automatically draws attention to speech. In a non-attend passive auditory oddball sequence, rare words and pseudowords were presented among frequent non-speech tones of variable frequency and length. The low percentage of spoken stimuli guarantees an involuntary attention switch to them. The speech stimuli, in turn, could be disambiguated as words or pseudowords only at their end, at the last phoneme, after the attention switch had already occurred. Our results confirmed that this paradigm can indeed be used to induce automatic shifts of attention to spoken input. At ~250 ms after stimulus onset, a P3a-like neuromagnetic deflection was registered to spoken (but not tone) stimuli, indicating an involuntary attention shift. Later, after the word-pseudoword divergence point, we found a larger oddball response to pseudowords than words, best explained by neural processes of lexical search facilitated through increased attention. Furthermore, we demonstrate a breakdown of this orderly pattern of neurocognitive processes as a result of sleep deprivation. The new paradigm may thus be an efficient way to assess language comprehension processes and their dynamic interaction with those
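A stimulus list for such a passive oddball design (frequent non-speech tones of variable frequency and duration, plus a low percentage of rare word/pseudoword trials) is straightforward to generate. The proportions, tone frequencies, and durations below are placeholders for illustration, not the values used in the study.

```python
import random

def oddball_sequence(n=200, p_speech=0.1, seed=1):
    """Sketch of a trial list: mostly non-speech tones of variable
    frequency (Hz) and duration (s), with a small proportion of rare
    spoken word/pseudoword trials (all parameter values hypothetical)."""
    rng = random.Random(seed)
    seq = []
    for _ in range(n):
        if rng.random() < p_speech:
            seq.append(("speech", rng.choice(["word", "pseudoword"])))
        else:
            seq.append(("tone", (rng.choice([440, 523, 659]),
                                 rng.choice([0.10, 0.15, 0.20]))))
    return seq

seq = oddball_sequence()
print(sum(1 for kind, _ in seq if kind == "speech"), "speech trials of", len(seq))
```

Keeping the speech proportion low is the point: each spoken stimulus is rare enough to trigger the involuntary attention switch the paradigm relies on.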
Farfan, Jose Antonio Flores
Even though Nahuatl is the most widely spoken indigenous language in Mexico, it is endangered. Threats include poor support for Nahuatl-speaking communities, migration of Nahuatl speakers to cities where English and Spanish are spoken, prejudicial attitudes toward indigenous languages, lack of contact between small communities of different…
extent of the emphasis on the acquisition of vocabulary in school curricula. After a brief introduction, the author looks in chapter 2 at major books which in the 20th century worked on a controlled vocabulary for foreign-language learners in Europe, Asia and America. This section provides the background for the elaboration of ...
Gabriele Stein. Developing Your English Vocabulary: A Systematic New Approach. 2002, VIII + 272 pp. ... objective of this book is twofold: to compile a lexical core and to maximise the skills of language students by ... chapter 3, she offers twelve major ways of expanding this core-word list and differentiating lexical items to ...
data of the corpus and includes more formal audio material (lectures, TV and radio broadcasting). The book begins with a 20-page introduction, which is sometimes quite technical, but ... grounds words that belong to the core vocabulary of the language such as tool-. Lexikos 15 (AFRILEX-reeks/series 15: 2005): 338-339 ...
This dissertation is a descriptive grammar of Kove, an Austronesian language spoken in the West New Britain Province of Papua New Guinea. Kove is primarily spoken in 18 villages, including some on the small islands north of New Britain. There are about 9,000 people living in the area, but many are not fluent speakers of Kove. The dissertation…
Dekel, N.; Brosh, H.
This paper describes from a linguistic point of view the impact of the Hebrew spoken in Israel on the Arabic spoken natively by Israeli Arabs. Two main conditions enable mutual influences between Hebrew and Arabic in Israel: - The existence of two large groups of people speaking both languages
Spoken words are highly variable. A single word may never be uttered the same way twice. As listeners, we regularly encounter speakers of different ages, genders, and accents, increasing the amount of variation we face. How listeners understand spoken words as quickly and adeptly as they do despite this variation remains an issue central to linguistic theory. We propose that learned acoustic patterns are mapped simultaneously to linguistic representations and to social representations. In doing so, we illuminate a paradox that results in the literature from, we argue, the focus on representations and the peripheral treatment of word-level phonetic variation. We consider phonetic variation more fully and highlight a growing body of work that is problematic for current theory: Words with different pronunciation variants are recognized equally well in immediate processing tasks, while an atypical, infrequent, but socially-idealized form is remembered better in the long-term. We suggest that the perception of spoken words is socially-weighted, resulting in sparse, but high-resolution clusters of socially-idealized episodes that are robust in immediate processing and are more strongly encoded, predicting memory inequality. Our proposal includes a dual-route approach to speech perception in which listeners map acoustic patterns in speech to linguistic and social representations in tandem. This approach makes novel predictions about the extraction of information from the speech signal, and provides a framework with which we can ask new questions. We propose that language comprehension, broadly, results from the integration of both linguistic and social information.
Perniss, P.M.; Zwitserlood, I.E.P.; Özyürek, A.
The spatial affordances of the visual modality give rise to a high degree of similarity between sign languages in the spatial domain. This stands in contrast to the vast structural and semantic diversity in linguistic encoding of space found in spoken languages. However, the possibility and nature
Blom, W.B.T.; van Dijk, Chantal; Vasic, Nada; van Witteloostuijn, Merel; Avrutin, S.
The purpose of this study was to investigate texting and textese, which is the special register used for sending brief text messages, across children with typical development (TD) and children with Specific Language Impairment (SLI). Using elicitation techniques, texting and spoken language messages
bilingualism in the natural sign language and the dominant spoken language of the society. Students would study not only the common curriculum shared with their hearing peers, but would also study the history of the Deaf culture and Deaf communities in other parts of the world. Thus, the goal of such a programme would ...
Johnston, Trevor; van Roekel, Jane; Schembri, Adam
This study investigates the conventionalization of mouth actions in Australian Sign Language. Signed languages were once thought of as simply manual languages because the hands produce the signs which individually and in groups are the symbolic units most easily equated with the words, phrases and clauses of spoken languages. However, it has long been acknowledged that non-manual activity, such as movements of the body, head and the face play a very important role. In this context, mouth actions that occur while communicating in signed languages have posed a number of questions for linguists: are the silent mouthings of spoken language words simply borrowings from the respective majority community spoken language(s)? Are those mouth actions that are not silent mouthings of spoken words conventionalized linguistic units proper to each signed language, culturally linked semi-conventional gestural units shared by signers with members of the majority speaking community, or even gestures and expressions common to all humans? We use a corpus-based approach to gather evidence of the extent of the use of mouth actions in naturalistic Australian Sign Language (making comparisons with other signed languages where data is available) and the form/meaning pairings that these mouth actions instantiate.
Sherriah, André C; Devonish, Hubert; Thomas, Ewart A C; Creanza, Nicole
Creole languages are formed in conditions where speakers from distinct languages are brought together without a shared first language, typically under the domination of speakers from one of the languages and particularly in the context of the transatlantic slave trade and European colonialism. One such Creole in Suriname, Sranan, developed around the mid-seventeenth century, primarily out of contact between varieties of English from England, spoken by the dominant group, and multiple West African languages. The vast majority of the basic words in Sranan come from the language of the dominant group, English. Here, we compare linguistic features of modern-day Sranan with those of English as spoken in 313 localities across England. By way of testing proposed hypotheses for the origin of English words in Sranan, we find that 80% of the studied features of Sranan can be explained by similarity to regional dialect features at two distinct input locations within England, a cluster of locations near the port of Bristol and another cluster near Essex in eastern England. Our new hypothesis is supported by the geographical distribution of specific regional dialect features, such as post-vocalic rhoticity and word-initial 'h', and by phylogenetic analysis of these features, which shows evidence favouring input from at least two English dialects in the formation of Sranan. In addition to explicating the dialect features most prominent in the linguistic evolution of Sranan, our historical analyses also provide supporting evidence for two distinct hypotheses about the likely geographical origins of the English speakers whose language was an input to Sranan. The emergence as a likely input to Sranan of the speech forms of a cluster near Bristol is consistent with historical records, indicating that most of the indentured servants going to the Americas between 1654 and 1666 were from Bristol and nearby counties, and that of the cluster near Essex is consistent with documents
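The matching exercise described above (what fraction of the creole's features can be traced to at least one candidate English source region) reduces to a simple overlap computation over binary feature vectors. The feature names and values below are invented stand-ins for illustration, not the paper's actual feature set or data.

```python
# Hypothetical binary feature vectors (1 = feature present).
# These names and values are illustrative only.
sranan = {"rhotic": 1, "h_initial": 1, "th_stopping": 1, "ing_velar": 0}
localities = {
    "bristol_area": {"rhotic": 1, "h_initial": 0, "th_stopping": 0, "ing_velar": 0},
    "essex_area":   {"rhotic": 0, "h_initial": 1, "th_stopping": 0, "ing_velar": 0},
}

def explained_fraction(target, sources):
    """Fraction of target features matched by at least one source locality."""
    feats = list(target)
    hits = sum(1 for f in feats
               if any(src[f] == target[f] for src in sources.values()))
    return hits / len(feats)

print(explained_fraction(sranan, localities))  # 0.75
```

With these toy values, three of four features are shared with at least one of the two candidate input regions, analogous to the 80% figure reported for the real feature set.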
Hirschberg, Julia; Manning, Christopher D
Natural language processing employs computational techniques for the purpose of learning, understanding, and producing human language content. Early computational approaches to language research focused on automating the analysis of the linguistic structure of language and developing basic technologies such as machine translation, speech recognition, and speech synthesis. Today's researchers refine and make use of such tools in real-world applications, creating spoken dialogue systems and speech-to-speech translation engines, mining social media for information about health or finance, and identifying sentiment and emotion toward products and services. We describe successes and challenges in this rapidly advancing area. Copyright © 2015, American Association for the Advancement of Science.
Harrison, Michelle Anne
This article examines the current situation of regional language bilingual primary education in Alsace and contends that the regional language presents a special case in the context of France. The language comprises two varieties: Alsatian, which traditionally has been widely spoken, and Standard German, used as the language of reference and…
Posel, Dorrit; Zeller, Jochen
In the post-apartheid era, South Africa has adopted a language policy that gives official status to 11 languages (English, Afrikaans, and nine Bantu languages). However, English has remained the dominant language of business, public office, and education, and some research suggests that English is increasingly being spoken in domestic settings.…
Lagos, Cristián; Espinoza, Marco; Rojas, Darío
In this paper, we analyse the cultural models (or folk theory of language) that the Mapuche intellectual elite have about Mapudungun, the native language of the Mapuche people still spoken today in Chile as the major minority language. Our theoretical frame is folk linguistics and studies of language ideology, but we have also taken an applied…
de Godev, Concepcion B.
The need to underline the relationship between spoken and written language in second language instruction is discussed, and the use of student dialogue journals to accomplish this is encouraged. The first section offers an overview of the whole language approach, which emphasizes integration of language skills. The second section examines briefly…
Baïdak, Nathalie; Balcon, Marie-Pascale; Motiejunaite, Akvile
Linguistic diversity is part of Europe's DNA. It embraces not only the official languages of Member States, but also the regional and/or minority languages spoken for centuries on European territory, as well as the languages brought by the various waves of migrants. The coexistence of this variety of languages constitutes an asset, but it is also…
Roxbury, Tracy; McMahon, Katie; Copland, David A
Evidence for the brain mechanisms recruited when processing concrete versus abstract concepts has been largely derived from studies employing visual stimuli. The tasks and baseline contrasts used have also involved varying degrees of lexical processing. This study investigated the neural basis of the concreteness effect during spoken word recognition and employed a lexical decision task with a novel pseudoword condition. The participants were seventeen healthy young adults (9 females). The stimuli consisted of (a) concrete, high imageability nouns, (b) abstract, low imageability nouns and (c) opaque legal pseudowords presented in a pseudorandomised, event-related design. Activation for the concrete, abstract and pseudoword conditions was analysed using anatomical regions of interest derived from previous findings of concrete and abstract word processing. Behaviourally, lexical decision reaction times for the concrete condition were significantly faster than both the abstract and pseudoword conditions, and the abstract condition was significantly faster than the pseudoword condition. Concrete words elicited significant activity relative to abstract words during spoken word recognition. Significant activity was also elicited by concrete words relative to pseudowords in the left fusiform and left anterior middle temporal gyrus. These findings confirm the involvement of a widely distributed network of brain regions that are activated in response to the spoken recognition of concrete but not abstract words. Our findings are consistent with the proposal that distinct brain regions are engaged as convergence zones and enable the binding of supramodal input.
It has long been speculated whether communication between humans and machines based on natural speech-related cortical activity is possible. Over the past decade, studies have suggested that it is feasible to recognize isolated aspects of speech from neural signals, such as auditory features, phones or one of a few isolated words. However, until now it remained an unsolved challenge to decode continuously spoken speech from the neural substrate associated with speech and language processing. Here, we show for the first time that continuously spoken speech can be decoded into the expressed words from intracranial electrocorticographic (ECoG) recordings. Specifically, we implemented a system, which we call Brain-To-Text, that models single phones, employs techniques from automatic speech recognition (ASR), and thereby transforms brain activity while speaking into the corresponding textual representation. Our results demonstrate that our system achieved word error rates as low as 25% and phone error rates below 50%. Additionally, our approach contributes to the current understanding of the neural basis of continuous speech production by identifying those cortical regions that hold substantial information about individual phones. In conclusion, the Brain-To-Text system described in this paper represents an important step towards human-machine communication based on imagined speech.
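The word error rate figures reported here are the standard ASR metric: the minimum number of word substitutions, insertions, and deletions needed to turn the hypothesis into the reference, divided by the reference length. A self-contained implementation of that computation (the example sentences are invented):

```python
def word_error_rate(reference, hypothesis):
    """WER = (substitutions + insertions + deletions) / reference length,
    computed with standard edit distance over word tokens."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution or match
    return d[-1][-1] / len(ref)

# One inserted word against a 4-word reference: WER = 1/4
print(word_error_rate("the brain decodes speech",
                      "the brain decodes the speech"))  # 0.25
```

Phone error rate is the same computation over phone sequences instead of word tokens.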
The characterization of metonymy as a conceptual tool for guiding inferencing in language has opened a new field of study in cognitive linguistics and pragmatics. To appreciate the value of metonymy for pragmatic inferencing, metonymy should not be viewed as performing only its prototypical referential function. Metonymic mappings are operative in speech acts at the level of reference, predication, proposition and illocution. The aim of this paper is to study the role of metonymy in pragmatic inferencing in spoken discourse in television interviews. Case analyses of authentic utterances classified as illocutionary metonymies following the pragmatic typology of metonymic functions are presented. The inferencing processes are facilitated by metonymic connections existing between domains or subdomains in the same functional domain. It has been widely accepted by cognitive linguists that universal human knowledge and embodiment are essential for the interpretation of metonymy. This analysis points to the role of cultural background knowledge in understanding target meanings. All these aspects of metonymic connections are exploited in complex inferential processes in spoken discourse. In most cases, metaphoric mappings are also a part of utterance interpretation.
Holmberg, Anders; Rijkhoff, Jan
The Germanic branch of Indo-European consists of three main groups (Ruhlen 1987: 327): East Germanic: Gothic, Vandalic, Burgundian (all extinct); North Germanic (or Scandinavian): Runic (extinct), Danish, Swedish, Norwegian, Icelandic, Faroese; West Germanic: German, Yiddish, Luxembourgeois, Dutch, Afrikaans, Frisian, English. Here we will only consider the languages that are currently spoken in geographical Europe. Thus Afrikaans, which is spoken in South Africa, and the extinct languages Gothic, Vandalic, Burgundian, and Runic will not be taken into account (but see e.g. König & van der...
Bonin, Patrick; Chalard, Marylène; Méot, Alain; Fayol, Michel
The influence of nine variables on the latencies to write down or to speak aloud the names of pictures taken from Snodgrass and Vanderwart (1980) was investigated in French adults. The major determinants of both written and spoken picture naming latencies were image variability, image agreement and age of acquisition. To a lesser extent, name agreement was also found to have an impact in both production modes. The implications of the findings for theoretical views of both spoken and written picture naming are discussed.
Tajeddin, Zia; Pezeshki, Maryam
Although politeness markers are frequently used in written and spoken communication, pragmatic studies have not sufficiently explored the instruction of such markers to English as a foreign language (EFL) learners who lack sufficient opportunity to communicate with native speakers to acquire them in the context of use. Ignoring politeness as a…
Hall, Matthew L.; Ferreira, Victor S.; Mayberry, Rachel I.
Psycholinguistic studies of sign language processing provide valuable opportunities to assess whether language phenomena, which are primarily studied in spoken language, are fundamentally shaped by peripheral biology. For example, we know that when given a choice between two syntactically permissible ways to express the same proposition, speakers tend to choose structures that were recently used, a phenomenon known as syntactic priming. Here, we report two experiments testing syntactic primin...
Van Dijk, Rick; Boers, Eveline; Christoffels, Ingrid; Hermans, Daan
The quality of interpretations produced by sign language interpreters was investigated. Twenty-five experienced interpreters were instructed to interpret narratives from (a) spoken Dutch to Sign Language of The Netherlands (SLN), (b) spoken Dutch to Sign Supported Dutch (SSD), and (c) SLN to spoken Dutch. The quality of the interpreted narratives was assessed by 5 certified sign language interpreters who did not participate in the study. Two measures were used to assess interpreting quality: the propositional accuracy of the interpreters' interpretations and a subjective quality measure. The results showed that the interpreted narratives in the SLN-to-Dutch interpreting direction were of lower quality (on both measures) than the interpreted narratives in the Dutch-to-SLN and Dutch-to-SSD directions. Furthermore, interpreters who had begun acquiring SLN when they entered the interpreter training program performed as well in all 3 interpreting directions as interpreters who had acquired SLN from birth.
Discourse markers are a collection of one-word or multiword terms that help language users organize their utterances on the grammatical, semantic, pragmatic and interactional levels. Researchers have characterized some of their roles in written and spoken discourse (Halliday & Hasan, 1976; Schiffrin, 1988, 2001). Following this trend, this paper advances a discussion of discourse markers in contemporary academic spoken English. Through quantitative and qualitative analyses of the use of the discourse marker ‘you know’ in the Michigan Corpus of Academic Spoken English (MICASE), we describe its frequency in this corpus, its collocation on the sentence level and its interactional functions. Grammatically, a concordance analysis shows that you know (like other discourse markers) is linguistically flexible, as it can be placed in any grammatical slot of an utterance. Interactionally, a qualitative analysis indicates that its use in contemporary English goes beyond the uses described in the literature. We argue that besides serving as a hedging strategy (Lakoff, 1975), you know also serves as a powerful face-saving (Goffman, 1955) technique which constructs students’ identities vis-à-vis their professors’ and vice versa.
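Concordance analyses like the one described are typically run as Key Word In Context (KWIC) views over a tokenized corpus. A minimal sketch of extracting such lines for a multiword discourse marker follows; the sample sentence is invented, and reading MICASE itself would require its own corpus loader.

```python
def kwic(corpus, phrase, width=3):
    """Key Word In Context lines for a (possibly multiword) discourse
    marker, with `width` words of context on each side."""
    tokens = corpus.lower().split()
    target = phrase.lower().split()
    n = len(target)
    lines = []
    for i in range(len(tokens) - n + 1):
        if tokens[i:i + n] == target:
            left = " ".join(tokens[max(0, i - width):i])
            right = " ".join(tokens[i + n:i + n + width])
            lines.append(f"{left} [{phrase}] {right}")
    return lines

text = "and you know the results were you know surprisingly clear"
for line in kwic(text, "you know"):
    print(line)
```

Each printed line shows one occurrence with its left and right context, the raw material for judging whether a given token of "you know" is hedging, face-saving, or something else.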
Langus, Alan; Mehler, Jacques; Nespor, Marina
Spoken language is governed by rhythm. Linguistic rhythm is hierarchical and the rhythmic hierarchy partially mimics the prosodic as well as the morpho-syntactic hierarchy of spoken language. It can thus provide learners with cues about the structure of the language they are acquiring. We identify three universal levels of linguistic rhythm - the segmental level, the level of the metrical feet and the phonological phrase level - and discuss why primary lexical stress is not rhythmic. We survey experimental evidence on rhythm perception in young infants and native speakers of various languages to determine the properties of linguistic rhythm that are present at birth, those that mature during the first year of life and those that are shaped by the linguistic environment of language learners. We conclude with a discussion of the major gaps in current knowledge on linguistic rhythm and highlight areas of interest for future research that are most likely to yield significant insights into the nature, the perception, and the usefulness of linguistic rhythm. Copyright © 2016 Elsevier Ltd. All rights reserved.
Baese-Berk, Melissa M; Heffner, Christopher C; Dilley, Laura C; Pitt, Mark A; Morrill, Tuuli H; McAuley, J Devin
Humans unconsciously track a wide array of distributional characteristics in their sensory environment. Recent research in spoken-language processing has demonstrated that the speech rate surrounding a target region within an utterance influences which words, and how many words, listeners hear later in that utterance. On the basis of hypotheses that listeners track timing information in speech over long timescales, we investigated the possibility that the perception of words is sensitive to speech rate over such a timescale (e.g., an extended conversation). Results demonstrated that listeners tracked variation in the overall pace of speech over an extended duration (analogous to that of a conversation that listeners might have outside the lab) and that this global speech rate influenced which words listeners reported hearing. The effects of speech rate became stronger over time. Our findings are consistent with the hypothesis that neural entrainment by speech occurs on multiple timescales, some lasting more than an hour. © The Author(s) 2014.
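One way to operationalize a "global" speech rate over a conversation-length recording is to count syllables in successive time windows. The sketch below assumes syllable onsets have already been detected upstream; the window size and the example timings are invented for illustration.

```python
def global_speech_rate(syllable_times, window=60.0):
    """Syllables per second in consecutive windows over a recording.
    syllable_times: sorted onset times (in seconds) of detected syllables."""
    if not syllable_times:
        return []
    rates = []
    end = syllable_times[-1]
    t = 0.0
    while t < end:
        count = sum(1 for s in syllable_times if t <= s < t + window)
        rates.append((t, count / window))
        t += window
    return rates

# Invented data: 4 syllables/s in the first minute, 2 syllables/s in the second
times = [i * 0.25 for i in range(240)] + [60 + i * 0.5 for i in range(120)]
print(global_speech_rate(times))  # [(0.0, 4.0), (60.0, 2.0)]
```

A listener model tracking this slowly varying statistic, rather than only the local rate within an utterance, is the kind of long-timescale sensitivity the study reports.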
McMurray, Bob; Samelson, Vicki M.; Lee, Sung Hee; Tomblin, J. Bruce
Thirty years of research has uncovered the broad principles that characterize spoken word processing across listeners. However, there have been few systematic investigations of individual differences. Such an investigation could help refine models of word recognition by indicating which processing parameters are likely to vary, and could also have…
Yip, Michael C. W.
The present study examined the role that the positional probability of syllables plays in the recognition of spoken words in continuous Cantonese speech. Because some sounds occur more frequently at the beginning or ending position of Cantonese syllables than others, these kinds of probabilistic information about syllables may cue the locations…
Adesope, Olusola O.; Nesbit, John C.
An animated concept map represents verbal information in a node-link diagram that changes over time. The goals of the experiment were to evaluate the instructional effects of presenting an animated concept map concurrently with semantically equivalent spoken narration. The study used a 2 x 2 factorial design in which an animation factor (animated…
This paper presents two chatbot systems, ALICE and Elizabeth, illustrating the dialogue knowledge representation and pattern matching techniques of each. We discuss the problems which arise when using the Corpus of Spoken Afrikaans (Korpus Gesproke Afrikaans) to retrain the ALICE chatbot system with human ...
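ALICE-style chatbots match user input against stored patterns with wildcards and return the associated template. The following is a minimal sketch of that matching style, not the actual ALICE implementation (real AIML also supports `<that>`, `<topic>`, recursion via `<srai>`, and pattern priorities); the rules shown are invented for illustration.

```python
# Minimal sketch of AIML-style pattern matching, as used by ALICE-type
# chatbots. Patterns are sequences of literal words or '*' (one or more words).

def match(pattern, words):
    """Recursively match a list of input words against a pattern token list."""
    if not pattern:
        return not words
    head, rest = pattern[0], pattern[1:]
    if head == "*":
        # '*' must consume at least one word; try every possible span.
        return any(match(rest, words[i:]) for i in range(1, len(words) + 1))
    return bool(words) and words[0] == head.upper() and match(rest, words[1:])

def respond(rules, sentence):
    """Return the template of the first rule whose pattern matches the input."""
    words = sentence.upper().split()
    for pattern, template in rules:
        if match(pattern.split(), words):
            return template
    return "I have no answer for that."

rules = [
    ("HELLO *", "Hi there!"),
    ("WHAT IS YOUR NAME", "My name is ALICE."),
]

print(respond(rules, "what is your name"))  # My name is ALICE.
print(respond(rules, "hello my friend"))    # Hi there!
```

Retraining such a system on a corpus, as the paper describes for the Korpus Gesproke Afrikaans, amounts to deriving new (pattern, template) pairs from corpus utterances, which is where the problems the authors discuss arise.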
Therefore, this paper examined vowel insertion in the spoken French of 50 Ijebu Undergraduate French Learners (IUFLs) in selected universities in South-West Nigeria. Data for this study were collected by tape-recording participants' production of 30 sentences containing both French vowel and consonant ...
The Black Arts Movement of the 1960s and 1970s, hip hop of the 1980s and early 1990s, and spoken word poetry have each attempted to initiate the dialogical process outlined by Paulo Freire as necessary in overturning oppression. Each art form has done this by critically engaging with the world and questioning dominant systems of power. However,…
Canisius, S.V.M.; van den Bosch, A.; Decadt, B.; Hoste, V.; De Pauw, G.
We describe the development of a Dutch memory-based shallow parser. The availability of large treebanks for Dutch, such as the one provided by the Spoken Dutch Corpus, allows memory-based learners to be trained on examples of shallow parsing taken from the treebank, and act as a shallow parser after
Discusses comparative analysis of spoken and written versions of a narrative to demonstrate that features which have been identified as characterizing oral discourse are also found in written discourse and that the written short story combines syntactic complexity expected in writing with features which create involvement expected in speaking.…
van der Werff, Laurens Bastiaan
This thesis introduces a novel framework for the evaluation of Automatic Speech Recognition (ASR) transcripts in a Spoken Document Retrieval (SDR) context. The basic premise is that ASR transcripts must be evaluated by measuring the impact of noise in the transcripts on the search results of a
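The premise above can be sketched concretely: rather than counting word errors, compare the ranked results a query retrieves from the reference transcripts with those it retrieves from the ASR transcripts. The tiny term-frequency ranker and the overlap-at-k measure below are illustrative assumptions, not the thesis's actual ranking model or metric.

```python
# Sketch of retrieval-based ASR evaluation: does transcript noise change
# what a search over the collection returns?
from collections import Counter

def rank_docs(docs, query):
    """Rank document ids by simple term-frequency overlap with the query."""
    q = query.lower().split()
    scores = {d: sum(Counter(text.lower().split())[t] for t in q)
              for d, text in docs.items()}
    return sorted(scores, key=lambda d: (-scores[d], d))

def overlap_at_k(ref_ranking, asr_ranking, k):
    """Fraction of the top-k reference results preserved by the ASR index."""
    return len(set(ref_ranking[:k]) & set(asr_ranking[:k])) / k

reference = {1: "the spoken document retrieval task", 2: "weather report"}
asr_output = {1: "the spoken document retrieval task", 2: "whether report"}

query = "spoken document"
ref = rank_docs(reference, query)
asr = rank_docs(asr_output, query)
print(overlap_at_k(ref, asr, 1))  # 1.0 - this ASR error did not hurt the query
```

The point of such a measure is that an ASR error only matters to the extent that it perturbs retrieval: here the misrecognition "weather" → "whether" leaves the results for this query unchanged.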
This thesis addressed the spoken production of complex numerals for time and space. The production of complex numerical expressions like those involved in telling time (e.g., 'quarter to four') or producing house numbers (e.g., 'two hundred forty-five') has been almost completely ignored. Yet, adult
Tabossi, Patrizia; Fanari, Rachele; Wolf, Kinou
This study investigates recognition of spoken idioms occurring in neutral contexts. Experiment 1 showed that both predictable and non-predictable idiom meanings are available at string offset. Yet, only predictable idiom meanings are active halfway through a string and remain active after the string's literal conclusion. Experiment 2 showed that…
Revill, Kathleen Pirog; Tanenhaus, Michael K.; Aslin, Richard N.
Reports an error in "Context and spoken word recognition in a novel lexicon" by Kathleen Pirog Revill, Michael K. Tanenhaus and Richard N. Aslin ("Journal of Experimental Psychology: Learning, Memory, and Cognition," 2008[Sep], Vol 34, 1207-1223). Figure 9 was inadvertently duplicated as Figure 10. Figure 9 in the original article was correct.…
Weber, A.C.; Cutler, A.
Six eye-tracking experiments examined lexical competition in non-native spoken-word recognition. Dutch listeners hearing English fixated longer on distractor pictures with names containing vowels that Dutch listeners are likely to confuse with vowels in a target picture name (pencil, given target
Grant, Lynn E.
This article outlines criteria to define a figurative idiom, and then compares the frequent figurative idioms identified in two sources of spoken American English (academic and contemporary) to their frequency in spoken British English. This is done by searching the spoken part of the British National Corpus (BNC), to see whether they are frequent…
Tao, Hongyin; McCarthy, Michael J.
Reexamines the notion of non-restrictive relative clauses (NRRCs) in light of spoken corpus evidence, based on analysis of 692 occurrences of non-restrictive "which"-clauses in British and American spoken English data. Reviews traditional conceptions of NRRCs and recent work on the broader notion of subordination in spoken grammar.…
Despite the large body of research into formulaic language and fluency, there seems to be a lack of empirical evidence for how collocations, often considered a subset of formulaic language, might impact on fluency. To address this problem, this dissertation examined to what extent correlations might exist between overall language proficiency, collocational competence and spoken fluency in non-native English-speaking university lecturers. The data came from 15 20-minute mini-lectures recorded between 2009 and 2011 for an English oral proficiency test for lecturers employed at the University... fluency measures calculated for each lecturer. Initial findings across all lecturers showed no correlation between collocational competence and either overall proficiency or fluency. However, further analysis of lecturers by department revealed that possible correlations were hidden by variations...
Pragmatic assessment is usually complex, long and sophisticated, especially for professionals who lack specific linguistic training yet interact with impaired speakers. Our aim was to design a quick method of assessment that provides a general evaluation of the pragmatic effectiveness of neurologically affected speakers. This first filter allows us to decide whether a detailed analysis of the altered categories should follow. Our starting point was the PerLA (perception, language and aphasia) profile of pragmatic assessment, designed for the comprehensive analysis of conversational data in clinical linguistics; this was then converted into a quick questionnaire. A quick protocol of pragmatic assessment is proposed, and the results found in a group of children with attention deficit hyperactivity disorder are discussed.
Ganjavi, Shadi; Georgiou, Panayiotis G; Narayanan, Shrikanth
... (The DARPA Babylon Program; Narayanan, 2003). In this paper, we discuss transcription systems needed for automated spoken language processing applications in Persian, which uses the Arabic script for writing...
Christine E Potter
Five- and six-year-old children (n=160) participated in three studies designed to explore language discrimination. After an initial exposure period (during which children heard either an unfamiliar language, a familiar language, or music), children performed an ABX discrimination task involving two unfamiliar languages that were either similar (Spanish vs. Italian) or different (Spanish vs. Mandarin). On each trial, participants heard two sentences spoken by two individuals, each spoken in an unfamiliar language. The pair was followed by a third sentence spoken in one of the two languages. Participants were asked to judge whether the third sentence was spoken by the first speaker or the second speaker. Across studies, both the difficulty of the discrimination contrast and the relation between exposure and test materials affected children’s performance. In particular, language discrimination performance was facilitated by an initial exposure to a different unfamiliar language, suggesting that experience can help tune children’s attention to the relevant features of novel languages.
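The ABX trial structure described above has a simple logic: the correct answer on each trial is whichever of A or B shares the language of the test item X, and accuracy is the proportion of trials answered correctly. The sketch below illustrates that scoring mechanics with invented trial data; it is not the study's materials or analysis.

```python
# Sketch of ABX discrimination-task scoring: each trial presents A (one
# language), B (another language), then X, which matches A or B.
import random

def run_abx_trials(trials, choose):
    """Score a list of (lang_a, lang_b, lang_x) trials; `choose` is the
    participant's decision function returning 'A' or 'B'."""
    correct = 0
    for lang_a, lang_b, lang_x in trials:
        answer = "A" if lang_x == lang_a else "B"
        if choose(lang_a, lang_b, lang_x) == answer:
            correct += 1
    return correct / len(trials)

# A perfect responder, to show the scoring mechanics.
perfect = lambda a, b, x: "A" if x == a else "B"
# A random guesser, as a chance baseline (~0.5 over many trials).
guesser = lambda a, b, x: random.choice(["A", "B"])

trials = [("Spanish", "Italian", "Spanish"), ("Spanish", "Mandarin", "Mandarin")]
print(run_abx_trials(trials, perfect))  # 1.0
```

Comparing accuracy on similar pairs (Spanish vs. Italian) against different pairs (Spanish vs. Mandarin), and across exposure conditions, is what lets the study separate contrast difficulty from exposure effects.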
Although in many respects sign languages have a similar structure to that of spoken languages, the different modalities in which both types of languages are expressed cause differences in structure as well. One of the most striking differences between spoken and sign languages is the influence of the interface between grammar and PF on the surface form of utterances. Spoken language words and phrases are in general characterized by sequential strings of sounds, morphemes and words, while in sign languages we find that many phonemes, morphemes, and even words are expressed simultaneously. A linguistic model should be able to account for the structures that occur in both spoken and sign languages. In this paper, I will discuss the morphological/morphosyntactic structure of signs in Nederlandse Gebarentaal (Sign Language of the Netherlands, henceforth NGT), with special focus on the components ‘place of articulation’ and ‘handshape’. I will focus on their multiple functions in the grammar of NGT and argue that the framework of Distributed Morphology (DM), which accounts for word formation in spoken languages, is also suited to account for the formation of structures in sign languages. First I will introduce the phonological and morphological structure of NGT signs. Then, I will briefly outline the major characteristics of the DM framework. Finally, I will account for signs that have the same surface form but have a different morphological structure by means of that framework.