Spoken language understanding (SLU) is an emerging field at the intersection of speech and language processing, investigating human/machine and human/human communication by leveraging technologies from signal processing, pattern recognition, machine learning and artificial intelligence. SLU systems are designed to extract the meaning from speech utterances, and their applications are vast, from voice search in mobile devices to meeting summarization, attracting interest from both commercial and academic sectors. Both human/machine and human/human communications can benefit from the application of SLU, using...
Vukotic , Vedran; Raymond , Christian; Gravier , Guillaume
Architectures of Recurrent Neural Networks (RNN) have recently become a very popular choice for Spoken Language Understanding (SLU) problems; however, they represent a large family of different architectures that can furthermore be combined to form more complex neural networks. In this work, we compare different recurrent networks, such as simple Recurrent Neural Networks (RNN), Long Short-Term Memory (LSTM) networks, Gated Recurrent Units (GRU) and their bidirectional versions,...
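The gating that distinguishes GRU-style networks from simple RNNs can be illustrated with a minimal GRU cell in NumPy (a sketch of the standard update/reset-gate equations, not the implementation compared in the paper; all dimensions and weights below are arbitrary):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_cell(x, h, params):
    """One GRU step: gates decide how much of the old state to keep."""
    Wz, Uz, bz, Wr, Ur, br, Wh, Uh, bh = params
    z = sigmoid(Wz @ x + Uz @ h + bz)            # update gate
    r = sigmoid(Wr @ x + Ur @ h + br)            # reset gate
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h) + bh)  # candidate state
    return (1 - z) * h + z * h_tilde             # interpolate old/new state

rng = np.random.default_rng(0)
d_in, d_h = 4, 3
# Nine small random parameter arrays: (Wz, Uz, bz, Wr, Ur, br, Wh, Uh, bh)
params = [rng.standard_normal(s) * 0.1
          for s in [(d_h, d_in), (d_h, d_h), (d_h,)] * 3]
h = np.zeros(d_h)
for x in rng.standard_normal((5, d_in)):  # run over a 5-step input sequence
    h = gru_cell(x, h, params)
print(h.shape)  # (3,)
```

Because the new state is a convex combination of the old state and a tanh-bounded candidate, the hidden activations stay bounded, which is one reason gated cells train more stably than simple RNNs.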
Moore, Robert C.; Cohen, Michael H.
Under this effort, SRI has developed spoken-language technology for interactive problem solving, featuring real-time performance for up to several thousand word vocabularies, high semantic accuracy, habitability within the domain, and robustness to many sources of variability. Although the technology is suitable for many applications, efforts to date have focused on developing an Air Travel Information System (ATIS) prototype application. SRI's ATIS system has been evaluated in four ARPA benchmark evaluations, and has consistently been at or near the top in performance. These achievements are the result of SRI's technical progress in speech recognition, natural-language processing, and speech and natural-language integration.
Crowe, Kathryn; McLeod, Sharynne
The purpose of this research was to investigate factors that influence professionals' guidance of parents of children with hearing loss regarding spoken language multilingualism and spoken language choice. Sixteen professionals who provide services to children and young people with hearing loss completed an online survey, rating the importance of…
Moeller, Aleidine J.; Theiler, Janine
Communicative approaches to teaching language have emphasized the centrality of oral proficiency in the language acquisition process, but research investigating oral proficiency has been surprisingly limited, yielding an incomplete understanding of spoken language development. This study investigated the development of spoken language at the high…
Pon-Barry, Heather Roberta
The field of spoken language processing is concerned with creating computer programs that can understand human speech and produce human-like speech. Regarding the problem of understanding human speech, there is currently growing interest in moving beyond speech recognition (the task of transcribing the words in an audio stream) and towards machine listening—interpreting the full spectrum of information in an audio stream. One part of machine listening, the problem that this thesis focuses on, ...
This article addresses key issues and considerations for teachers wanting to incorporate spoken grammar activities into their own teaching and also focuses on six common features of spoken grammar, with practical activities and suggestions for teaching them in the language classroom. The hope is that this discussion of spoken grammar and its place…
Houston, K. Todd; Perigoe, Christina B.
Determining the most effective methods and techniques to facilitate the spoken language development of individuals with hearing loss has been a focus of practitioners for centuries. Due to modern advances in hearing technology, earlier identification of hearing loss, and immediate enrollment in early intervention, children with hearing loss are…
A key problem in spoken language identification (LID) is to design effective representations which are specific to language information. For example, in recent years, representations based on both phonotactic and acoustic features have proven their effectiveness for LID. Although advances in machine learning have led to significant improvements, LID performance is still lacking, especially for short duration speech utterances. With the hypothesis that language information is weak and represented only latently in speech, and is largely dependent on the statistical properties of the speech content, existing representations may be insufficient. Furthermore, they may be susceptible to the variations caused by different speakers, specific content of the speech segments, and background noise. To address this, we propose using Deep Bottleneck Features (DBF) for spoken LID, motivated by the success of Deep Neural Networks (DNN) in speech recognition. We show that DBFs can form a low-dimensional compact representation of the original inputs with a powerful descriptive and discriminative capability. To evaluate the effectiveness of this, we design two acoustic models, termed DBF-TV and parallel DBF-TV (PDBF-TV), using a DBF based i-vector representation for each speech utterance. Results on NIST language recognition evaluation 2009 (LRE09) show significant improvements over state-of-the-art systems. By fusing the output of phonotactic and acoustic approaches, we achieve an EER of 1.08%, 1.89% and 7.01% for 30 s, 10 s and 3 s test utterances respectively. Furthermore, various DBF configurations have been extensively evaluated, and an optimal system proposed.
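The score-level fusion of phonotactic and acoustic subsystems mentioned at the end can be sketched as a weighted combination of per-language scores (the weight and all scores below are made-up illustrations; the paper's actual fusion and calibration details are not specified here):

```python
def fuse(phonotactic, acoustic, w=0.6):
    """Linearly combine per-language scores from two LID subsystems."""
    return {lang: w * phonotactic[lang] + (1 - w) * acoustic[lang]
            for lang in phonotactic}

# Hypothetical per-language log-likelihood scores (higher is better).
phonotactic = {"en": -1.2, "zh": -0.7, "es": -2.1}
acoustic = {"en": -0.9, "zh": -1.5, "es": -1.8}

fused = fuse(phonotactic, acoustic, w=0.6)
best = max(fused, key=fused.get)  # language with the highest fused score
print(best)  # zh
```

In practice the fusion weight would be calibrated on held-out data; the point is simply that two complementary subsystems can disagree and the fused score arbitrates between them.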
Apr 12, 2018 ... languages and can be used for the purposes of spoken language identification. Keywords: SLID ... branch of linguistics that studies the sound structure of human language ... work in the area of Indian language identification has not ... English ... speech database has been collected over tele...
The objective of this effort was to develop a prototype, hand-held or body-mounted spoken language translator to assist military and law enforcement personnel in interacting with non-English-speaking people...
Bates, Madeleine; Ellard, Dan; Peterson, Pat; Shaked, Varda
... In an effort to demonstrate the relevance of SIS technology to real-world military applications, BBN has undertaken the task of providing a spoken language interface to DART, a system for military...
Jelena Kuvač Kraljević
Interest in spoken-language corpora has increased over the past two decades, leading to the development of new corpora and the discovery of new facets of spoken language. These types of corpora represent the most comprehensive data source about the language of ordinary speakers. Such corpora are based on spontaneous, unscripted speech defined by a variety of styles, registers and dialects. The aim of this paper is to present the Croatian Adult Spoken Language Corpus (HrAL), its structure and its possible applications in different linguistic subfields. HrAL was built by sampling spontaneous conversations among 617 speakers from all Croatian counties, and it comprises more than 250,000 tokens and more than 100,000 types. Data were collected during three time slots: from 2010 to 2012, from 2014 to 2015 and during 2016. HrAL is today available within TalkBank, a large database of spoken-language corpora covering different languages (https://talkbank.org), in the Conversational Analyses corpora within the subsection titled Conversational Banks. Data were transcribed, coded and segmented using the transcription format Codes for Human Analysis of Transcripts (CHAT) and the Computerised Language Analysis (CLAN) suite of programmes within the TalkBank toolkit. Speech streams were segmented into communication units (C-units) based on syntactic criteria. Most transcripts were linked to their source audios. TalkBank is publicly available free of charge, i.e. all data stored in it can be shared by the wider community in accordance with the basic rules of the TalkBank. HrAL provides information about spoken grammar and lexicon, discourse skills, error production and productivity in general. It may be useful for sociolinguistic research and studies of synchronic language changes in Croatian.
Gràcia, Marta; Vega, Fàtima; Galván-Bovaira, Maria José
Broadly speaking, the teaching of spoken language in Spanish schools has not been approached in a systematic way. Changes in school practices are needed in order to allow all children to become competent speakers and to understand and construct oral texts that are appropriate in different contexts and for different audiences both inside and…
Curtiss, S; de Bode, S; Mathern, G W
We analyzed postsurgery linguistic outcomes of 43 hemispherectomy patients operated on at UCLA. We rated spoken language (Spoken Language Rank, SLR) on a scale from 0 (no language) to 6 (mature grammar) and examined the effects of side of resection/damage, age at surgery/seizure onset, seizure control postsurgery, and etiology on language development. Etiology was defined as developmental (cortical dysplasia and prenatal stroke) and acquired pathology (Rasmussen's encephalitis and postnatal stroke). We found that clinical variables were predictive of language outcomes only when they were considered within distinct etiology groups. Specifically, children with developmental etiologies had lower SLRs than those with acquired pathologies (p =.0006); age factors correlated positively with higher SLRs only for children with acquired etiologies (p =.0006); right-sided resections led to higher SLRs only for the acquired group (p =.0008); and postsurgery seizure control correlated positively with SLR only for those with developmental etiologies (p =.0047). We argue that the variables considered are not independent predictors of spoken language outcome posthemispherectomy but should be viewed instead as characteristics of etiology. Copyright 2001 Elsevier Science.
Lee, Sungjin; Noh, Hyungjong; Lee, Jonghoon; Lee, Kyusong; Lee, Gary Geunbae
Although there have been enormous investments in English education all around the world, the style of English instruction has changed little. Considering the shortcomings of the current teaching-learning methodology, we have been investigating advanced computer-assisted language learning (CALL) systems. This paper aims at summarizing a set of POSTECH approaches including theories, technologies, systems, and field studies and providing relevant pointers. On top of the state-of-the-art technologies of spoken dialog systems, a variety of adaptations have been applied to overcome problems caused by numerous errors and variations naturally produced by non-native speakers. Furthermore, a number of methods have been developed for generating educational feedback that helps learners become proficient. Integrating these efforts resulted in intelligent educational robots — Mero and Engkey — and a virtual 3D language learning game, Pomy. To verify the effects of our approaches on students' communicative abilities, we conducted a field study at an elementary school in Korea. The results showed that our CALL approaches can be enjoyable and fruitful activities for students. Although the results of this study bring us a step closer to understanding computer-based education, more studies are needed to consolidate the findings.
Williams, Joshua T.; Darcy, Isabelle; Newman, Sharlene D.
Understanding how language modality (i.e., signed vs. spoken) affects second language outcomes in hearing adults is important both theoretically and pedagogically, as it can determine the specificity of second language (L2) theory and inform how best to teach a language that uses a new modality. The present study investigated which…
The Prosodic Parallelism hypothesis claims that adjacent prosodic categories prefer identical branching of internal adjacent constituents. According to Wiese and Speyer (2015), this preference implies that feet contained in the same phonological phrase display either binary or unary branching, but not different types of branching. The seemingly free schwa-zero alternations at the end of some words in German make it possible to test this hypothesis. The hypothesis was successfully tested by conducting a corpus study which used large-scale bodies of written German. As some open questions remain, and as it is unclear whether Prosodic Parallelism is valid for the spoken modality as well, the present study extends this inquiry to spoken German. As in the previous study, the results of a corpus analysis recruiting a variety of linguistic constructions are presented. The Prosodic Parallelism hypothesis can be demonstrated to be valid for spoken German as well as for written German. The paper thus contributes to the question of whether prosodic preferences are similar between the spoken and written modes of a language. Some consequences of the results for the production of language are discussed.
Peterson, Nathaniel R.; Pisoni, David B.; Miyamoto, Richard T.
Cochlear implants (CIs) process sounds electronically and then transmit electric stimulation to the cochlea of individuals with sensorineural deafness, restoring some sensation of auditory perception. Many congenitally deaf CI recipients achieve a high degree of accuracy in speech perception and develop near-normal language skills. Post-lingually deafened implant recipients often regain the ability to understand and use spoken language with or without the aid of visual input (i.e. lip reading...
This article re-examines the notion of spoken fluency. Fluent and fluency are terms commonly used in everyday, lay language, and fluency, or lack of it, has social consequences. The article reviews the main approaches to understanding and measuring spoken fluency, suggests that spoken fluency is best understood as an interactive achievement, and offers the metaphor of ‘confluence’ to replace the term fluency. Many measures of spoken fluency are internal and monologue-based, whereas evidence...
Jon-Ruben eVan Rhijn
Speech requires precise motor control and rapid sequencing of highly complex vocal musculature. Despite its complexity, most people produce spoken language effortlessly. This is due to activity in distributed neuronal circuitry, including cortico-striato-thalamic loops, that controls speech-motor output. Understanding the neuro-genetic mechanisms that encode these pathways will shed light on how humans can effortlessly and innately use spoken language and could elucidate what goes wrong in speech-language disorders. FOXP2 was the first single gene identified to cause speech and language disorder. Individuals with FOXP2 mutations display a severe speech deficit that also includes receptive and expressive language impairments. The underlying neuro-molecular mechanisms controlled by FOXP2, which will give insight into our capacity for speech-motor control, are only beginning to be unraveled. Recently FOXP2 was found to regulate genes involved in retinoic acid signaling and to modify the cellular response to retinoic acid, a key regulator of brain development. Herein we explore the evidence that FOXP2 and retinoic acid signaling function in the same pathways. We present evidence at molecular, cellular and behavioral levels that suggests an interplay between FOXP2 and retinoic acid that may be important for fine motor control and speech-motor output. We propose that retinoic acid signaling is an exciting new angle from which to investigate how neurogenetic mechanisms can contribute to the (spoken) language-ready brain.
Gooskens, Charlotte; van Heuven, Vincent J.; van Bezooijen, Renee; Pacilly, Jos J. A.
The most straightforward way to explain why Danes understand spoken Swedish relatively better than Swedes understand spoken Danish would be that spoken Danish is intrinsically a more difficult language to understand than spoken Swedish. We discuss circumstantial evidence suggesting that Danish is...
Emmorey, Karen; McCullough, Stephen; Mehta, Sonya; Grabowski, Thomas J.
To investigate the impact of sensory-motor systems on the neural organization for language, we conducted an H2(15)O-PET study of sign and spoken word production (picture-naming) and an fMRI study of sign and audio-visual spoken language comprehension (detection of a semantically anomalous sentence) with hearing bilinguals who are native users of American Sign Language (ASL) and English. Directly contrasting speech and sign production revealed greater activation in bilateral parietal cortex for signing, while speaking resulted in greater activation in bilateral superior temporal cortex (STC) and right frontal cortex, likely reflecting auditory feedback control. Surprisingly, the language production contrast revealed a relative increase in activation in bilateral occipital cortex for speaking. We speculate that greater activation in visual cortex for speaking may actually reflect cortical attenuation when signing, which functions to distinguish self-produced from externally generated visual input. Directly contrasting speech and sign comprehension revealed greater activation in bilateral STC for speech and greater activation in bilateral occipital-temporal cortex for sign. Sign comprehension, like sign production, engaged bilateral parietal cortex to a greater extent than spoken language. We hypothesize that posterior parietal activation in part reflects processing related to spatial classifier constructions in ASL and that anterior parietal activation may reflect covert imitation that functions as a predictive model during sign comprehension. The conjunction analysis for comprehension revealed that both speech and sign bilaterally engaged the inferior frontal gyrus (with more extensive activation on the left) and the superior temporal sulcus, suggesting an invariant bilateral perisylvian language system. We conclude that surface level differences between sign and spoken languages should not be dismissed and are critical for understanding the neurobiology of language.
Office of English Language Acquisition, US Department of Education, 2015
The Office of English Language Acquisition (OELA) has synthesized key data on English learners (ELs) into two-page PDF sheets, by topic, with graphics, plus key contacts. The topics for this report on Asian/Pacific Islander languages spoken by English Learners (ELs) include: (1) Top 10 Most Common Asian/Pacific Islander Languages Spoken Among ELs:…
Carson, J; Walker, L A; Sanders, B J; Jones, J E; Weddell, J A; Tomlin, A M
The purpose of this study was to assess dmft, the number of decayed, missing (due to caries), and/or filled primary teeth, of English-speaking and non-English-speaking patients of a hospital based pediatric dental clinic under the age of 72 months to determine if native language is a risk marker for tooth decay. Records from an outpatient dental clinic which met the inclusion criteria were reviewed. Patient demographics and dmft score were recorded, and the patients were separated into three groups by the native language spoken by their parents: English, Spanish and all other languages. A total of 419 charts were assessed: 253 English-speaking, 126 Spanish-speaking, and 40 other native languages. After accounting for patient characteristics, dmft was significantly higher for the other language group than for the English-speaking group (p < 0.05). Those patients under 72 months of age whose parents' native language is not English or Spanish have the highest risk for increased dmft when compared to English and Spanish speaking patients. Providers should consider taking additional time to educate patients and their parents, in their native language, on the importance of routine dental care and oral hygiene.
Barberà, Gemma; Zwets, Martine
In both signed and spoken languages, pointing serves to direct an addressee's attention to a particular entity. This entity may be either present or absent in the physical context of the conversation. In this article we focus on pointing directed to nonspeaker/nonaddressee referents in Sign Language of the Netherlands (Nederlandse Gebarentaal,…
Bradham, Tamala S.; Fonnesbeck, Christopher; Toll, Alice; Hecht, Barbara F.
Purpose: The purpose of the Listening and Spoken Language Data Repository (LSL-DR) was to address a critical need for a systemwide outcome data-monitoring program for the development of listening and spoken language skills in highly specialized educational programs for children with hearing loss highlighted in Goal 3b of the 2007 Joint Committee…
Li, Bei; Soli, Sigfrid D; Zheng, Yun; Li, Gang; Meng, Zhaoli
The purpose of this study was to evaluate early spoken language development in young Mandarin-speaking children during the first 24 months after cochlear implantation, as measured by receptive and expressive vocabulary growth rates. Growth rates were compared with those of normally hearing children and with growth rates for English-speaking children with cochlear implants. Receptive and expressive vocabularies were measured with the simplified short form (SSF) version of the Mandarin Communicative Development Inventory (MCDI) in a sample of 112 pediatric implant recipients at baseline, 3, 6, 12, and 24 months after implantation. Implant ages ranged from 1 to 5 years. Scores were expressed in terms of normal equivalent ages, allowing normalized vocabulary growth rates to be determined. Scores for English-speaking children were re-expressed in these terms, allowing direct comparisons of Mandarin and English early spoken language development. Vocabulary growth rates during the first 12 months after implantation were similar to those for normally hearing children less than 16 months of age. Comparisons with growth rates for normally hearing children 16-30 months of age showed that the youngest implant age group (1-2 years) had an average growth rate of 0.68 that of normally hearing children; the middle implant age group (2-3 years) had an average growth rate of 0.65; and the oldest implant age group (>3 years) had an average growth rate of 0.56, significantly less than the other two rates. Growth rates for English-speaking children with cochlear implants were 0.68 in the youngest group, 0.54 in the middle group, and 0.57 in the oldest group. Growth rates in the middle implant age groups for the two languages differed significantly. The SSF version of the MCDI is suitable for assessment of Mandarin language development during the first 24 months after cochlear implantation. Effects of implant age and duration of implantation can be compared directly across...
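A normalized growth rate of the kind reported above can be understood as the slope of normal-equivalent age against time since implantation, where 1.0 means growth at the pace of normally hearing peers. A minimal sketch (the assessment schedule matches the abstract, but the scores are invented for illustration, not the study's data):

```python
import numpy as np

def normalized_growth_rate(months_post_implant, normal_equivalent_age):
    """Least-squares slope of normal-equivalent age (months) vs. time (months)."""
    slope, _ = np.polyfit(months_post_implant, normal_equivalent_age, 1)
    return slope

t = np.array([0, 3, 6, 12, 24], dtype=float)  # assessment points (months)
neq = 6.0 + 0.68 * t                          # illustrative scores only
print(round(normalized_growth_rate(t, neq), 2))  # 0.68
```

A fitted slope of 0.68 would mean the child gains about 0.68 months of normal-equivalent vocabulary age per calendar month, matching the rate reported for the youngest implant group.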
Moats, L C
Reading research supports the necessity for directly teaching concepts about linguistic structure to beginning readers and to students with reading and spelling difficulties. In this study, experienced teachers of reading, language arts, and special education were tested to determine if they have the requisite awareness of language elements (e.g., phonemes, morphemes) and of how these elements are represented in writing (e.g., knowledge of sound-symbol correspondences). The results were surprisingly poor, indicating that even motivated and experienced teachers typically understand too little about spoken and written language structure to be able to provide sufficient instruction in these areas. The utility of language structure knowledge for instructional planning, for assessment of student progress, and for remediation of literacy problems is discussed. The teachers participating in the study subsequently took a course focusing on phonemic awareness training, spoken-written language relationships, and careful analysis of spelling and reading behavior in children. At the end of the course, the teachers judged this information to be essential for teaching and advised that it become a prerequisite for certification. Recommendations for requirements and content of teacher education programs are presented.
Matluck, Joseph H.; Mace-Matluck, Betty J.
This paper discusses the sociolinguistic problems inherent in multilingual testing, and the accompanying dangers of cultural bias in either the visuals or the language used in a given test. The first section discusses English-speaking Americans' perception of foreign speakers in terms of: (1) physical features; (2) speech, specifically vocabulary,…
As with biological systems, spoken languages are strikingly robust against perturbations. This paper shows that languages achieve robustness in a way that is highly similar to many biological systems. For example, speech sounds are encoded via multiple acoustically diverse, temporally distributed and functionally redundant cues, characteristics that bear similarities to what biologists call "degeneracy". Speech is furthermore adequately characterized by neutrality, with many different tongue configurations leading to similar acoustic outputs, and different acoustic variants understood as the same by recipients. This highlights the presence of a large neutral network of acoustic neighbors for every speech sound. Such neutrality ensures that a steady backdrop of variation can be maintained without impeding communication, assuring that there is "fodder" for subsequent evolution. Thus, studying linguistic robustness is not only important for understanding how linguistic systems maintain their functioning upon the background of noise, but also for understanding the preconditions for language evolution. © 2014 WILEY Periodicals, Inc.
Hoog, B.E. de; Langereis, M.C.; Weerdenburg, M. van; Keuning, J.; Knoors, H.; Verhoeven, L.
BACKGROUND: Large variability in individual spoken language outcomes remains a persistent finding in the group of children with cochlear implants (CIs), particularly in their grammatical development. AIMS: In the present study, we examined the extent of delay in lexical and morphosyntactic spoken
Fitzpatrick, Elizabeth M; Hamel, Candyce; Stevens, Adrienne; Pratt, Misty; Moher, David; Doucet, Suzanne P; Neuss, Deirdre; Bernstein, Anita; Na, Eunjung
Permanent hearing loss affects 1 to 3 per 1000 children and interferes with typical communication development. Early detection through newborn hearing screening and hearing technology provide most children with the option of spoken language acquisition. However, no consensus exists on optimal interventions for spoken language development. To conduct a systematic review of the effectiveness of early sign and oral language intervention compared with oral language intervention only for children with permanent hearing loss. An a priori protocol was developed. Electronic databases (eg, Medline, Embase, CINAHL) from 1995 to June 2013 and gray literature sources were searched. Studies in English and French were included. Two reviewers screened potentially relevant articles. Outcomes of interest were measures of auditory, vocabulary, language, and speech production skills. All data collection and risk of bias assessments were completed and then verified by a second person. Grades of Recommendation, Assessment, Development, and Evaluation (GRADE) was used to judge the strength of evidence. Eleven cohort studies met inclusion criteria, of which 8 included only children with severe to profound hearing loss with cochlear implants. Language development was the most frequently reported outcome. Other reported outcomes included speech and speech perception. Several measures and metrics were reported across studies, and descriptions of interventions were sometimes unclear. Very limited, and hence insufficient, high-quality evidence exists to determine whether sign language in combination with oral language is more effective than oral language therapy alone. More research is needed to supplement the evidence base. Copyright © 2016 by the American Academy of Pediatrics.
Salverda, Anne Pier; Altmann, Gerry T. M.
Participants saw a small number of objects in a visual display and performed a visual detection or visual-discrimination task in the context of task-irrelevant spoken distractors. In each experiment, a visual cue was presented 400 ms after the onset of a spoken word. In experiments 1 and 2, the cue was an isoluminant color change and participants…
Laveson, J. I.; Silver, C. A.
Assessment of the merits of a limited spoken language (56 words) computer in a simulated air traffic control (ATC) task. An airport zone approximately 60 miles in diameter with a traffic flow simulation ranging from single-engine to commercial jet aircraft provided the workload for the controllers. This research determined that, under the circumstances of the experiments carried out, the use of a spoken-language computer would not improve the controller performance.
Current views about language are dominated by the idea of arbitrary connections between linguistic form and meaning. However, if we look beyond the more familiar Indo-European languages and also include both spoken and signed language modalities, we find that motivated, iconic form-meaning mappings are, in fact, pervasive in language. In this paper, we review the different types of iconic mappings that characterize languages in both modalities, including the predominantly visually iconic mappings in signed languages. Having shown that iconic mappings are present across languages, we then proceed to review evidence showing that language users (signers and speakers) exploit iconicity in language processing and language acquisition. While not discounting the presence and importance of arbitrariness in language, we put forward the idea that iconicity also needs to be recognized as a general property of language, which may serve the function of reducing the gap between linguistic form and conceptual representation to allow the language system to hook up to motor and perceptual experience.
Boons, Tinne; Brokx, Jan P L; Dhooge, Ingeborg; Frijns, Johan H M; Peeraer, Louis; Vermeulen, Anneke; Wouters, Jan; van Wieringen, Astrid
A regression model accounted for 52% of the variance in receptive language scores and 58% of the variance in expressive language scores. On the basis of language test scores of this large group of children, an LQ of 0.60 or lower was considered a risk criterion for problematic language development compared with other deaf children using CIs. Children attaining LQs below 0.60 should be monitored more closely and perhaps their rehabilitation programs should be reconsidered. Improved language outcomes were related to implantation under the age of two, contralateral stimulation, monolingualism, sufficient involvement of the parents, and oral communication by the parents. The presence of an additional learning disability had a negative influence on language development. Understanding these causes of variation can help clinicians and parents to create the best possible circumstances for children with CIs to acquire language.
Language understanding is essential for intelligent information processing. Processing of language itself involves configuration element analysis, syntactic analysis (parsing), and semantic analysis. They are not carried out in isolation. These are described for the Japanese language and their usage in understanding-systems is examined. 30 references.
Loucas, Tom; Riches, Nick; Baird, Gillian; Pickles, Andrew; Simonoff, Emily; Chandler, Susie; Charman, Tony
Spoken word recognition, during gating, appears intact in specific language impairment (SLI). This study used gating to investigate the process in adolescents with autism spectrum disorders plus language impairment (ALI). Adolescents with ALI, SLI, and typical language development (TLD), matched on nonverbal IQ listened to gated words that varied…
Marshall, C. R.; Jones, A.; Fastelli, A.; Atkinson, J.; Botting, N.; Morgan, G.
Background: Deafness has an adverse impact on children's ability to acquire spoken languages. Signed languages offer a more accessible input for deaf children, but because the vast majority are born to hearing parents who do not sign, their early exposure to sign language is limited. Deaf children as a whole are therefore at high risk of language…
Diao, Yali; Chandler, Paul; Sweller, John
Based on cognitive load theory, this study investigated the effect of simultaneous written presentations on comprehension of spoken English as a foreign language. Learners' language comprehension was compared while they used 3 instructional formats: listening with auditory materials only, listening with a full, written script, and listening with simultaneous subtitled text. Listening with the presence of a script and subtitles led to better understanding of the scripted and subtitled passage but poorer performance on a subsequent auditory passage than listening with the auditory materials only. These findings indicated that where the intention was learning to listen, the use of a full script or subtitles had detrimental effects on the construction and automation of listening comprehension schemas.
To assess the effects of data-driven instruction (DDI) on spoken language outcomes of children with cochlear implants and hearing aids. Retrospective, matched-pairs comparison of post-treatment speech/language data of children who did and did not receive DDI. Private, spoken-language preschool for children with hearing loss. Eleven matched pairs of children with cochlear implants who attended the same spoken language preschool. Groups were matched for age of hearing device fitting, time in the program, degree of predevice fitting hearing loss, sex, and age at testing. Daily informal language samples were collected and analyzed over a 2-year period, per preschool protocol. Annual informal and formal spoken language assessments in articulation, vocabulary, and omnibus language were administered at the end of three time intervals: baseline, end of year one, and end of year two. The primary outcome measures were total raw score performance of spontaneous utterance sentence types and syntax element use as measured by the Teacher Assessment of Spoken Language (TASL). In addition, standardized assessments (the Clinical Evaluation of Language Fundamentals--Preschool Version 2 (CELF-P2), the Expressive One-Word Picture Vocabulary Test (EOWPVT), the Receptive One-Word Picture Vocabulary Test (ROWPVT), and the Goldman-Fristoe Test of Articulation 2 (GFTA2)) were also administered and compared with the control group. The DDI group demonstrated significantly higher raw scores on the TASL each year of the study. The DDI group also achieved statistically significant higher scores for total language on the CELF-P2 and expressive vocabulary on the EOWPVT, but not for articulation or receptive vocabulary. Post-hoc assessment revealed that 78% of the students in the DDI group achieved scores in the average range compared with 59% in the control group. The preliminary results of this study support further investigation of DDI, to determine whether this method can consistently
Stevens, Catherine J.; Keller, Peter E.; Tyler, Michael D.
An experiment investigated the effect of tonal language background on discrimination of pitch contour in short spoken and musical items. It was hypothesized that extensive exposure to a tonal language attunes perception of pitch contour. Accuracy and reaction times of adult participants from tonal (Thai) and non-tonal (Australian English) language…
de Hoog, Brigitte E; Langereis, Margreet C; van Weerdenburg, Marjolijn; Keuning, Jos; Knoors, Harry; Verhoeven, Ludo
Large variability in individual spoken language outcomes remains a persistent finding in the group of children with cochlear implants (CIs), particularly in their grammatical development. In the present study, we examined the extent of delay in lexical and morphosyntactic spoken language levels of children with CIs as compared to those of a normative sample of age-matched children with normal hearing. Furthermore, the predictive value of auditory and verbal memory factors in the spoken language performance of implanted children was analyzed. Thirty-nine profoundly deaf children with CIs were assessed using a test battery including measures of lexical, grammatical, auditory and verbal memory tests. Furthermore, child-related demographic characteristics were taken into account. The majority of the children with CIs did not reach age-equivalent lexical and morphosyntactic language skills. Multiple linear regression analyses revealed that lexical spoken language performance in children with CIs was best predicted by age at testing, phoneme perception, and auditory word closure. The morphosyntactic language outcomes of the CI group were best predicted by lexicon, auditory word closure, and auditory memory for words. Qualitatively good speech perception skills appear to be crucial for lexical and grammatical development in children with CIs. Furthermore, strongly developed vocabulary skills and verbal memory abilities predict morphosyntactic language skills. Copyright © 2016 Elsevier Ltd. All rights reserved.
Guo, Xuan; Yu, Qi; Alm, Cecilia Ovesdotter; Calvelli, Cara; Pelz, Jeff B; Shi, Pengcheng; Haake, Anne R
Extracting useful visual clues from medical images allowing accurate diagnoses requires physicians' domain knowledge acquired through years of systematic study and clinical training. This is especially true in the dermatology domain, a medical specialty that requires physicians to have image inspection experience. Automating or at least aiding such efforts requires understanding physicians' reasoning processes and their use of domain knowledge. Mining physicians' references to medical concepts in narratives during image-based diagnosis of a disease is an interesting research topic that can help reveal experts' reasoning processes. It can also be a useful resource to assist with design of information technologies for image use and for image case-based medical education systems. We collected data for analyzing physicians' diagnostic reasoning processes by conducting an experiment that recorded their spoken descriptions during inspection of dermatology images. In this paper we focus on the benefit of physicians' spoken descriptions and provide a general workflow for mining medical domain knowledge based on linguistic data from these narratives. The challenge of a medical image case can influence the accuracy of the diagnosis as well as how physicians pursue the diagnostic process. Accordingly, we define two lexical metrics for physicians' narratives--lexical consensus score and top N relatedness score--and evaluate their usefulness by assessing the diagnostic challenge levels of corresponding medical images. We also report on clustering medical images based on anchor concepts obtained from physicians' medical term usage. These analyses are based on physicians' spoken narratives that have been preprocessed by incorporating the Unified Medical Language System for detecting medical concepts. The image rankings based on lexical consensus score and on top 1 relatedness score are well correlated with those based on challenge levels (Spearman correlation>0.5 and Kendall
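The ranking comparison described above (image cases ranked by lexical consensus score versus by expert-assessed challenge level) amounts to a Spearman rank correlation. The sketch below is a minimal, stdlib-only illustration of that comparison, not the paper's actual pipeline; the consensus scores and challenge levels are invented for the example.

```python
# Spearman rank correlation between two rankings of the same image cases.
# Spearman's rho is the Pearson correlation of the two rank vectors;
# ties receive average ranks, matching the standard definition.

def ranks(values):
    """Assign average 1-based ranks to a list of scores (ties averaged)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1  # extend over a run of tied values
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman's rho = Pearson correlation of the rank vectors."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx) ** 0.5
    vy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (vx * vy)

# Invented toy data: five image cases.
consensus = [0.9, 0.7, 0.8, 0.3, 0.5]   # lexical consensus scores
challenge = [1, 3, 2, 5, 4]             # expert challenge level, 1 = easiest
print(round(spearman(consensus, challenge), 2))  # -1.0: high consensus tracks low challenge
```

A strongly negative (or, with the scale flipped, positive) rho of magnitude above 0.5, as the paper reports, indicates that the lexical metric orders the cases much as the experts do.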
Peterson, Nathaniel R; Pisoni, David B; Miyamoto, Richard T
Cochlear implants (CIs) process sounds electronically and then transmit electric stimulation to the cochlea of individuals with sensorineural deafness, restoring some sensation of auditory perception. Many congenitally deaf CI recipients achieve a high degree of accuracy in speech perception and develop near-normal language skills. Post-lingually deafened implant recipients often regain the ability to understand and use spoken language with or without the aid of visual input (i.e. lip reading). However, there is wide variation in individual outcomes following cochlear implantation, and some CI recipients never develop useable speech and oral language skills. The causes of this enormous variation in outcomes are only partly understood at the present time. The variables most strongly associated with language outcomes are age at implantation and mode of communication in rehabilitation. Thus, some of the more important factors determining success of cochlear implantation are broadly related to neural plasticity that appears to be transiently present in deaf individuals. In this article we review the expected outcomes of cochlear implantation, potential predictors of those outcomes, the basic science regarding critical and sensitive periods, and several new research directions in the field of cochlear implantation.
Huettig, Falk; Brouwer, Susanne
It is now well established that anticipation of upcoming input is a key characteristic of spoken language comprehension. It has also frequently been observed that literacy influences spoken language processing. Here, we investigated whether anticipatory spoken language processing is related to individuals' word reading abilities. Dutch adults with dyslexia and a control group participated in two eye-tracking experiments. Experiment 1 was conducted to assess whether adults with dyslexia show the typical language-mediated eye gaze patterns. Eye movements of both adults with and without dyslexia closely replicated earlier research: spoken language is used to direct attention to relevant objects in the environment in a closely time-locked manner. In Experiment 2, participants received instructions (e.g., 'Kijk naar de(COM) afgebeelde piano(COM)', look at the displayed piano) while viewing four objects. Articles (Dutch 'het' or 'de') were gender marked such that the article agreed in gender only with the target, and thus, participants could use gender information from the article to predict the target object. The adults with dyslexia anticipated the target objects but much later than the controls. Moreover, participants' word reading scores correlated positively with their anticipatory eye movements. We conclude by discussing the mechanisms by which reading abilities may influence predictive language processing. Copyright © 2015 John Wiley & Sons, Ltd.
Spoken text differs from written text in its context dependency, turn-taking organization, and dynamic structure. EFL learners, however, sometimes find it difficult to produce the typical characteristics of spoken language, particularly in casual talk. When asked to conduct a conversation, some of them tend to be script-based, which is considered unnatural. Using the theory of Thornbury (2005), this paper aims to analyze characteristics of spoken language in casual conversation, covering spontaneity, interactivity, interpersonality, and coherence. The study used discourse analysis to reveal these four features in the turns and moves of three casual conversations. The findings indicate that not all sub-features were used in the conversations: the spontaneity features were used 132 times, the interactivity features 1081 times, the interpersonality features 257 times, and the coherence (negotiation) features 526 times. The results also show that some participants dominantly produce some sub-features naturally, and vice versa. This finding is expected to be useful as a model of how spoken interaction should be carried out. More importantly, it could raise English teachers' and lecturers' awareness in teaching features of spoken language, so that students can develop their communicative competence as native speakers of English do.
Hampton, L. H.; Kaiser, A. P.
Background: Although spoken-language deficits are not core to an autism spectrum disorder (ASD) diagnosis, many children with ASD do present with delays in this area. Previous meta-analyses have assessed the effects of intervention on reducing autism symptomatology, but have not determined if intervention improves spoken language. This analysis…
Mani, Nivedita; Huettig, Falk
Despite the efficiency with which language users typically process spoken language, a growing body of research finds substantial individual differences in both the speed and accuracy of spoken language processing potentially attributable to participants' literacy skills. Against this background, the current study took a look at the role of word reading skill in listeners' anticipation of upcoming spoken language input in children at the cusp of learning to read; if reading skills affect predictive language processing, then children at this stage of literacy acquisition should be most susceptible to the effects of reading skills on spoken language processing. We tested 8-year-olds on their prediction of upcoming spoken language input in an eye-tracking task. Although children, like in previous studies to date, were successfully able to anticipate upcoming spoken language input, there was a strong positive correlation between children's word reading skills (but not their pseudo-word reading and meta-phonological awareness or their spoken word recognition skills) and their prediction skills. We suggest that these findings are most compatible with the notion that the process of learning orthographic representations during reading acquisition sharpens pre-existing lexical representations, which in turn also supports anticipation of upcoming spoken words. Copyright © 2014 Elsevier Inc. All rights reserved.
Tur-Kaspa, Hana; Dromi, Esther
The present study reports a detailed analysis of written and spoken language samples of Hebrew-speaking children aged 11-13 years who are deaf. It focuses on the description of various grammatical deviations in the two modalities. Participants were 13 students with hearing impairments (HI) attending special classrooms integrated into two elementary schools in Tel Aviv, Israel, and 9 students with normal hearing (NH) in regular classes in these same schools. Spoken and written language samples were collected from all participants using the same five preplanned elicitation probes. Students with HI were found to display significantly more grammatical deviations than their NH peers in both their spoken and written language samples. Most importantly, between-modality differences were noted. The participants with HI exhibited significantly more grammatical deviations in their written language samples than in their spoken samples. However, the distribution of grammatical deviations across categories was similar in the two modalities. The most common grammatical deviations in order of their frequency were failure to supply obligatory morphological markers, failure to mark grammatical agreement, and the omission of a major syntactic constituent in a sentence. Word order violations were rarely recorded in the Hebrew samples. Performance differences in the two modalities encourage clinicians and teachers to facilitate target linguistic forms in diverse communication contexts. Furthermore, the identification of linguistic targets for intervention must be based on the unique grammatical structure of the target language.
The paper explores similarities and differences in the strategies of structuring information at sentence level in spoken and written language, respectively. In particular, it is concerned with the position of the rheme in the sentence in the two modalities of language, and with the application and correlation of the end-focus and end-weight principles. The assumption is that while there is a general tendency in both written and spoken language to place the focus in or close to the final position, owing to the limitations imposed by short-term memory capacity (and possibly by other factors), for the sake of easy processability it may occasionally be more felicitous in spoken language to place the rhematic element in the initial position, or at least close to the beginning of the sentence. The paper aims to identify differences in the function of selected grammatical structures in written and spoken language, respectively, and to point out circumstances under which initial focus is a convenient alternative to the usual end-focus principle.
Zimmer, Patricia Moore
Describes the author's experiences directing a play translated and acted in Korean. Notes that she had to get familiar with the sound of the language spoken fluently, to see how an actor's thought is discerned when the verbal language is not understood. Concludes that so much of understanding and communication unfolds in ways other than with…
Evans, Julia L; Gillam, Ronald B; Montgomery, James W
This study examined the influence of cognitive factors on spoken word recognition in children with developmental language disorder (DLD) and typically developing (TD) children. Participants included 234 children (aged 7;0-11;11 years;months), 117 with DLD and 117 TD children, propensity matched for age, gender, socioeconomic status, and maternal education. Children completed a series of standardized assessment measures, a forward gating task, a rapid automatic naming task, and a series of tasks designed to examine cognitive factors hypothesized to influence spoken word recognition, including phonological working memory, updating, attention shifting, and interference inhibition. Spoken word recognition at both initial and final accept gate points did not differ for children with DLD and TD controls after controlling for target word knowledge in both groups. The two groups also did not differ on measures of updating, attention switching, and interference inhibition. Despite the lack of difference on these measures, for children with DLD, attention shifting and interference inhibition were significant predictors of spoken word recognition, whereas updating and receptive vocabulary were significant predictors of speed of spoken word recognition for the children in the TD group. Contrary to expectations, after controlling for target word knowledge, spoken word recognition did not differ for children with DLD and TD controls; however, the cognitive processing factors that influenced children's ability to recognize the target word in a stream of speech differed qualitatively for children with and without DLD.
The aim of this study is to describe the application of the Communicative Language Teaching (CLT) method to the learning of spoken recount. The study examines qualitative data, describing phenomena that occurred in the classroom. The data consist of the students' behavior and responses in learning spoken recount using the CLT method. The subjects were the 34 students of class X of SMA Negeri 1 Kuaro. Observations and interviews were conducted in order to collect data on teaching spoken recount through three activities (presentation, role-play, and carrying out procedures). Among the findings is that CLT improves students' speaking ability in learning recount. Based on the improvement charts, it is concluded that the students' grammar, vocabulary, pronunciation, fluency, and performance all improved, meaning that their spoken recount performance improved. Had the presentation been placed at the end of the sequence of activities, the students' spoken recount performance would have been even better. In conclusion, the implementation of the CLT method with its three practices contributes to improving students' speaking ability in learning recount, and the CLT method moreover leads them to have the courage to construct meaningful communication with confidence. Keywords: Communicative Language Teaching (CLT), recount, speaking, student responses
One fundamental difference between spoken and written language has to do with the "linearity" of speaking in time, in that the temporal structure of speaking is inherently the outcome of an interactive process between speaker and listener. But despite the status of "linearity" as one of Saussure's fundamental principles, in practice little more…
von Tetzchner, S; Øvreeide, K D; Jørgensen, K K; Ormhaug, B M; Oxholm, B; Warme, R
To describe a graphic-mode communication intervention involving a girl with intellectual impairment and autism who did not develop comprehension of spoken language. The aim was to teach graphic-mode vocabulary that reflected her interests, preferences, and the activities and routines of her daily life, by providing sufficient cues to the meanings of the graphic representations so that she would not need to comprehend spoken instructions. An individual case study design was selected, including the use of written records, participant observation, and registration of the girl's graphic vocabulary and use of graphic signs and other communicative expressions. While the girl's comprehension (and hence use) of spoken language remained lacking over a 3-year period, she acquired an active use of over 80 photographs and pictograms. The girl was able to cope better with the cognitive and attentional requirements of graphic communication than with those of spoken language and manual signs, which had been the focus of earlier interventions. Her achievements demonstrate that it is possible for communication-impaired children to learn to use an augmentative and alternative communication system without speech comprehension, provided the intervention utilizes functional strategies and non-language cues to the meaning of the graphic representations that are taught.
Nicholas, Johanna G.; Geers, Ann E.
Purpose: The major purpose of this study was to provide information about expected spoken language skills of preschool-age children who are deaf and who use a cochlear implant. A goal was to provide "benchmarks" against which those skills could be compared, for a given age at implantation. We also examined whether parent-completed…
Singh, Jitendra K.; Misra, Girishwar; De Raad, Boele
The psycho-lexical approach is extended to Hindi, a major language spoken in India. From both the dictionary and from Hindi novels, a huge set of personality descriptors was put together, ultimately reduced to a manageable set of 295 trait terms. Both self and peer ratings were collected on those
McDuffie, Andrea; Machalicek, Wendy; Bullard, Lauren; Nelson, Sarah; Mello, Melissa; Tempero-Feigles, Robyn; Castignetti, Nancy; Abbeduto, Leonard
Using a single case design, a parent-mediated spoken-language intervention was delivered to three mothers and their school-aged sons with fragile X syndrome, the leading inherited cause of intellectual disability. The intervention was embedded in the context of shared storytelling using wordless picture books and targeted three empirically derived…
The authors investigate the addition of a new language, for which limited resources are available, to a phonotactic language identification system. Two classes of approaches are studied: in the first class, only existing phonetic recognizers...
Soleymani, Zahra; Keramati, Nasrin; Rohani, Farzaneh; Jalaei, Shohre
To determine verbal intelligence and spoken language of children with phenylketonuria and to study the effect of age at diagnosis and phenylalanine plasma level on these abilities. Cross-sectional. Children with phenylketonuria were recruited from pediatric hospitals in 2012. Normal control subjects were recruited from kindergartens in Tehran. 30 phenylketonuria and 42 control subjects aged 4-6.5 years. Skills were compared between 3 phenylketonuria groups categorized by age at diagnosis/treatment, and between the phenylketonuria and control groups. Scores on the Wechsler Preschool and Primary Scale of Intelligence for verbal and total intelligence, and the Test of Language Development-Primary, third edition, for spoken language, listening, speaking, semantics, syntax, and organization. The performance of control subjects was significantly better than that of early-treated phenylketonuria subjects for all composite quotients from the Test of Language Development and for verbal intelligence.
Fitzpatrick, Elizabeth M; Stevens, Adrienne; Garritty, Chantelle; Moher, David
Permanent childhood hearing loss affects 1 to 3 per 1000 children and frequently disrupts typical spoken language acquisition. Early identification of hearing loss through universal newborn hearing screening and the use of new hearing technologies including cochlear implants make spoken language an option for most children. However, there is no consensus on what constitutes optimal interventions for children when spoken language is the desired outcome. Intervention and educational approaches ranging from oral language only to oral language combined with various forms of sign language have evolved. Parents are therefore faced with important decisions in the first months of their child's life. This article presents the protocol for a systematic review of the effects of using sign language in combination with oral language intervention on spoken language acquisition. Studies addressing early intervention will be selected in which therapy involving oral language intervention and any form of sign language or sign support is used. Comparison groups will include children in early oral language intervention programs without sign support. The primary outcomes of interest to be examined include all measures of auditory, vocabulary, language, speech production, and speech intelligibility skills. We will include randomized controlled trials, controlled clinical trials, and other quasi-experimental designs that include comparator groups as well as prospective and retrospective cohort studies. Case-control, cross-sectional, case series, and case studies will be excluded. Several electronic databases will be searched (for example, MEDLINE, EMBASE, CINAHL, PsycINFO) as well as grey literature and key websites. We anticipate that a narrative synthesis of the evidence will be required. We will carry out meta-analysis for outcomes if clinical similarity, quantity and quality permit quantitative pooling of data. We will conduct subgroup analyses if possible according to severity
Frijns, Johan; Peeraer, Louis; van Wieringen, Astrid; Dhooge, Ingeborg; Vermeulen, Anneke; Brokx, Jan; Boons, Tinne; Wouters, Jan
Objectives: Although deaf children with cochlear implants (CIs) are able to develop good language skills, the large variability in outcomes remains a significant concern. The first aim of this study was to evaluate language skills in children with CIs to establish benchmarks. The second aim was to
Kovelman, Ioulia; Norton, Elizabeth S; Christodoulou, Joanna A; Gaab, Nadine; Lieberman, Daniel A; Triantafyllou, Christina; Wolf, Maryanne; Whitfield-Gabrieli, Susan; Gabrieli, John D E
Phonological awareness, knowledge that speech is composed of syllables and phonemes, is critical for learning to read. Phonological awareness precedes and predicts successful transition from language to literacy, and weakness in phonological awareness is a leading cause of dyslexia, but the brain basis of phonological awareness for spoken language in children is unknown. We used functional magnetic resonance imaging to identify the neural correlates of phonological awareness using an auditory word-rhyming task in children who were typical readers or who had dyslexia (ages 7-13) and a younger group of kindergarteners (ages 5-6). Typically developing children, but not children with dyslexia, recruited left dorsolateral prefrontal cortex (DLPFC) when making explicit phonological judgments. Kindergarteners, who were matched to the older children with dyslexia on standardized tests of phonological awareness, also recruited left DLPFC. Left DLPFC may play a critical role in the development of phonological awareness for spoken language critical for reading and in the etiology of dyslexia.
Hulme, Charles; Snowling, Margaret J
We review current knowledge about reading development and the origins of difficulties in learning to read. We distinguish between the processes involved in learning to decode print, and the processes involved in reading for meaning (reading comprehension). At a cognitive level, difficulties in learning to read appear to be predominantly caused by deficits in underlying oral language skills. The development of decoding skills appears to depend critically upon phonological language skills, and variations in phoneme awareness, letter-sound knowledge and rapid automatized naming each appear to be causally related to problems in learning to read. Reading comprehension difficulties in contrast appear to be critically dependent on a range of oral language comprehension skills (including vocabulary knowledge and grammatical, morphological and pragmatic skills).
le Fevre Jakobsen, Bjarne
The tempo of Danish television news broadcasts has changed markedly over the past 40 years, while the language has remained essentially conservative, and remains so today. The development in the tempo of the broadcasts has gone through a number of phases, from a newsreader in a rigid structure...
Jednoróg, Katarzyna; Bola, Łukasz; Mostowski, Piotr; Szwed, Marcin; Boguszewski, Paweł M; Marchewka, Artur; Rutkowski, Paweł
In several countries, natural sign languages were considered inadequate for education. Instead, new sign-supported systems were created, based on the belief that spoken/written language is grammatically superior. One such system, called SJM (system językowo-migowy), preserves the grammatical and lexical structure of spoken Polish and since the 1960s has been extensively employed in schools and on TV. Nevertheless, the Deaf community avoids using SJM for everyday communication, its preferred language being PJM (polski język migowy), a natural sign language, structurally and grammatically independent of spoken Polish and featuring classifier constructions (CCs). Here, for the first time, we use fMRI to compare the neural bases of natural vs. devised communication systems. Deaf signers were presented with three types of signed sentences (SJM and PJM with/without CCs). Consistent with previous findings, PJM with CCs, compared to either SJM or PJM without CCs, recruited the parietal lobes. The reverse comparison revealed activation in the anterior temporal lobes, suggesting increased semantic combinatory processes in lexical sign comprehension. Finally, PJM compared with SJM engaged the left posterior superior temporal gyrus and anterior temporal lobe, areas crucial for sentence-level speech comprehension. We suggest that activity in these two areas reflects greater processing efficiency for naturally evolved sign language. Copyright © 2015 Elsevier Ltd. All rights reserved.
Spoken English may sometimes pose a peculiar problem in the reception and decoding of auditory signals, which can lead to mishearings. Arising from erroneous perception, from a failure to understand the communication, and from an involuntary mental replacement of a certain element or structure by a more familiar one, these mistakes are most frequently encountered when listening to songs, where the melodic line can facilitate confusion through its somewhat altered intonation, producing the so-called mondegreens. Still, instances can be met in all domains of verbal communication, as shown by several examples noticed during classes of English as a foreign language (EFL) taught to non-philological students. Production and perception of language depend on a series of elements that influence the encoding and decoding of the message. These filters belong to both psychological and semantic categories, and they can interfere with the accuracy of emission and reception. Poor understanding of a notion or concept, combined with greater familiarity with a similar-sounding one, results in unconsciously picking the structure that is better known. This means 'hearing' something other than what was said, something closer to the receiver's preoccupations and background knowledge than the original structure or word. Some mishearings are particularly relevant to teaching English for Specific Purposes (ESP), such as those encountered in classes of Business English or English for Law. Though not very likely to occur too often, given an intuitively felt inaccuracy (the terms are known by the users to need to be more specialized), such examples are still not negligible. We therefore consider that they deserve closer attention, as they may become quite relevant in the global context of increasing workforce migration and the spread of multinational companies.
Kasyidi, Fatan; Puji Lestari, Dessi
One of the important aspects of human-to-human communication is understanding the emotion of each party. Interactions between humans and computers continue to develop, especially affective interaction, in which emotion recognition is an important component. This paper presents our extended work on emotion recognition in spoken Indonesian, identifying four main emotion classes: happy, sad, angry, and contentment, using a combination of acoustic/prosodic features and lexical features. We constructed an emotional speech corpus from Indonesian television talk shows, where the situations are as close as possible to natural ones. After constructing the corpus, the acoustic/prosodic and lexical features were extracted to train the emotion model. We employed several machine learning algorithms, namely Support Vector Machine (SVM), Naive Bayes, and Random Forest, to obtain the best model. Experiments on the test data show that the best model achieves an F-measure of 0.447 using only the acoustic/prosodic features, and an F-measure of 0.488 using both acoustic/prosodic and lexical features, to recognize the four emotion classes with an SVM using an RBF kernel.
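The F-measure reported above is the harmonic mean of precision and recall; for a multi-class task such as the four emotion classes, it is typically macro-averaged over the classes. A minimal sketch, using hypothetical per-class counts rather than the paper's data:

```python
# Macro-averaged F-measure from per-class true-positive (tp), false-positive
# (fp), and false-negative (fn) counts. The counts below are made up purely
# for illustration; they are not the study's results.
def f1(tp, fp, fn):
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical per-class counts for the four emotion classes.
counts = {"happy": (40, 20, 10), "sad": (30, 15, 25),
          "angry": (35, 10, 15), "contentment": (20, 30, 30)}

# Macro average: each class contributes equally regardless of its size.
macro_f = sum(f1(*c) for c in counts.values()) / len(counts)
```

A micro-averaged variant (pooling all counts before computing precision and recall) would instead weight classes by frequency, which matters when the corpus is imbalanced.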
Williams, Joshua T.; Newman, Sharlene D.
A large body of literature has characterized unimodal monolingual and bilingual lexicons and how neighborhood density affects lexical access; however, there have been relatively few studies that generalize these findings to bimodal (M2) second language (L2) learners of sign languages. The goal of the current study was to investigate parallel…
This article discusses the attitudes and motivations of two Saudi children learning Japanese as a foreign language (henceforth JFL), a language rarely spoken in their country. Studies regarding children's motivation for learning, in informal settings, foreign languages that are not widely used in their contexts are scarce. The aim of the study…
Fragadasilva, Francisco Jose; Saotome, Osamu; Deoliveira, Carlos Alberto
An automatic speech-to-text conversion system, suited to unlimited vocabulary, is presented. The basic acoustic units considered are the allophones of the phonemes of the Portuguese language spoken in Brazil (PLB). The input to the system is a phonetic sequence produced by a preceding step of isolated-word recognition of slowly spoken speech. In a first stage, the system eliminates phonetic elements that do not belong to PLB. Using knowledge sources such as phonetics, phonology, orthography, and a PLB-specific lexicon, the output is a sequence of written words, ordered by a probabilistic criterion, that constitutes the set of graphemic possibilities for that input sequence. Pronunciation differences across some regions of Brazil are considered, but only those that cause differences in phonological transcription, because those at the phonetic level are absorbed during the transformation to the phonological level. In the final stage, all candidate written words are analyzed from an orthographic and grammatical point of view to eliminate the incorrect ones.
Constantinescu-Sharpe, Gabriella; Phillips, Rebecca L; Davis, Aleisha; Dornan, Dimity; Hogan, Anthony
Social inclusion is a common focus of listening and spoken language (LSL) early intervention for children with hearing loss. This exploratory study compared the social inclusion of young children with hearing loss educated using a listening and spoken language approach with population data. A framework for understanding the scope of social inclusion is presented in the Background. This framework guided the use of a shortened, modified version of the Longitudinal Study of Australian Children (LSAC) to measure two of the five facets of social inclusion ('education' and 'interacting with society and fulfilling social roles'). The survey was completed by parents of children with hearing loss aged 4-5 years who were educated using a LSL approach (n = 78; 37% response rate). These responses were compared to those obtained for children with typical hearing in the LSAC dataset (n = 3265). Analyses revealed that most children with hearing loss had outcomes comparable to those of children with typical hearing on the 'education' and 'interacting with society and fulfilling social roles' facets of social inclusion. These exploratory findings are positive and warrant further investigation across all five facets of the framework to identify which factors influence social inclusion.
Sarant, Julia Z; Holt, Colleen M; Dowell, Richard C; Rickards, Field W; Blamey, Peter J
This article documented spoken language outcomes for preschool children with hearing loss and examined the relationships between language abilities and characteristics of children such as degree of hearing loss, cognitive abilities, age at entry to early intervention, and parent involvement in children's intervention programs. Participants were evaluated using a combination of the Child Development Inventory, the Peabody Picture Vocabulary Test, and the Preschool Clinical Evaluation of Language Fundamentals depending on their age at the time of assessment. Maternal education, cognitive ability, and family involvement were also measured. Over half of the children who participated in this study had poor language outcomes overall. No significant differences were found in language outcomes on any of the measures for children who were diagnosed early and those diagnosed later. Multiple regression analyses showed that family participation, degree of hearing loss, and cognitive ability significantly predicted language outcomes and together accounted for almost 60% of the variance in scores. This article highlights the importance of family participation in intervention programs to enable children to achieve optimal language outcomes. Further work may clarify the effects of early diagnosis on language outcomes for preschool children.
Hirschmüller, Sarah; Egloff, Boris
How do individuals emotionally cope with the imminent real-world salience of mortality? DeWall and Baumeister as well as Kashdan and colleagues previously provided support that an increased use of positive emotion words serves as a way to protect and defend against mortality salience of one's own contemplated death. Although these studies provide important insights into the psychological dynamics of mortality salience, it remains an open question how individuals cope with the immense threat of mortality prior to their imminent actual death. In the present research, we therefore analyzed positivity in the final words spoken immediately before execution by 407 death row inmates in Texas. By using computerized quantitative text analysis as an objective measure of emotional language use, our results showed that the final words contained a significantly higher proportion of positive than negative emotion words. This emotional positivity was significantly higher than (a) positive emotion word usage base rates in spoken and written materials and (b) positive emotional language use with regard to contemplated death and attempted or actual suicide. Additional analyses showed that emotional positivity in final statements was associated with a greater frequency of language use that was indicative of self-references, social orientation, and present-oriented time focus as well as with fewer instances of cognitive-processing, past-oriented, and death-related word use. Taken together, our findings offer new insights into how individuals cope with the imminent real-world salience of mortality.
Pyykkönen, Pirita; Hyönä, Jukka; van Gompel, Roger P G
This study used the visual world eye-tracking method to investigate activation of general world knowledge related to gender-stereotypical role names in online spoken language comprehension in Finnish. The results showed that listeners activated gender stereotypes elaboratively in story contexts where this information was not needed to build coherence. Furthermore, listeners made additional inferences based on gender stereotypes to revise an already established coherence relation. Both results are consistent with mental models theory (e.g., Garnham, 2001). They are harder to explain by the minimalist account (McKoon & Ratcliff, 1992) which suggests that people limit inferences to those needed to establish coherence in discourse.
Nussbaum, Debra; Waddy-Smith, Bettie; Doyle, Jane
There is a core body of knowledge, experience, and skills integral to facilitating auditory, speech, and spoken language development when working with the general population of students who are deaf and hard of hearing. There are additional issues, strategies, and challenges inherent in speech habilitation/rehabilitation practices essential to the population of deaf and hard of hearing students who also use sign language. This article will highlight philosophical and practical considerations related to practices used to facilitate spoken language development and associated literacy skills for children and adolescents who sign. It will discuss considerations for planning and implementing practices that acknowledge and utilize a student's abilities in sign language, and address how to link these skills to developing and using spoken language. Included will be considerations for children from early childhood through high school with a broad range of auditory access, language, and communication characteristics.
Lenti Boero, Daniela
Building a theory on extant species, as Ackermann et al. do, is a useful contribution to the field of language evolution. Here, I add another living model that might be of interest: human language ontogeny in the first year of life. A better knowledge of this phase might help in understanding two more topics among the "several building blocks of a comprehensive theory of the evolution of spoken language" indicated in their conclusion by Ackermann et al., that is, the foundation of the co-evolution of linguistic motor skills with the auditory skills underlying speech perception, and the possible phylogenetic interactions of protospeech production with referential capabilities.
Willems, Roel M; Casasanto, Daniel
Do people use sensori-motor cortices to understand language? Here we review neurocognitive studies of language comprehension in healthy adults and evaluate their possible contributions to theories of language in the brain. We start by sketching the minimal predictions that an embodied theory of language understanding makes for empirical research, and then survey studies that have been offered as evidence for embodied semantic representations. We explore four debated issues: first, does activation of sensori-motor cortices during action language understanding imply that action semantics relies on mirror neurons? Second, what is the evidence that activity in sensori-motor cortices plays a functional role in understanding language? Third, to what extent do responses in perceptual and motor areas depend on the linguistic and extra-linguistic context? And finally, can embodied theories accommodate language about abstract concepts? Based on the available evidence, we conclude that sensori-motor cortices are activated during a variety of language comprehension tasks, for both concrete and abstract language. Yet, this activity depends on the context in which perception and action words are encountered. Although modality-specific cortical activity is not a sine qua non of language processing even for language about perception and action, sensori-motor regions of the brain appear to make functional contributions to the construction of meaning, and should therefore be incorporated into models of the neurocognitive architecture of language.
Deng, Zhizhou; Chandrasekaran, Bharath; Wang, Suiping; Wong, Patrick C.M.
A major challenge in language learning studies is to identify objective, pre-training predictors of success. Variation in the low-frequency fluctuations (LFFs) of spontaneous brain activity measured by resting-state functional magnetic resonance imaging (RS-fMRI) has been found to reflect individual differences in cognitive measures. In the present study, we aimed to investigate the extent to which initial spontaneous brain activity is related to individual differences in spoken language learning. We acquired RS-fMRI data and subsequently trained participants on a sound-to-word learning paradigm in which they learned to use foreign pitch patterns (from Mandarin Chinese) to signal word meaning. We performed amplitude of spontaneous low-frequency fluctuation (ALFF) analysis, graph theory-based analysis, and independent component analysis (ICA) to identify functional components of the LFFs in the resting state. First, we examined the ALFF as a regional measure and showed that regional ALFFs in the left superior temporal gyrus were positively correlated with learning performance, whereas ALFFs in the default mode network (DMN) regions were negatively correlated with learning performance. Furthermore, the graph theory-based analysis indicated that the degree and local efficiency of the left superior temporal gyrus were positively correlated with learning performance. Finally, the default mode network and several task-positive resting-state networks (RSNs) were identified via the ICA. The "competition" (i.e., negative correlation) between the DMN and the dorsal attention network was negatively correlated with learning performance. Our results demonstrate that a) spontaneous brain activity can predict future language learning outcome without prior hypotheses (e.g., selection of regions of interest, ROIs) and b) both regional dynamics and network-level interactions in the resting brain can account for individual differences in future spoken language learning success.
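Graph-theoretic measures such as node degree are typically computed on an adjacency matrix obtained by thresholding region-to-region correlations of the resting-state time series. A minimal sketch with synthetic time series (hypothetical data and threshold, not the study's pipeline):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic resting-state time series: 120 time points x 6 brain regions.
ts = rng.normal(size=(120, 6))
ts[:, 0] += 0.8 * ts[:, 1]  # make regions 0 and 1 functionally coupled

# Functional connectivity: correlation between every pair of regions.
corr = np.corrcoef(ts, rowvar=False)

# Binarize by an (arbitrary, illustrative) threshold and drop self-loops.
adj = (np.abs(corr) > 0.3) & ~np.eye(6, dtype=bool)

# Node degree: number of supra-threshold connections per region.
degree = adj.sum(axis=0)
```

Local efficiency extends this idea by averaging inverse shortest-path lengths within each node's neighborhood; libraries such as NetworkX provide it directly once the adjacency matrix is built.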
Pa, Judy; Wilson, Stephen M; Pickell, Herbert; Bellugi, Ursula; Hickok, Gregory
Despite decades of research, there is still disagreement regarding the nature of the information that is maintained in linguistic short-term memory (STM). Some authors argue for abstract phonological codes, whereas others argue for more general sensory traces. We assess these possibilities by investigating linguistic STM in two distinct sensory-motor modalities, spoken and signed language. Hearing bilingual participants (native in English and American Sign Language) performed equivalent STM tasks in both languages during functional magnetic resonance imaging. Distinct, sensory-specific activations were seen during the maintenance phase of the task for spoken versus signed language. These regions have been previously shown to respond to nonlinguistic sensory stimulation, suggesting that linguistic STM tasks recruit sensory-specific networks. However, maintenance-phase activations common to the two languages were also observed, implying some form of common process. We conclude that linguistic STM involves sensory-dependent neural networks, but suggest that sensory-independent neural networks may also exist.
Ihle, Andreas; Oris, Michel; Fagot, Delphine; Kliegel, Matthias
Findings on the association between speaking different languages and cognitive functioning in old age are so far inconsistent and inconclusive. The present study therefore set out to investigate the relation of the number of languages spoken to cognitive performance, and its interplay with several other markers of cognitive reserve, in a large sample of older adults. Two thousand eight hundred and twelve older adults served as the sample for the present study. Psychometric tests of verbal abilities, basic processing speed, and cognitive flexibility were administered. In addition, individuals were interviewed about the different languages they spoke on a regular basis, their educational attainment, their occupation, and their engagement in different activities throughout adulthood. A higher number of languages regularly spoken was significantly associated with better performance in verbal abilities and processing speed, but was unrelated to cognitive flexibility. Regression analyses showed that the number of languages spoken predicted cognitive performance over and above leisure activities/physical demand of job/gainful activity as the respective additional predictor, but not over and above educational attainment/cognitive level of job as the respective additional predictor. There was no significant moderation of the association between the number of languages spoken and cognitive performance in any model. The present data suggest that speaking different languages on a regular basis may additionally contribute to the build-up of cognitive reserve in old age. Yet this may not be universal, but linked to verbal abilities and basic cognitive processing speed. Moreover, it may depend on other types of cognitive stimulation that individuals also engaged in during their life course.
Nicholas, Johanna Grant; Geers, Ann E
By age 3, typically developing children have achieved extensive vocabulary and syntax skills that facilitate both cognitive and social development. Substantial delays in spoken language acquisition have been documented for children with severe to profound deafness, even those with auditory oral training and early hearing aid use. This study documents the spoken language skills achieved by orally educated 3-yr-olds whose profound hearing loss was identified and hearing aids fitted between 1 and 30 mo of age and who received a cochlear implant between 12 and 38 mo of age. The purpose of the analysis was to examine the effects of age, duration, and type of early auditory experience on spoken language competence at age 3.5 yr. The spoken language skills of 76 children who had used a cochlear implant for at least 7 mo were evaluated via standardized 30-minute language sample analysis, a parent-completed vocabulary checklist, and a teacher language-rating scale. The children were recruited from and enrolled in oral education programs or therapy practices across the United States. Inclusion criteria included presumed deaf since birth, English the primary language of the home, no other known conditions that interfere with speech/language development, enrolled in programs using oral education methods, and no known problems with the cochlear implant lasting more than 30 days. Strong correlations were obtained among all language measures. Therefore, principal components analysis was used to derive a single Language Factor score for each child. A number of possible predictors of language outcome were examined, including age at identification and intervention with a hearing aid, duration of use of a hearing aid, pre-implant pure-tone average (PTA) threshold with a hearing aid, PTA threshold with a cochlear implant, and duration of use of a cochlear implant/age at implantation (the last two variables were practically identical because all children were tested between 40 and 44
Lund, Emily; Douglas, W. Michael; Schuele, C. Melanie
Children with hearing loss who are developing spoken language tend to lag behind children with normal hearing in vocabulary knowledge. Thus, researchers must validate instructional practices that lead to improved vocabulary outcomes for children with hearing loss. The purpose of this study was to investigate how semantic richness of instruction…
Choroomi, S; Curotta, J
To review foreign body aspiration cases encountered over a 10-year period in a tertiary paediatric hospital, and to assess correlation between foreign body type and language spoken at home. Retrospective chart review of all children undergoing direct laryngobronchoscopy for foreign body aspiration over a 10-year period. Age, sex, foreign body type, complications, hospital stay and home language were analysed. At direct laryngobronchoscopy, 132 children had foreign body aspiration (male:female ratio 1.31:1; mean age 32 months (2.67 years)). Mean hospital stay was 2.0 days. Foreign bodies most commonly comprised food matter (53/132; 40.1 per cent), followed by non-food matter (44/132; 33.33 per cent), a negative endoscopy (11/132; 8.33 per cent) and unknown composition (24/132; 18.2 per cent). Most parents spoke English (92/132, 69.7 per cent; vs non-English-speaking 40/132, 30.3 per cent), but non-English-speaking patients had disproportionately more food foreign bodies, and significantly more nut aspirations (p = 0.0065). Results constitute level 2b evidence. Patients from non-English speaking backgrounds had a significantly higher incidence of food (particularly nut) aspiration. Awareness-raising and public education is needed in relevant communities to prevent certain foods, particularly nuts, being given to children too young to chew and swallow them adequately.
Wittgenstein often explored language games that have to do with musical objects of different sizes (phrases, themes, formal sections, or entire works). These games can refer to a technical language or to common parlance and correspond to different targets. One of these coincides with the intention to suggest a way of conceiving musical understanding. His model takes the form of the invitation to "hear (something) as (something)": typically, to hear a musical passage as an introduction, as a conclusion, or in a certain tonality. However, one may ask to what extent, or in what terms (literal or metaphorical), these procedures, and more generally the intervention of language games, are required by our common ways of understanding music. This article shows through examples that the aspectual perception inherent in musical understanding does not require language games as a necessary condition (although in many cases the link between them seems very strong), in contradiction with the thesis of an essentially linguistic character of music. At a basic level, it seems more appropriate to insist on the notion of a game: to understand music means to enter into the orbit of "music games", which show an autonomous functioning. Language games have, however, an important function when we develop this comprehension in the light of the criteria of judgment that substantiate the manner in which music is incorporated in and operates within specific forms of life.
Wang, Jie; Wong, Andus Wing-Kuen; Wang, Suiping; Chen, Hsuan-Chih
It is widely acknowledged in Germanic languages that segments are the primary planning units at the phonological encoding stage of spoken word production. Mixed results, however, have been found in Chinese, and it is still unclear what roles syllables and segments play in planning Chinese spoken word production. In the current study, participants were asked to first prepare and later produce disyllabic Mandarin words upon picture prompts and a response cue while electroencephalogram (EEG) signals were recorded. Each two consecutive pictures implicitly formed a pair of prime and target, whose names shared the same word-initial atonal syllable or the same word-initial segments, or were unrelated in the control conditions. Only syllable repetition induced significant effects on event-related brain potentials (ERPs) after target onset: a widely distributed positivity in the 200- to 400-ms interval and an anterior positivity in the 400- to 600-ms interval. We interpret these to reflect syllable-size representations at the phonological encoding and phonetic encoding stages. Our results provide the first electrophysiological evidence for the distinct role of syllables in producing Mandarin spoken words, supporting a language specificity hypothesis about the primary phonological units in spoken word production.
Teachers' understanding of the communicative language teaching approach: The case of English language teachers in Thohoyandou. ... with CLT theories and practice. Keywords: communicative competence, approach versus method, Grammar translation method, direct method, first additional language, second language ...
Harris, Michael S; Kronenberger, William G; Gao, Sujuan; Hoen, Helena M; Miyamoto, Richard T; Pisoni, David B
Cochlear implants (CIs) help many deaf children achieve near-normal speech and language (S/L) milestones. Nevertheless, high levels of unexplained variability in S/L outcomes are limiting factors in improving the effectiveness of CIs in deaf children. The objective of this study was to longitudinally assess the role of verbal short-term memory (STM) and working memory (WM) capacity as a progress-limiting source of variability in S/L outcomes after CI in children. Longitudinal study of 66 children with CIs for prelingual severe-to-profound hearing loss. Outcome measures included performance on digit span forward (DSF), digit span backward (DSB), and four conventional S/L measures that examined spoken-word recognition (Phonetically Balanced Kindergarten word test), receptive vocabulary (Peabody Picture Vocabulary Test), sentence-recognition skills (Hearing in Noise Test), and receptive and expressive language functioning (Clinical Evaluation of Language Fundamentals Fourth Edition Core Language Score; CELF). Growth curves for DSF and DSB in the CI sample over time were comparable in slope, but consistently lagged in magnitude relative to norms for normal-hearing peers of the same age. For DSF and DSB, 50.5% and 44.0%, respectively, of the CI sample scored more than 1 SD below the normative mean for raw scores across all ages. The first (baseline) DSF score significantly predicted all endpoint scores for the four S/L measures, and DSF slope (growth) over time predicted CELF scores. DSF baseline and slope accounted for an additional 13 to 31% of variance in S/L scores after controlling for conventional predictor variables such as chronological age at time of testing, age at time of implantation, communication mode (auditory-oral communication versus total communication), and maternal education. Only DSB baseline scores predicted endpoint language scores on the Peabody Picture Vocabulary Test and CELF. DSB slopes were not significantly related to any endpoint S/L measures.
Albadr, Musatafa Abbas Abbood; Tiun, Sabrina; Al-Dhief, Fahad Taha; Sammour, Mahmoud A. M.
Spoken Language Identification (LID) is the process of determining and classifying natural language from given content and datasets. Typically, data must be processed to extract useful features to perform LID. According to the literature, feature extraction for LID is a mature process: standard features have already been developed using Mel-Frequency Cepstral Coefficients (MFCC), Shifted Delta Cepstral (SDC), the Gaussian Mixture Model (GMM), and, most recently, the i-vector based framework. However, the learning process based on the extracted features remains to be improved (i.e. optimised) to capture all the knowledge embedded in them. The Extreme Learning Machine (ELM) is an effective learning model used to perform classification and regression analysis and is extremely useful for training a single-hidden-layer neural network. Nevertheless, the learning process of this model is not entirely effective (i.e. optimised), owing to the random selection of weights within the input hidden layer. In this study, the ELM is selected as a learning model for LID based on standard feature extraction. One of the optimisation approaches for ELM, the Self-Adjusting Extreme Learning Machine (SA-ELM), is selected as the benchmark and improved by altering the selection phase of the optimisation process. The selection process is performed by incorporating both the Split-Ratio and K-Tournament methods; the improved SA-ELM is named the Enhanced Self-Adjusting Extreme Learning Machine (ESA-ELM). The results are generated from LID on datasets created from eight different languages. They show the clear superiority of the Enhanced Self-Adjusting Extreme Learning Machine LID (ESA-ELM LID) over the SA-ELM LID, with ESA-ELM LID achieving an accuracy of 96.25%, compared to only 95.00% for SA-ELM LID.
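The baseline learner here, the ELM, draws its input-to-hidden weights at random and keeps them fixed; only the output weights are learned, in closed form, by least squares. A minimal sketch of that idea on toy data (the data and all names below are assumptions for illustration, not the authors' SA-ELM/ESA-ELM code):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary task standing in for LID feature vectors (e.g. MFCC/SDC):
# the label is 1 when the feature sum is positive.
X = rng.normal(size=(200, 4))
y = (X.sum(axis=1) > 0).astype(float)

# ELM: random, fixed input-to-hidden weights and biases.
n_hidden = 64
W = rng.normal(size=(4, n_hidden))
b = rng.normal(size=n_hidden)

# Hidden-layer activations; tanh is a common choice of nonlinearity.
H = np.tanh(X @ W + b)

# Output weights solved in closed form by least squares (no iterative
# backpropagation), which is what makes ELM training fast.
beta, *_ = np.linalg.lstsq(H, y, rcond=None)

# Threshold the regression output to obtain class predictions.
pred = (H @ beta > 0.5).astype(float)
train_acc = (pred == y).mean()
```

The SA-ELM/ESA-ELM variants described above wrap an optimisation loop around this core to choose better random-weight configurations; the closed-form output-weight solve stays the same.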
Werfel, Krystal L.
Purpose: The purpose of this study was to compare change in emergent literacy skills of preschool children with and without hearing loss over a 6-month period. Method: Participants included 19 children with hearing loss and 14 children with normal hearing. Children with hearing loss used amplification and spoken language. Participants completed…
Purpose: The current study sought to investigate the separate effects of dysarthria and cognitive status on global speech timing, speech hesitation, and linguistic complexity characteristics, and how these speech behaviors shape listener impressions, across three connected speech tasks presumed to differ in cognitive-linguistic demand, in four carefully defined speaker groups: 1) MS with cognitive deficits (MSCI), 2) MS with clinically diagnosed dysarthria and intact cognition (MSDYS), 3) MS without dysarthria or cognitive deficits (MS), and 4) healthy talkers (CON). The relationship between neuropsychological test scores and speech-language production and perceptual variables for speakers with cognitive deficits was also explored. Methods: 48 speakers participated, including 36 individuals reporting a neurological diagnosis of MS and 12 healthy talkers. The three MS groups and the control group each contained 12 speakers (8 women and 4 men). Cognitive function was quantified using standard clinical tests of memory, information processing speed, and executive function. A standard z-score of ≤ -1.50 indicated a deficit in a given cognitive domain. Three certified speech-language pathologists determined the clinical diagnosis of dysarthria for speakers with MS. Experimental speech tasks of interest included audio-recordings of an oral reading of the Grandfather passage and two spontaneous speech samples in the form of Familiar and Unfamiliar descriptive discourse. Various measures of spoken language were of interest. Suprasegmental acoustic measures included speech and articulatory rate. Linguistic speech hesitation measures included pause frequency (i.e., silent and filled pauses), mean silent pause duration, grammatical appropriateness of pauses, and interjection frequency. For the two discourse samples, three standard measures of language complexity were obtained: subordination index, inter-sentence cohesion adequacy, and lexical diversity. Ten listeners
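The ≤ -1.50 criterion above is a plain normative z-score cutoff; as a quick sketch (the norm mean and SD below are invented for illustration, not values from the study):

```python
def z_score(score, norm_mean, norm_sd):
    """Standardize a raw test score against a normative mean and SD."""
    return (score - norm_mean) / norm_sd

def flags_deficit(score, norm_mean, norm_sd, cutoff=-1.50):
    """Apply the study's criterion: a deficit is flagged when z <= -1.50."""
    return z_score(score, norm_mean, norm_sd) <= cutoff
```

With hypothetical norms of mean 100 and SD 10, a raw score of 85 yields z = -1.5 and is flagged as a deficit in that domain.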
Leni Amalia Suek
The maintenance of community languages by migrant students is heavily determined by language use and language attitudes. The superiority of a dominant language over a community language contributes to the attitudes of migrant students toward their native languages. When they perceive their native language as unimportant, they reduce the frequency of using that language, even in the home domain. Solutions to the problem of maintaining community languages should address language use and language attitudes, which develop mostly in two important domains: school and family. Hence, the valorization of community languages should be promoted not only in the family domain but also in the school domain. Programs such as community language schools and community language programs can give migrant students opportunities to practice and use their native languages. Since educational resources such as class sessions, teachers, and government support are limited, the family plays a significant role in stimulating positive attitudes toward the community language and in developing the use of the native language.
Petkov, Christopher I; Jarvis, Erich D
Vocal learners such as humans and songbirds can learn to produce elaborate patterns of structurally organized vocalizations, whereas many other vertebrates such as non-human primates and most other bird groups either cannot or do so to a very limited degree. To explain the similarities among humans and vocal-learning birds and the differences with other species, various theories have been proposed. One set of theories are motor theories, which underscore the role of the motor system as an evolutionary substrate for vocal production learning. For instance, the motor theory of speech and song perception proposes enhanced auditory perceptual learning of speech in humans and song in birds, which suggests a considerable level of neurobiological specialization. Another, a motor theory of vocal learning origin, proposes that the brain pathways that control the learning and production of song and speech were derived from adjacent motor brain pathways. Another set of theories are cognitive theories, which address the interface between cognition and the auditory-vocal domains to support language learning in humans. Here we critically review the behavioral and neurobiological evidence for parallels and differences between the so-called vocal learners and vocal non-learners in the context of motor and cognitive theories. In doing so, we note that behaviorally vocal-production learning abilities are more distributed than categorical, as are the auditory-learning abilities of animals. We propose testable hypotheses on the extent of the specializations and cross-species correspondences suggested by motor and cognitive theories. We believe that determining how spoken language evolved is likely to become clearer with concerted efforts in testing comparative data from many non-human animal species.
...and complicates the design of the system as a whole. Current benchmark results are established by the National Institute of Standards and Technology (NIST) Language Recognition Evaluation (LRE). Initially started in 1996, the next evaluation was in 2003...
Sergio Di Carlo
Over time, definitions and taxonomies of language learning strategies have been critically examined. This article defines and classifies cognitive language learning strategies on a more grounded basis. Language learning is a macro-process for which the general hypotheses of information processing are valid. Cognitive strategies are represented by the pillars underlying the encoding, storage, and retrieval of information. In order to understand the processes taking place along these three dimensions, a functional model was elaborated from multiple theoretical contributions and previous models: the Smart Processing Model. This model operates on linguistic inputs as well as any other kind of information. It helps to illustrate the stages, relations, modules, and processes that occur during the flow of information. This theoretical advance is a core element in classifying cognitive strategies. Contributions from cognitive neuroscience have also been considered to establish the proposed classification, which consists of five categories. Each of these categories has a different predominant function: classification, preparation, association, elaboration, and transfer-practice. This better-founded taxonomy opens the door to potential studies that would allow a better understanding of the interdisciplinary complexity of language learning. Pedagogical and methodological implications are also discussed.
Adank, P.M.; Noordzij, M.L.; Hagoort, P.
A repetition-suppression functional magnetic resonance imaging paradigm was used to explore the neuroanatomical substrates of processing two types of acoustic variation, speaker and accent, during spoken sentence comprehension. Recordings were made for two speakers and two accents: Standard Dutch and a
D'Mello, Sidney K; Dowell, Nia; Graesser, Arthur
There is the question of whether learning differs when students speak versus type their responses when interacting with intelligent tutoring systems with natural language dialogues. Theoretical bases exist for three contrasting hypotheses. The speech facilitation hypothesis predicts that spoken input will increase learning, whereas the text facilitation hypothesis predicts typed input will be superior. The modality equivalence hypothesis claims that learning gains will be equivalent. Previous experiments that tested these hypotheses were confounded by automated speech recognition systems with substantial error rates that were detected by learners. We addressed this concern in two experiments via a Wizard of Oz procedure, where a human intercepted the learner's speech and transcribed the utterances before submitting them to the tutor. The overall pattern of the results supported the following conclusions: (1) learning gains associated with spoken and typed input were on par and quantitatively higher than a no-intervention control, (2) participants' evaluations of the session were not influenced by modality, and (3) there were no modality effects associated with differences in prior knowledge and typing proficiency. Although the results generally support the modality equivalence hypothesis, highly motivated learners reported lower cognitive load and demonstrated increased learning when typing compared with speaking. We discuss the implications of our findings for intelligent tutoring systems that can support typed and spoken input.
Oryadi-Zanjani, Mohammad Majid; Vahab, Maryam; Bazrafkan, Mozhdeh; Haghjoo, Asghar
The aim of this study was to examine the role of audiovisual speech recognition as a clinical criterion of cochlear implant or hearing aid efficiency in Persian-language children with severe-to-profound hearing loss. This research was administered as a cross-sectional study. The sample size was 60 Persian 5-7 year old children. The assessment tool was one of the subtests of the Persian version of the Test of Language Development-Primary 3. The study included two experiments: auditory-only and audiovisual presentation conditions. The test was a closed set of 30 words which were orally presented by a speech-language pathologist. The scores of audiovisual word perception were significantly higher than in the auditory-only condition for the children with normal hearing (P < 0.05), whereas the children with hearing loss showed no significant difference between the auditory-only and audiovisual presentation conditions (P > 0.05). Audiovisual spoken word recognition can be applied as a clinical criterion to assess children with severe to profound hearing loss in order to determine whether a cochlear implant or hearing aid has been efficient for them; i.e. if a child with hearing impairment using a CI or HA can obtain higher scores in audiovisual spoken word recognition than in the auditory-only condition, his/her auditory skills have developed appropriately owing to an effective CI or HA as one of the main factors of auditory habilitation. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
The current research examined how Arabic diglossia affects verbal learning memory. Thirty native Arab college students were tested using an auditory verbal memory test that was adapted from the Rey Auditory Verbal Learning Test and developed in three versions: a pure spoken language version (SL), a pure standard language version (SA), and…
Montgomery, James W.; Polunenko, Anzhela; Marinellie, Sally A.
The role of phonological short-term memory (PSTM), attentional resource capacity/allocation, and processing speed on children's spoken narrative comprehension was investigated. Sixty-seven children (6-11 years) completed a digit span task (PSTM), concurrent verbal processing and storage (CPS) task (resource capacity/allocation), auditory-visual…
Olesen, Henning Salling; Weber, Kirsten
The article is a guided tour to Alfred LORENZER's proposal for an "in-depth hermeneutic" cultural analysis methodology, which was launched in an environment with an almost complete split between social sciences and psychology/psychoanalysis. It presents the background in his materialist socialization theory, which combines a social reinterpretation of the core insights of classical psychoanalysis (the unconscious, the drives) with a theory of language acquisition. His methodology is based on a transformation of the "scenic understanding" from a clinical to a text interpretation, which seeks to understand collective unconscious meaning in text, emphasizing the role of the researcher's arguments in uncovering socially unconscious meaning in social interaction. Finally, contemporary epistemological problems are considered.
Boudewyn, Megan A.; Gordon, Peter C.; Long, Debra; Polse, Lara; Swaab, Tamara Y.
The goal of this study was to examine how lexical association and discourse congruence affect the time course of processing incoming words in spoken discourse. In an ERP norming study, we presented prime-target pairs in the absence of a sentence context to obtain a baseline measure of lexical priming. We observed a typical N400 effect when participants heard critical associated and unassociated target words in word pairs. In a subsequent experiment, we presented the same word pairs in spoken discourse contexts. Target words were always consistent with the local sentence context, but were congruent or not with the global discourse (e.g., “Luckily Ben had picked up some salt and pepper/basil”, preceded by a context in which Ben was preparing marinara sauce (congruent) or dealing with an icy walkway (incongruent)). ERP effects of global discourse congruence preceded those of local lexical association, suggesting an early influence of the global discourse representation on lexical processing, even in locally congruent contexts. Furthermore, effects of lexical association occurred earlier in the congruent than incongruent condition. These results differ from those that have been obtained in studies of reading, suggesting that the effects may be unique to spoken word recognition.
Levey, Sandra; Polirstok, Susan
Language Development: Understanding Language Diversity in the Classroom offers comprehensive coverage of the language development process for pre- and in-service teachers while emphasizing the factors that further academic success in the classroom, including literacy skills, phonological awareness, and narrative. With chapters written by respected…
In this paper the use and quality of the evaluative language produced by a bilingual child in a story-telling situation is analysed. The subject, an 11-year-old Finnish boy, Jimmy, is bilingual in Finnish Sign Language (FinSL) and spoken Finnish. He was born deaf but received a cochlear implant at the age of five. The data consist of a spoken and a signed version of “The Frog Story”. The analysis shows that evaluative devices and expressions differ in the spoken and signed stories told by the child. In his Finnish story he mostly uses lexical devices: comments on a character and the character's actions, as well as quoted speech occasionally combined with prosodic features. In his FinSL story he uses both lexical and paralinguistic devices in a balanced way.
Considerable progress has been made in recent years in the development of dialogue systems that support robust and efficient human-machine interaction using spoken language. Spoken dialogue technology allows various interactive applications to be built and used for practical purposes, and research focuses on issues that aim to increase the system's communicative competence by including aspects of error correction, cooperation, multimodality, and adaptation in context. This book gives a comprehensive view of state-of-the-art techniques that are used to build spoken dialogue systems. It provides
De Angelis, Gessica
The present study adopts a multilingual approach to analysing the standardized test results of primary school immigrant children living in the bi-/multilingual context of South Tyrol, Italy. The standardized test results are from the Invalsi test administered across Italy in 2009/2010. In South Tyrol, several languages are spoken on a daily basis…
Li, Xiao-qing; Ren, Gui-qin
An event-related brain potentials (ERP) experiment was carried out to investigate how and when accentuation influences temporally selective attention and subsequent semantic processing during on-line spoken language comprehension, and how the effect of accentuation on attention allocation and semantic processing changed with the degree of…
Gollan, Tamar H.; Weissberger, Gali H.; Runnqvist, Elin; Montoya, Rosa I.; Cera, Cynthia M.
This study investigated correspondence between different measures of bilingual language proficiency contrasting self-report, proficiency interview, and picture naming skills. Fifty-two young (Experiment 1) and 20 aging (Experiment 2) Spanish-English bilinguals provided self-ratings of proficiency level, were interviewed for spoken proficiency, and…
Mismatch negativity (MMN), a primary response to an acoustic change and an index of sensory memory, was used to investigate the processing of the discrimination between familiar and unfamiliar Consonant-Vowel (CV) speech contrasts. The MMN was elicited by rare familiar words presented among repetitive unfamiliar words. Phonetic and phonological contrasts were identical in all conditions. The MMN elicited by the familiar word deviant was larger than that elicited by the unfamiliar word deviant. The presence of a syllable contrast did not significantly alter the word-elicited MMN in amplitude or scalp voltage field distribution. Thus, our results indicate the existence of a word-related MMN enhancement largely independent of the word status of the standard stimulus. This enhancement may reflect the presence of a long-term memory trace for familiar spoken words in tonal languages.
Gautreau, Aurore; Hoen, Michel; Meunier, Fanny
This study aimed to characterize the linguistic interference that occurs during speech-in-speech comprehension by combining offline and online measures, which included an intelligibility task (at a -5 dB Signal-to-Noise Ratio) and 2 lexical decision tasks (at a -5 dB and 0 dB SNR) that were performed with French spoken target words. In these 3 experiments we always compared the masking effects of speech backgrounds (i.e., 4-talker babble) that were produced in the same language as the target language (i.e., French) or in unknown foreign languages (i.e., Irish and Italian) to the masking effects of corresponding non-speech backgrounds (i.e., speech-derived fluctuating noise). The fluctuating noise contained similar spectro-temporal information as babble but lacked linguistic information. At -5 dB SNR, both tasks revealed significantly divergent results between the unknown languages (i.e., Irish and Italian) with Italian and French hindering French target word identification to a similar extent, whereas Irish led to significantly better performances on these tasks. By comparing the performances obtained with speech and fluctuating noise backgrounds, we were able to evaluate the effect of each language. The intelligibility task showed a significant difference between babble and fluctuating noise for French, Irish and Italian, suggesting acoustic and linguistic effects for each language. However, the lexical decision task, which reduces the effect of post-lexical interference, appeared to be more accurate, as it only revealed a linguistic effect for French. Thus, although French and Italian had equivalent masking effects on French word identification, the nature of their interference was different. This finding suggests that the differences observed between the masking effects of Italian and Irish can be explained at an acoustic level but not at a linguistic level.
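The -5 dB and 0 dB SNR conditions above correspond to scaling the masker's power relative to the target's. A generic sketch of that scaling (the signals below stand in for the French targets and babble or fluctuating-noise maskers, which are not reproduced here):

```python
import numpy as np

def mix_at_snr(target, masker, snr_db):
    """Scale `masker` so that 10*log10(P_target / P_masker) equals
    `snr_db`, then return the mixture and the scaled masker."""
    p_target = np.mean(target ** 2)
    p_masker = np.mean(masker ** 2)
    gain = np.sqrt(p_target / (p_masker * 10 ** (snr_db / 10)))
    scaled = gain * masker          # masker at the requested relative power
    return target + scaled, scaled
```

At -5 dB SNR the masker carries roughly 3.16 times the power of the target, which is part of why identification and lexical decision at that level are so sensitive to the masker's linguistic content.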
The goal of English Language Teaching is communicative competence. To reach this goal, students should be supplied with good model texts. These texts should consider the appropriacy of language use. By analyzing the context of situation, with a focus on tenor, the meanings constructed to build the relationships among the interactants in spoken texts can be unfolded. This study aims at investigating the interpersonal relations (tenor) of the interactants in the conversation texts as well as the appropriacy of their realization in the given contexts. The study was conducted under discourse analysis by applying a descriptive qualitative method. There were eight conversation texts which function as examples in five chapters of a textbook. The data were analyzed using lexicogrammatical analysis, described, and interpreted contextually. Then, the realization of the tenor of the texts was further analyzed in terms of appropriacy to suggest improvements. The results of the study show that the tenor indicates relationships between friend-friend, student-student, questioner-respondent, mother-son, and teacher-student; the power is equal and unequal; the social distances show frequent contact, relatively frequent contact, relatively low contact, high and low affective involvement, using informal, relatively informal, relatively formal, and formal language. There are also some indications of inappropriacy of tenor realization in all texts. These should be improved in the use of degree of formality, the realization of societal roles, status, and affective involvement. Keywords: context of situation, tenor, appropriacy.
Caminha, Guilherme Pilla; Melo Junior, José Tavares de; Hopkins, Claire; Pizzichini, Emilio; Pizzichini, Marcia Margaret Menezes
Rhinosinusitis is a highly prevalent disease and a major cause of high medical costs. It has been proven to have an impact on quality of life, as shown by generic health-related quality of life assessments. However, generic instruments may not be able to capture the effects of interventions and treatments. The SNOT-22 is a major disease-specific instrument for assessing quality of life in patients with rhinosinusitis. Nevertheless, there is still no validated SNOT-22 version in our country. Objectives: cross-cultural adaptation of the SNOT-22 into Brazilian Portuguese and assessment of its psychometric properties. The Brazilian version of the SNOT-22 was developed according to international guidelines in nine stages: 1) preparation; 2) translation; 3) reconciliation; 4) back-translation; 5) comparison; 6) evaluation by the author of the SNOT-22; 7) revision by a committee of experts; 8) cognitive debriefing; and 9) final version. The second phase was a prospective study verifying the psychometric properties by analyzing internal consistency and test-retest reliability. The cultural adaptation showed adequate understanding, acceptability, and psychometric properties. We followed the recommended steps for the cultural adaptation of the SNOT-22 into Portuguese, producing a tool for the assessment of patients with sinonasal disorders that is of clinical importance and useful for scientific studies.
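Internal consistency for a multi-item scale such as the SNOT-22 is conventionally summarized with Cronbach's alpha. A sketch of the standard formula follows; the article does not publish its raw response data, so any matrix passed in here would be synthetic:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for a (subjects x items) response matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # per-item variances
    total_var = items.sum(axis=1).var(ddof=1)   # variance of subjects' total scores
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)
```

Perfectly redundant items give alpha = 1, while uncorrelated items drive it toward 0; values around 0.7 or higher are usually read as adequate internal consistency.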
Social Security Administration — This data set provides quarterly volumes for language preferences at the national level of individuals filing claims for Retirement and Survivor benefits for fiscal...
Social Security Administration — This data set provides quarterly volumes for language preferences at the national level of individuals filing claims for Retirement and Survivor benefits from fiscal...
Rämä, Pia; Sirri, Louah; Serres, Josette
Our aim was to investigate whether developing language system, as measured by a priming task for spoken words, is organized by semantic categories. Event-related potentials (ERPs) were recorded during a priming task for spoken words in 18- and 24-month-old monolingual French learning children. Spoken word pairs were either semantically related (e.g., train-bike) or unrelated (e.g., chicken-bike). The results showed that the N400-like priming effect occurred in 24-month-olds over the right parietal-occipital recording sites. In 18-month-olds the effect was observed similarly to 24-month-olds only in those children with higher word production ability. The results suggest that words are categorically organized in the mental lexicon of children at the age of 2 years and even earlier in children with a high vocabulary. Copyright © 2013 Elsevier Inc. All rights reserved.
Kwon, Oh-Woog; Kim, Young-Kil; Lee, Yunkeun
This paper introduces a Dialog-Based Computer Assisted second-Language Learning (DB-CALL) system using task-oriented dialogue processing technology. The system promotes dialogue with a second-language learner for a specific task, such as purchasing tour tickets, ordering food, passing through immigration, etc. The dialog system plays a role of a…
The European ‘Lifelong Learning Programme’ (LLP) project ‘Games Online for Basic Language learning’ (GOBL) aimed to provide youths and adults wishing to improve their basic language skills access to materials for the development of communicative...
Smolík, Filip; Stepankova, Hana; Vyhnálek, Martin; Nikolai, Tomáš; Horáková, Karolína; Matejka, Štepán
Purpose Propositional density (PD) is a measure of content richness in language production that declines in normal aging and more profoundly in dementia. The present study aimed to develop a PD scoring system for Czech and use it to compare PD in language productions of older people with amnestic mild cognitive impairment (aMCI) and control…
Pimperton, Hannah; Kreppner, Jana; Mahon, Merle; Stevenson, Jim; Terlektsi, Emmanouela; Worsfold, Sarah; Yuen, Ho Ming; Kennedy, Colin R
This study aimed to examine whether (a) exposure to universal newborn hearing screening (UNHS) and (b) early confirmation of hearing loss were associated with benefits to expressive and receptive language outcomes in the teenage years for a cohort of spoken language users. It also aimed to determine whether either of these two variables was associated with benefits to relative language gain from middle childhood to adolescence within this cohort. The participants were drawn from a prospective cohort study of a population sample of children with bilateral permanent childhood hearing loss, who varied in their exposure to UNHS and who had previously had their language skills assessed at 6-10 years. Sixty deaf or hard of hearing teenagers who were spoken language users and a comparison group of 38 teenagers with normal hearing completed standardized measures of their receptive and expressive language ability at 13-19 years. Teenagers exposed to UNHS did not show significantly better expressive (adjusted mean difference, 0.40; 95% confidence interval [CI], -0.26 to 1.05; d = 0.32) or receptive (adjusted mean difference, 0.68; 95% CI, -0.56 to 1.93; d = 0.28) language skills than those who were not. Those who had their hearing loss confirmed by 9 months of age did not show significantly better expressive (adjusted mean difference, 0.43; 95% CI, -0.20 to 1.05; d = 0.35) or receptive (adjusted mean difference, 0.95; 95% CI, -0.22 to 2.11; d = 0.42) language skills than those who had it confirmed later. In all cases, effect sizes were small and in favor of those exposed to UNHS or confirmed by 9 months. Subgroup analysis indicated larger beneficial effects of early confirmation for those deaf or hard of hearing teenagers without cochlear implants (N = 48; 80% of the sample), and these benefits were significant in the case of receptive language outcomes (adjusted mean difference, 1.55; 95% CI, 0.38 to 2.71; d = 0.78). Exposure to UNHS did not account for significant
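The d values quoted above are standardized effect sizes (Cohen's d). The plain pooled-SD version can be sketched as follows; note the study's own estimates were covariate-adjusted, which this generic sketch ignores:

```python
import numpy as np

def cohens_d(a, b):
    """Cohen's d for two independent groups, using a pooled SD."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    na, nb = len(a), len(b)
    va, vb = a.var(ddof=1), b.var(ddof=1)
    pooled_sd = np.sqrt(((na - 1) * va + (nb - 1) * vb) / (na + nb - 2))
    return (a.mean() - b.mean()) / pooled_sd
```

By the usual rule of thumb, d around 0.2 is a small effect and d around 0.8 a large one, which is why the subgroup estimate of d = 0.78 stands out against the overall small effects.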
Sarant, Julia; Harris, David; Bennet, Lisa; Bant, Sharyn
Although it has been established that bilateral cochlear implants (CIs) offer additional speech perception and localization benefits to many children with severe to profound hearing loss, whether these improved perceptual abilities facilitate significantly better language development has not yet been clearly established. The aims of this study were to compare language abilities of children having unilateral and bilateral CIs to quantify the rate of any improvement in language attributable to bilateral CIs and to document other predictors of language development in children with CIs. The receptive vocabulary and language development of 91 children was assessed when they were aged either 5 or 8 years old by using the Peabody Picture Vocabulary Test (fourth edition), and either the Preschool Language Scales (fourth edition) or the Clinical Evaluation of Language Fundamentals (fourth edition), respectively. Cognitive ability, parent involvement in children's intervention or education programs, and family reading habits were also evaluated. Language outcomes were examined by using linear regression analyses. The influence of elements of parenting style, child characteristics, and family background as predictors of outcomes were examined. Children using bilateral CIs achieved significantly better vocabulary outcomes and significantly higher scores on the Core and Expressive Language subscales of the Clinical Evaluation of Language Fundamentals (fourth edition) than did comparable children with unilateral CIs. Scores on the Preschool Language Scales (fourth edition) did not differ significantly between children with unilateral and bilateral CIs. Bilateral CI use was found to predict significantly faster rates of vocabulary and language development than unilateral CI use; the magnitude of this effect was moderated by child age at activation of the bilateral CI. In terms of parenting style, high levels of parental involvement, low amounts of screen time, and more time spent
Social Security Administration — This data set provides annual volumes for language preferences at the national level of individuals filing claims for SSI Blind and Disabled benefits from federal...
Social Security Administration — This data set provides annual volumes for language preferences at the national level of individuals filing claims for ESRD Medicare benefits for federal fiscal years...
Social Security Administration — This data set provides annual volumes for language preferences at the national level of individuals filing claims for SSI Aged benefits from federal fiscal year 2011...
Social Security Administration — This data set provides annual volumes for language preferences at the national level of individuals filing claims for ESRD Medicare benefits for federal fiscal years...
Social Security Administration — This data set provides annual volumes for language preferences at the national level of individuals filing claims for ESRD Medicare benefits for federal fiscal year...
Social Security Administration — This data set provides quarterly volumes for language preferences at the national level of individuals filing claims for SSI Aged benefits for fiscal years 2014 -...
Social Security Administration — This data set provides quarterly volumes for language preferences at the national level of individuals filing claims for ESRD Medicare benefits for fiscal years 2014...
Social Security Administration — This data set provides annual volumes for language preferences at the national level of individuals filing claims for ESRD Medicare benefits for federal fiscal year...
Schneider, Bruce A; Avivi-Reich, Meital; Daneman, Meredyth
Comprehending spoken discourse in noisy situations is likely to be more challenging to older adults than to younger adults due to potential declines in the auditory, cognitive, or linguistic processes supporting speech comprehension. These challenges might force older listeners to reorganize the ways in which they perceive and process speech, thereby altering the balance between the contributions of bottom-up versus top-down processes to speech comprehension. The authors review studies that investigated the effect of age on listeners' ability to follow and comprehend lectures (monologues), and two-talker conversations (dialogues), and the extent to which individual differences in lexical knowledge and reading comprehension skill relate to individual differences in speech comprehension. Comprehension was evaluated after each lecture or conversation by asking listeners to answer multiple-choice questions regarding its content. Once individual differences in speech recognition for words presented in babble were compensated for, age differences in speech comprehension were minimized if not eliminated. However, younger listeners benefited more from spatial separation than did older listeners. Vocabulary knowledge predicted the comprehension scores of both younger and older listeners when listening was difficult, but not when it was easy. However, the contribution of reading comprehension to listening comprehension appeared to be independent of listening difficulty in younger adults but not in older adults. The evidence suggests (1) that most of the difficulties experienced by older adults are due to age-related auditory declines, and (2) that these declines, along with listening difficulty, modulate the degree to which selective linguistic and cognitive abilities are engaged to support listening comprehension in difficult listening situations. When older listeners experience speech recognition difficulties, their attentional resources are more likely to be deployed to
Francis, Alexander L; Ho, Diana Wai Lam
There have been only two reports of multilingual cochlear implant users to date, and both of these were postlingually deafened adults. Here we report the case of a 6-year-old early-deafened child who is acquiring Cantonese, English and Mandarin in Hong Kong. He and two age-matched peers with similar educational backgrounds were tested using common, standardized tests of vocabulary and expressive and receptive language skills (Peabody Picture Vocabulary Test (Revised) and Reynell Developmental Language Scales version II). Results show that this child is acquiring Cantonese, English and Mandarin to a degree comparable to two classmates with normal hearing and similar educational and social backgrounds.
Klein, Evelyn R.; Armstrong, Sharon Lee; Shipon-Blum, Elisa
Children with selective mutism (SM) display a failure to speak in select situations despite speaking when comfortable. The purpose of this study was to obtain valid assessments of receptive and expressive language in 33 children (ages 5 to 12) with SM. Because some children with SM will speak to parents but not a professional, another purpose was…
Lexical sound symbolism in language appears to exploit the feature associations embedded in cross-sensory correspondences. For example, words incorporating relatively high acoustic frequencies (i.e., front/close rather than back/open vowels) are deemed more appropriate as names for concepts associated with brightness, lightness in weight,…
Werfel, Krystal L
The purpose of this study was to compare change in emergent literacy skills of preschool children with and without hearing loss over a 6-month period. Participants included 19 children with hearing loss and 14 children with normal hearing. Children with hearing loss used amplification and spoken language. Participants completed measures of oral language, phonological processing, and print knowledge twice at a 6-month interval. A series of repeated-measures analyses of variance were used to compare change across groups. Main effects of time were observed for all variables except phonological recoding. Main effects of group were observed for vocabulary, morphosyntax, phonological memory, and concepts of print. Interaction effects were observed for phonological awareness and concepts of print. Children with hearing loss performed more poorly than children with normal hearing on measures of oral language, phonological memory, and conceptual print knowledge. For phonological awareness and concepts of print, children with hearing loss demonstrated less positive change than children with normal hearing. Although children with hearing loss generally demonstrated positive growth in emergent literacy skills, their initial performance was lower than that of children with normal hearing, and their rates of change were not sufficient to catch up to their peers over time.
Recent studies of eye movements in world-situated language comprehension have demonstrated that rapid processing of morphosyntactic information – e.g., grammatical gender and number marking – can produce anticipatory eye movements to referents in the visual scene. We investigated how the type of morphosyntactic information and the goals of language users in comprehension affected eye movements, focusing on the processing of grammatical number morphology in English-speaking adults. Participants' eye movements were recorded as they listened to simple English declarative ("There are the lions.") and interrogative ("Where are the lions?") sentences. In Experiment 1, no differences were observed in speed to fixate target referents when grammatical number information was informative relative to when it was not. The same result was obtained in a speeded task (Experiment 2) and in a task using mixed sentence types (Experiment 3). We conclude that grammatical number processing in English and eye movements to potential referents are not tightly coordinated. These results suggest limits on the role of predictive eye movements in concurrent linguistic and scene processing. We discuss how these results can inform and constrain predictive approaches to language processing.
This book discusses the following: Computational Linguistics, Artificial Intelligence, Linguistics, Philosophy, and Cognitive Science and the current state of natural language understanding. Three topics form the focus for discussion; these topics include aspects of grammars, aspects of semantics/pragmatics, and knowledge representation.
Chandrasekaran, Bharath; Kraus, Nina; Wong, Patrick C M
A challenge to learning words of a foreign language is encoding nonnative phonemes, a process typically attributed to cortical circuitry. Using multimodal imaging methods [functional magnetic resonance imaging-adaptation (fMRI-A) and auditory brain stem responses (ABR)], we examined the extent to which pretraining pitch encoding in the inferior colliculus (IC), a primary midbrain structure, related to individual variability in learning to successfully use nonnative pitch patterns to distinguish words in American English-speaking adults. fMRI-A indexed the efficiency of pitch representation localized to the IC, whereas ABR quantified midbrain pitch-related activity with millisecond precision. In line with neural "sharpening" models, we found that efficient IC pitch pattern representation (indexed by fMRI) related to superior neural representation of pitch patterns (indexed by ABR), and consequently more successful word learning following sound-to-meaning training. Our results establish a critical role for the IC in speech-sound representation, consistent with the established role for the IC in the representation of communication signals in other animal models.
Colin, C; Zuinen, T; Bayard, C; Leybaert, J
Sign languages (SL), like oral languages (OL), organize elementary, meaningless units into meaningful semantic units. Our aim was to compare, at the behavioral and neurophysiological levels, the processing of the location parameter in French Belgian SL to that of the rhyme in oral French. Ten hearing and 10 profoundly deaf adults performed a rhyme judgment task in OL and a similarity judgment on location in SL. Stimuli were pairs of pictures. As regards OL, deaf subjects' performances, although above chance level, were significantly lower than those of hearing subjects, suggesting that a metaphonological analysis is possible for deaf people but rests on phonological representations that are less precise than in hearing people. As regards SL, deaf subjects' scores indicated that a metaphonological judgment may be performed on location. The contingent negative variation (CNV) evoked by the first picture of a pair was similar in hearing subjects in OL and in deaf subjects in OL and SL. However, an N400 evoked by the second picture of the non-rhyming pairs was evidenced only in hearing subjects in OL. The absence of an N400 in deaf subjects may be interpreted as the failure to associate two words according to their rhyme in OL or to their location in SL. Although deaf participants can perform metaphonological judgments in OL, they differ from hearing participants both behaviorally and in ERPs. Judgment of location in SL is possible for deaf signers but, contrary to rhyme judgment in hearing participants, does not elicit any N400. Copyright © 2013 Elsevier Masson SAS. All rights reserved.
Kwon, Youan; Choi, Sungmook; Lee, Yoonhyoung
This study examines whether orthographic information is used during prelexical processes in spoken word recognition by investigating ERPs during spoken word processing for Korean words. Differential effects due to orthographic syllable neighborhood size and sound-to-spelling consistency on P200 and N320 were evaluated by recording ERPs from 42 participants during a lexical decision task. The results indicate that P200 was smaller for words whose orthographic syllable neighbors are large in number rather than those that are small. In addition, a word with a large orthographic syllable neighborhood elicited a smaller N320 effect than a word with a small orthographic syllable neighborhood only when the word had inconsistent sound-to-spelling mapping. The results provide support for the assumption that orthographic information is used early during the prelexical spoken word recognition process. © 2015 Society for Psychophysiological Research.
Schreibman, Laura; Stahmer, Aubyn C
Presently there is no consensus on the specific behavioral treatment of choice for targeting language in young nonverbal children with autism. This randomized clinical trial compared the effectiveness of a verbally-based intervention, Pivotal Response Training (PRT) to a pictorially-based behavioral intervention, the Picture Exchange Communication System (PECS) on the acquisition of spoken language by young (2-4 years), nonverbal or minimally verbal (≤9 words) children with autism. Thirty-nine children were randomly assigned to either the PRT or PECS condition. Participants received on average 247 h of intervention across 23 weeks. Dependent measures included overall communication, expressive vocabulary, pictorial communication and parent satisfaction. Children in both intervention groups demonstrated increases in spoken language skills, with no significant difference between the two conditions. Seventy-eight percent of all children exited the program with more than 10 functional words. Parents were very satisfied with both programs but indicated PECS was more difficult to implement.
The current research examined how Arabic diglossia affects verbal learning memory. Thirty native Arab college students were tested using an auditory verbal memory test that was adapted from the Rey Auditory Verbal Learning Test and developed in three versions: a pure spoken language version (SL), a pure standard language version (SA), and a phonologically similar version (PS). The results showed that for immediate free recall, performance was better in the SL and PS conditions than in the SA condition. However, for delayed recall and recognition, the results did not reveal any significant consistent effect of diglossia. Accordingly, it was suggested that diglossia has a significant effect on storage and short-term memory functions but not on long-term memory functions. The results are discussed in light of different approaches in the field of bilingual memory.
Martin, James H
.... This approach asserts that the interpretation of conventional metaphoric language should proceed through the direct application of specific knowledge about the metaphors in the language. MIDAS...
Petitto, Laura Ann; Holowka, Siobhan
Examines whether early simultaneous bilingual language exposure causes children to be language delayed or confused. Cites research suggesting normal and parallel linguistic development occurs in each language in young children and young children's dual language developments are similar to monolingual language acquisition. Research on simultaneous…
Reading plays a key role in education and communication in modern society. Learning to read establishes the connections between the visual word form area (VWFA) and language areas responsible for speech processing. Using resting-state functional connectivity (RSFC) and Granger Causality Analysis (GCA) methods, the current developmental study aimed to identify the difference in the relationship between the connections of VWFA-language areas and reading performance in both adults and children. The results showed that: (1) the spontaneous connectivity between VWFA and the spoken language areas, i.e., the left inferior frontal gyrus/supramarginal gyrus (LIFG/LSMG), was stronger in adults compared with children; (2) the spontaneous functional patterns of connectivity between VWFA and the language network were negatively correlated with reading ability in adults but not in children; (3) the causal influence from LIFG to VWFA was negatively correlated with reading ability in adults but not in children; (4) the RSFCs between the left posterior middle frontal gyrus (LpMFG) and VWFA/LIFG were positively correlated with reading ability in both adults and children; and (5) the causal influence from LIFG to LSMG was positively correlated with reading ability in both groups. These findings provide insights into the relationship between VWFA and the language network for reading, and the role of the unique features of Chinese in the neural circuits of reading.
Waltz, D. L.; Maran, L. R.; Dorfman, M. H.; Dinitz, R.; Farwell, D.
During this contract period the authors have: (1) continued investigation of events and actions by means of representation schemes called 'event shape diagrams'; (2) written a parsing program which selects appropriate word and sentence meanings by a parallel process known as activation and inhibition; (3) begun investigation of the point of a story or event by modeling the motivations and emotional behaviors of story characters; (4) started work on combining and translating two machine-readable dictionaries into a lexicon and knowledge base which will form an integral part of our natural language understanding programs; (5) made substantial progress toward a general model for the representation of cognitive relations by comparing English scene and event descriptions with similar descriptions in other languages; (6) constructed a general model for the representation of tense and aspect of verbs; (7) made progress toward the design of an integrated robotics system which accepts English requests, and uses visual and tactile inputs in making decisions and learning new tasks.
Santos-Sacchi, Joseph; Allen, Jont B.; Dorman, Michael; Bergeson-Dana, Tonya R.
These are the proceedings of 2012 AG Bell Research Symposium, presented July 1, 2012, as part of the AG Bell 2012 Convention. The session was moderated by Tamala S. Bradham, Ph.D., CCC-A. The papers presented at the proceedings are the following: (1) The Queens of Audition; (2) Speech Perception and Hearing Loss; (3) The Restoration of Speech…
Language change is a phenomenon that has fascinated scholars for centuries. As a science, linguistic theory has evolved considerably during the 20th century, but the overall puzzle of language change still remains unsolved...
This volume aims to bridge the gap between language arts teaching and linguistic theory. Part one discusses selected aspects of linguistics that are relevant to language arts teaching: the acquisition and development of language during childhood; the English sound system and its relation to spellings and meanings; traditional, structural, and…
Beckage, Nicole M.; Colunga, Eliana
Language is inherently cognitive and distinctly human. Separating the object of language from the human mind that processes and creates language fails to capture the full language system. Linguistics traditionally has focused on the study of language as a static representation, removed from the human mind. Network analysis has traditionally been focused on the properties and structure that emerge from network representations. Both disciplines could gain from looking at language as a cognitive process. In contrast, psycholinguistic research has focused on the process of language without committing to a representation. However, by considering language networks as approximations of the cognitive system we can take the strength of each of these approaches to study human performance and cognition as related to language. This paper reviews research showcasing the contributions of network science to the study of language. Specifically, we focus on the interplay of cognition and language as captured by a network representation. To this end, we review different types of language network representations before considering the influence of global level network features. We continue by considering human performance in relation to network structure and conclude with theoretical network models that offer potential and testable explanations of cognitive and linguistic phenomena.
Sedgwick, Carole; Garner, Mark
Non-native speakers of English who hold nursing qualifications from outside the UK are required to provide evidence of English language competence by achieving a minimum overall score of Band 7 on the International English Language Testing System (IELTS) academic test. The aims were to describe the English language required to deal with the daily demands of nursing in the UK, and to compare these abilities with the stipulated levels on the language test. A tracking study was conducted with 4 nurses, and focus groups with 11 further nurses. The transcripts of the interviews and focus groups were analysed thematically for recurrent themes. These findings were then compared with the requirements of the IELTS spoken test. The study was conducted outside the participants' working shifts in busy London hospitals. The participants in the tracking study were selected opportunistically; all were trained in non-English-speaking countries. Snowball sampling was used for the focus groups, of whom 4 were non-native and 7 native speakers of English. In the tracking study, each of the 4 nurses was interviewed on four occasions, outside the workplace, and as close to the end of a shift as possible. They were asked to recount their spoken interactions during the course of their shift. The participants in the focus groups were asked to describe their typical interactions with patients, family members, doctors, and nursing colleagues. They were prompted to recall specific instances of frequently occurring communication problems. All interactions were audio-recorded, with the participants' permission, and transcribed. Nurses are at the centre of communication for patient care. They have to use appropriate registers to communicate with a range of health professionals, patients and their families. They must elicit information, calm and reassure, instruct, check procedures, ask for and give opinions, agree and disagree. Politeness strategies are needed to avoid threats to face. They participate in medical
Percy-Smith, L; Busch, GW; Sandahl, M
The aim of the study was to identify factors associated with the level of language understanding, the level of receptive and active vocabulary, and to estimate effect-related odds ratios for cochlear implanted children's language level.
Tomblin, J. Bruce; Mueller, Kathyrn L.
This article provides a background for the topic of comorbidity of attention-deficit/hyperactivity disorder and spoken and written language and speech disorders that extends through this issue of "Topics in Language Disorders." Comorbidity is common within developmental disorders and may be explained by many possible reasons. Some of these can be…
Wilder Yesid Escobar
Recognizing that developing the competences needed to use linguistic resources appropriately according to contextual characteristics (pragmatics) is as important as the culturally embedded linguistic knowledge itself (semantics), and that both are equally essential to forming competent speakers of English in foreign language contexts, this research relies on corpus linguistics to analyze both the scope and the limitations of the sociolinguistic knowledge and the communicative skills of English students at the university level. To this end, a linguistic corpus was assembled, compared to an existing corpus of native speakers, and analyzed in terms of the frequency, overuse, underuse, misuse, ambiguity, success, and failure of the linguistic parameters used in speech acts. The findings describe the linguistic configurations employed to modify levels and degrees of description (a salient semantic theme in the EFL learners' corpus), appealing to the sociolinguistic principles governing meaning making and language use, which are constructed under the social conditions of the environments where the language is naturally spoken for sociocultural exchange.
Kearsey, John; Turner, Sheila
Argues that, although some bilingual pupils may be at a disadvantage in understanding scientific language, there may be some circumstances where being bilingual is an advantage in understanding scientific language. Presents evidence of circumstances where being bilingual was an advantage and circumstances where it was a disadvantage in…
Different generations are constituted depending on social changes and are designated sociologically as traditional, baby boomer, X, Y and Z. Many studies have reported on the foreign language learning understanding of generation Y. This study aims to address the gap in, and contribute to, the research on the language learning understanding of…
Social Security Administration — This data set provides quarterly volumes for language preferences at the national level of individuals filing claims for SSI Blind and Disabled benefits for fiscal...
Social Security Administration — This data set provides quarterly volumes for language preferences at the national level of individuals filing claims for SSI Blind and Disabled benefits from fiscal...
Reviews what is known about Esperanto as a home language and first language. Recorded cases of Esperanto-speaking families have been known since 1919, and in nearly all of the approximately 350 families documented, the language is spoken to the children by the father. The data suggest that this "artificial bilingualism" can be as successful…
With questions and answer sections throughout, this book helps you to improve your written and spoken English through understanding the structure of the English language. This is a thorough and useful book with all parts of speech and grammar explained. Used by ELT self-study students.
In this article we argue that second language acquisition (SLA) research and theory have a significant role to play in teacher education, especially at the master's level. The danger of overly practical approaches is that they cannot challenge current practice in ways that are both critical and rigorous. However, to engage ...
Van Lancker Sidtis, Diana
Although interest in the language sciences was previously focused on newly created sentences, more recently much attention has turned to the importance of formulaic expressions in normal and disordered communication. Also referred to as formulaic expressions and made up of speech formulas, idioms, expletives, serial and memorized speech, slang, sayings, clichés, and conventional expressions, non-propositional language forms a large proportion of every speaker's competence, and may be differentially disturbed in neurological disorders. This review aims to examine non-propositional speech with respect to linguistic descriptions, psycholinguistic experiments, sociolinguistic studies, child language development, clinical language disorders, and neurological studies. Evidence from numerous sources reveals differentiated and specialized roles for novel and formulaic verbal functions, and suggests that generation of novel sentences and management of prefabricated expressions represent two legitimate and separable processes in language behaviour. A preliminary model of language behaviour that encompasses unitary and compositional properties and their integration in everyday language use is proposed. Integration and synchronizing of two disparate processes in language behaviour, formulaic and novel, characterizes normal communicative function and contributes to creativity in language. This dichotomy is supported by studies arising from other disciplines in neurology and psychology. Further studies are necessary to determine in what ways the various categories of formulaic expressions are related, and how these categories are processed by the brain. Better understanding of how non-propositional categories of speech are stored and processed in the brain can lead to better informed treatment strategies in language disorders.
Ma, Weiyi; Zhou, Peng; Singh, Leher; Gao, Liqun
The majority of the world's languages rely on both segmental (vowels, consonants) and suprasegmental (lexical tones) information to contrast the meanings of individual words. However, research on early language development has mostly focused on the acquisition of vowel-consonant languages. Developmental research comparing sensitivity to segmental and suprasegmental features in young tone learners is extremely rare. This study examined 2- and 3-year-old monolingual tone learners' sensitivity to vowels and tones. Experiment 1a tested the influence of vowel and tone variation on novel word learning. Vowel and tone variation hindered word recognition efficiency in both age groups. However, tone variation hindered word recognition accuracy only in 2-year-olds, while 3-year-olds were insensitive to tone variation. Experiment 1b demonstrated that 3-year-olds could use tones to learn new words when additional support was provided and, additionally, that Tone 3 words were exceptionally difficult to learn. Experiment 2 confirmed a similar pattern of results when children were presented with familiar words. This study is the first to show that despite the importance of tones in tone languages, vowels maintain primacy over tones in young children's word recognition, and that tone sensitivity in word learning and recognition changes between 2 and 3 years of age. The findings suggest that early lexical processes are more tightly constrained by variation in vowels than by tones. Copyright © 2016 Elsevier B.V. All rights reserved.
Chen, Pei-Hua; Liu, Ting-Wei
Telepractice provides an alternative form of auditory-verbal therapy (eAVT) intervention through videoconferencing; this can be of immense benefit for children with hearing loss, especially those living in rural or remote areas. The effectiveness of eAVT for the language development of Mandarin-speaking preschoolers with hearing loss was…
Cupples, Linda; Ching, Teresa Yc; Button, Laura; Seeto, Mark; Zhang, Vicky; Whitfield, Jessica; Gunnourie, Miriam; Martin, Louise; Marnane, Vivienne
This study investigated the factors influencing 5-year language, speech and everyday functioning of children with congenital hearing loss. Standardised tests including the PLS-4, PPVT-4 and DEAP were directly administered to children. Parent reports on language (CDI) and everyday functioning (PEACH) were collected. Regression analyses were conducted to examine the influence of a range of demographic variables on outcomes. Participants were 339 children enrolled in the Longitudinal Outcomes of Children with Hearing Impairment (LOCHI) study. Children's average receptive and expressive language scores were approximately 1 SD below the mean of typically developing children, and scores on speech production and everyday functioning were more than 1 SD below. Regression models accounted for between 23% and 70% of the variance in scores across the different tests. Earlier CI switch-on and higher non-verbal ability were associated with better outcomes in most domains. Earlier HA fitting and use of oral communication were associated with better outcomes on directly administered language assessments. Severity of hearing loss and maternal education influenced the outcomes of children with HAs. The presence of additional disabilities affected the outcomes of children with CIs. The findings provide strong evidence for the benefits of early HA fitting and early CI for improving children's outcomes.
Manrique Cordeje, M.E.
How does (mis)understanding work in conversation? Problems of understanding occur all the time in our everyday social life. How does miscommunication happen, and how do we deal with it? This thesis reports on how sign language users manage to understand each other, based on a large Conversational
Kowal, Sabine; O'Connell, Daniel C
The following article presents basic concepts and methods of Ragnar Rommetveit's (born 1924) hermeneutic-dialogical approach to everyday spoken dialogue, with a focus on both shared consciousness and linguistically mediated meaning. He originally developed this approach in his engagement with mainstream linguistic and psycholinguistic research of the 1960s and 1970s. He criticized this research tradition for its individualistic orientation and its adherence to an experimental methodology which did not allow engagement with interactively established meaning and understanding in everyday spoken dialogue. As a social psychologist influenced by phenomenological philosophy, Rommetveit opted for an alternative conceptualization of such dialogue as a contextualized, partially private world, temporarily co-established by interlocutors on the basis of shared consciousness. He argued that everyday spoken dialogue should be investigated from within, i.e., from the perspectives of the interlocutors and from a psychology of the second person. Hence, he developed his approach with an emphasis on intersubjectivity, perspectivity and perspectival relativity, the meaning potential of utterances, and the epistemic responsibility of interlocutors. In his methods, he limited himself for the most part to casuistic analyses, i.e., logical analyses of fictitious examples, to argue for the plausibility of his approach. After many years of experimental research on language, he pursued his phenomenologically oriented research on dialogue in English-language publications from the late 1980s up to 2003. During that period, he engaged with psycholinguistic research on spoken dialogue carried out by Anglo-American colleagues only occasionally. Although his work remained unfinished and open to development, it provides both a challenging alternative and a supplement to current Anglo-American research on spoken dialogue, and some overlap therewith.
Šimáčková, Š.; Podlipský, V.J.; Chládková, K.
As a western Slavic language of the Indo-European family, Czech is closest to Slovak and Polish. It is spoken as a native language by nearly 10 million people in the Czech Republic (Czech Statistical Office n.d.). About two million people living abroad, mostly in the USA, Canada, Austria, Germany,
Learning idioms, which is considered a very essential part of learning and using language (Sridhar and Karunakaran, 2013), has recently attracted great attention from English learning researchers, particularly the assessment of how well Asian language learners acquire and use idioms in communication (Tran, 2013). Understanding and using them fluently could be viewed as a sign of language proficiency, as they could be an effective way to give students better conditions to enhance their communication skills in the daily context (Beloussova, 2015). Investigating how idiomatic expressions are dealt with and processed in a second or foreign language is an issue worth examining further, since it may give language teachers a better idea of some of the strategies language learners use in order to interpret figurative language. Despite their importance, the learning and use of English idioms by Arab EFL learners have not been investigated extensively, and no research has been conducted on Jordanian students' idiomatic competence. Thus, the researcher decided to work on these un-tackled issues in the Jordanian context. Most idiom-based investigations concern the difficulties Jordanian learners of English face when translating idioms into Arabic (Hussein, Khanji, and Makhzoumi, 2000; Bataineh and Bataineh, 2002; Alrishan and Smadi, 2015). The analysis of the test showed students' very poor idiomatic competence, particularly a very limited awareness of the most frequently used idioms despite their overwhelming desire to learn them. Data analysis of the questionnaire revealed the strategies students use and the problems they face in understanding and learning idioms.
Kusters, Annelies; Spotti, Massimiliano; Swanwick, Ruth; Tapio, Elina
This paper presents a critical examination of key concepts in the study of (signed and spoken) language and multimodality. It shows how shifts in conceptual understandings of language use, moving from bilingualism to multilingualism and (trans)languaging, have resulted in the revitalisation of the concept of language repertoires. We discuss key…
Schaub, Gayle; Cadena, Cara; Bravender, Patricia; Kierkus, Christopher
To effectively access and use the resources of the academic library and to become information-literate, students must understand the language of information literacy. This study analyzes undergraduate students' understanding of fourteen commonly used information-literacy terms. It was found that some of the terms least understood by students are…
Nataša Pirih Svetina
Intercomprehension is a communication practice in which two persons each speak their mother tongue and are able to understand each other without having been taught the language of their addressee. It is a usual practice between languages that belong to the same linguistic family, for example Slavic, Romance or Germanic languages. In the article, the authors present the notion of intercomprehension as an alternative to communication in English as a lingua franca. That kind of communication was known among Scandinavians, whereas the first teaching method was developed for Romance languages (EuRomCom) at the beginning of the 21st century. Today, more methods exist, including ones for German and Slavic languages. In the article, the authors enumerate some of them and also give a short outline of existing practices.
[First paragraph] Christopher Alexander's book, The Timeless Way of Building, is probably the most beautiful book on the notion of quality in observation and design that I have read since Robert Pirsig's (1974) Zen and the Art of Motorcycle Maintenance. It was published in 1979, when Alexander was a professor of architecture at the University of California, Berkeley, where I was studying at that time. Although I was aware of some of Alexander's famous articles, such as "A city is not a tree" (Alexander, 1965), the book (Alexander, 1979) never quite made it to the top of my reading list. This remained so until recently, when I met a software developer who enthusiastically talked to me about a book he was currently reading, about the importance of understanding design patterns. He was talking about the very book I had failed to read during my Berkeley years and which, as I now discovered, has since become a cult book among computer programmers and information scientists, as well as in other fields of research. I decided it was time to read the book.
The word ''radioactivity'' has something scary about it; it makes us think of intangible, creeping dangers, the mysterious ticking of Geiger counters, reactor disasters, dirty bombs, nuclear contamination and destruction. True: whole landscapes were made uninhabitable by accidents involving radioactive material, such as Windscale, Sellafield and Chernobyl, and others that were kept largely secret from the public. While to some they brought premature death, for the great majority of the world population their effects have so far been insignificant. By contrast, how little known is the fact that natural radioactivity has been around since human beginnings and that the cells of the human body have always been equipped to repair damage from radiation or other causes, provided such damage does not occur too frequently. Elmar Traebert presents the physics underlying radioactivity without resorting to formulas and explains in an easily understandable manner the different types of radiation, their measurement and their sources (in medicine, power plants, and weapons technology), and how they should be handled. He describes nuclear power plants and the safety problems they involve, sunburn, radiation therapy, uranium ammunition and uranium mining. Whoever knows about these things can more easily cope with his own fears and maybe allay some of them. He can also see through statements made by different interest groups with regard to radioactive material and duly form his own opinion.
Massaro, Dominic W
I review 2 seminal research reports published in this journal during its second decade more than a century ago. Given psychology's subdisciplines, they would not normally be reviewed together because one involves reading and the other speech perception. The small amount of interaction between these domains might have limited research and theoretical progress. In fact, the 2 early research reports revealed common processes involved in these 2 forms of language processing. Their illustration of the role of Wundt's apperceptive process in reading and speech perception anticipated descriptions of contemporary theories of pattern recognition, such as the fuzzy logical model of perception. Based on the commonalities between reading and listening, one can question why they have been viewed so differently. It is commonly believed that learning to read requires formal instruction and schooling, whereas spoken language is acquired from birth onward through natural interactions with people who talk. Most researchers and educators believe that spoken language is acquired naturally from birth onward and even prenatally. Learning to read, on the other hand, is not possible until the child has acquired spoken language, reaches school age, and receives formal instruction. If an appropriate form of written text is made available early in a child's life, however, the current hypothesis is that reading will also be learned inductively and emerge naturally, with no significant negative consequences. If this proposal is true, it should soon be possible to create an interactive system, Technology Assisted Reading Acquisition, to allow children to acquire literacy naturally.
LANGUAGE POLICIES PURSUED IN THE AXIS OF OTHERING AND IN THE PROCESS OF CONVERTING THE SPOKEN LANGUAGE OF TURKS LIVING IN RUSSIA INTO THEIR WRITTEN LANGUAGE / RUSYA'DA YAŞAYAN TÜRKLERİN KONUŞMA DİLLERİNİN YAZI DİLİNE DÖNÜŞTÜRÜLME SÜRECİ VE ÖTEKİLEŞTİRME EKSENİNDE İZLENEN DİL POLİTİKALARI
Süleyman Kaan YALÇIN (M.A.H.)
Language is an object realized in two ways: spoken language and written language. Every language can have the characteristics of a spoken language; however, not every language can have the characteristics of a written language, since there are some requirements for a language to be deemed a written language. These requirements are selection, coding, standardization and becoming widespread. A language must meet these requirements, in either a natural or an artificial way, to be deemed a written (standard) language. Turkish, which developed as a single written language until the 13th century, was divided into West Turkish and North-East Turkish by meeting the requirements of a written language in a natural way. Following this separation, and through a natural process, it showed some differences in itself; however, the policy of converting the spoken language of each Turkish clan into its own written language, a policy pursued by Russia in a planned way, turned Turkish, which came to the 20th century as a few written languages, into 20 different written languages. The implementation of discriminatory language policies suggested to the Russian government by missionaries such as Slinky and Ostramov, the imposing by force of a Cyrillic alphabet full of different and unnecessary signs on each Turkish clan, and the othering activities of the Soviet boarding schools that were opened had considerable effects on this process. This study aims at explaining that the conversion of the spoken languages of Turkish societies in Russia into their written languages did not result from a natural process; at tracing the historical development of the Turkish language, which was shaped into 20 separate written languages only because of the pressure exerted by political will; and at showing how Russia subjected the language concept, which is the memory of a nation, to an artificial process.
Pandolfe, Jessica M.; Wittke, Kacie; Spaulding, Tammie J.
Purpose: This study examined if adolescents with specific language impairment (SLI) understand driving vocabulary as well as their typically developing (TD) peers. Method: A total of 16 adolescents with SLI and 16 TD comparison adolescents completed a receptive vocabulary task focused on driving terminology derived from statewide driver's manuals.…
Zou, Lijuan; Desroches, Amy S.; Liu, Youyi; Xia, Zhichao; Shu, Hua
Orthographic influences in spoken word recognition have been previously examined in alphabetic languages. However, it is unknown whether orthographic information affects spoken word recognition in Chinese, which has a clean dissociation between orthography (O) and phonology (P). The present study investigated orthographic effects using event…
Conner, Peggy S.
A high percentage of individuals with dyslexia struggle to learn unfamiliar spoken words, creating a significant obstacle to foreign language learning after early childhood. The origin of spoken-word learning difficulties in this population, generally thought to be related to the underlying literacy deficit, is not well defined (e.g., Di Betta…
Loukina, Anastassia; Buzick, Heather
This study is an evaluation of the performance of automated speech scoring for speakers with documented or suspected speech impairments. Given that the use of automated scoring of open-ended spoken responses is relatively nascent and there is little research to date that includes test takers with disabilities, this small exploratory study focuses…
Greenspan, Stanley I.
It is very important to determine if a bilingual child's language delay is simply in English or also in the child's native language. Understandably, many children have higher levels of language development in the language spoken at home. To discover if this is the case, observe the child talking with his parents. Sometimes, even without…
Rogalsky, Corianne; Raphel, Kristin; Tomkovicz, Vivian; O'Grady, Lucinda; Damasio, Hanna; Bellugi, Ursula; Hickok, Gregory
The neural basis of action understanding is a hotly debated issue. The mirror neuron account holds that motor simulation in fronto-parietal circuits is critical to action understanding, including speech comprehension, while others emphasize the ventral stream in the temporal lobe. Evidence from speech strongly supports the ventral stream account, but evidence from manual gesture comprehension (e.g., in limb apraxia) has led to contradictory findings. Here we present a lesion analysis of sign language comprehension. Sign language is an excellent model for studying mirror system function in that it bridges the gap between the visual-manual system, in which mirror neurons are best characterized, and language systems, which have represented a theoretical target of mirror neuron research. Twenty-one lifelong deaf signers with focal cortical lesions performed two tasks: one involving the comprehension of individual signs and the other involving comprehension of signed sentences (commands). Participants' lesions, as indicated on MRI or CT scans, were mapped onto a template brain to explore the relationship between lesion location and sign comprehension measures. Single sign comprehension was not significantly affected by left hemisphere damage. Sentence sign comprehension impairments were associated with left temporal-parietal damage. We found that damage to mirror-system-related regions in the left frontal lobe was not associated with deficits on either of these comprehension tasks. We conclude that the mirror system is not critically involved in action understanding.
Currently, the concept of spoken grammar is being discussed among Chinese teachers. However, teachers in China still have only a vague idea of spoken grammar. This dissertation therefore examines what spoken grammar is and argues that native speakers' model of spoken grammar needs to be highlighted in classroom teaching.
Holmer, Emil; Heimann, Mikael; Rudner, Mary
Imitation and language processing are closely connected. According to the Ease of Language Understanding (ELU) model (Rönnberg et al., 2013) pre-existing mental representation of lexical items facilitates language understanding. Thus, imitation of manual gestures is likely to be enhanced by experience of sign language. We tested this by eliciting imitation of manual gestures from deaf and hard-of-hearing (DHH) signing and hearing non-signing children at a similar level of language and cognitive development. We predicted that the DHH signing children would be better at imitating gestures lexicalized in their own sign language (Swedish Sign Language, SSL) than unfamiliar British Sign Language (BSL) signs, and that both groups would be better at imitating lexical signs (SSL and BSL) than non-signs. We also predicted that the hearing non-signing children would perform worse than DHH signing children with all types of gestures the first time (T1) we elicited imitation, but that the performance gap between groups would be reduced when imitation was elicited a second time (T2). Finally, we predicted that imitation performance on both occasions would be associated with linguistic skills, especially in the manual modality. A split-plot repeated measures ANOVA demonstrated that DHH signers imitated manual gestures with greater precision than non-signing children when imitation was elicited the second but not the first time. Manual gestures were easier to imitate for both groups when they were lexicalized than when they were not; but there was no difference in performance between familiar and unfamiliar gestures. For both groups, language skills at T1 predicted imitation at T2. Specifically, for DHH children, word reading skills, comprehension and phonological awareness of sign language predicted imitation at T2. For the hearing participants, language comprehension predicted imitation at T2, even after the effects of working memory capacity and motor skills were taken into account.
Shook, Anthony; Goldrick, Matthew; Engstler, Caroline; Marian, Viorica
When bilinguals process written language, they show delays in accessing lexical items relative to monolinguals. The present study investigated whether this effect extended to spoken language comprehension, examining the processing of sentences with either low or high semantic constraint in both first and second languages. English-German…
Qu, Qingqing; Damian, Markus F
Extensive evidence from alphabetic languages demonstrates a role of orthography in the processing of spoken words. Because alphabetic systems explicitly code speech sounds, such effects are perhaps not surprising. However, it is less clear whether orthographic codes are involuntarily accessed from spoken words in languages with non-alphabetic systems, in which the sound-spelling correspondence is largely arbitrary. We investigated the role of orthography via a semantic relatedness judgment task: native Mandarin speakers judged whether or not spoken word pairs were related in meaning. Word pairs were either semantically related, orthographically related, or unrelated. Results showed that relatedness judgments were made faster for word pairs that were semantically related than for unrelated word pairs. Critically, orthographic overlap on semantically unrelated word pairs induced a significant increase in response latencies. These findings indicate that orthographic information is involuntarily accessed in spoken-word recognition, even in a non-alphabetic language such as Chinese.
de Boer, Bart; Gontier, N; VanBendegem, JP; Aerts, D
This paper describes the uses of computer models in studying the evolution of language. Language is a complex dynamic system that can be studied at the level of the individual and at the level of the population. Much of the dynamics of language evolution and language change occur because of the
Wiseheart, Rebecca; Altmann, Lori J. P.
Background: Individuals with dyslexia demonstrate syntactic difficulties on tasks of language comprehension, yet little is known about spoken language production in this population. Aims: To investigate whether spoken sentence production in college students with dyslexia is less proficient than in typical readers, and to determine whether group…
Computer Science 81 (2016) 128-135. 5th Workshop on Spoken Language Technology for Under-resourced Languages, SLTU 2016, 9-12 May 2016, Yogyakarta, Indonesia. Code-switched English Pronunciation Modeling for Swahili Spoken Term Detection. Neil...
The study of discourse is the study of language in actual use. In this article, the writer investigates the phonological features, either segmental or supra-segmental, in the spoken discourse of Indonesian university students. The data were taken from recordings of 15 conversations by 30 students of Bina Nusantara University who are taking the English Entrant subject (TOEFL-iBT). The writer is of the opinion that the students are still influenced by their first language in their spoken discourse. This results in English with an Indonesian accent. Even though it does not cause misunderstanding at the moment, this may become problematic if they have to communicate in the real world.
Jones, -A C; Toscano, E; Botting, N; Marshall, C-R; Atkinson, J R; Denmark, T; Herman, -R; Morgan, G
Previous research has highlighted that deaf children acquiring spoken English have difficulties in narrative development relative to their hearing peers, both in terms of macro-structure and of micro-structural devices. The majority of previous research focused on narrative tasks designed for hearing children that depend on good receptive language skills. The current study compared narratives of 6- to 11-year-old deaf children who use spoken English (N=59) with those of hearing peers matched for age and non-verbal intelligence. To examine the role of general language abilities, single-word vocabulary was also assessed. Narratives were elicited by the retelling of a story presented non-verbally in video format. Results showed that deaf and hearing children had equivalent macro-structure skills, but the deaf group showed poorer performance on micro-structural components. Furthermore, the deaf group gave less detailed responses to inferencing probe questions, indicating poorer understanding of the story's underlying message. For deaf children, micro-level devices correlated most strongly with the vocabulary measure. These findings suggest that deaf children, despite spoken language delays, are able to convey the main elements of content and structure in narrative but have greater difficulty in using grammatical devices more dependent on finer linguistic and pragmatic skills. Crown Copyright © 2016. Published by Elsevier Ltd. All rights reserved.
Taumoepeau, Mele; Ruffman, Ted
This study assessed the relation between mothers' mental state language and child desire language and emotion understanding in 15- to 24-month-olds. At both time points, mothers described pictures to their infants, and mother talk was coded for mental and non-mental state language. Children were administered 2 emotion understanding tasks and their mental…
Scott, Jessica; Hinton, Christina
The rise of globalisation makes language competencies more valuable, both at individual and societal levels. This book examines the links between globalisation and the way we teach and learn languages. It begins by asking why some individuals are more successful than others at learning non-native languages, and why some education systems, or countries, are more successful than others at teaching languages. The book comprises chapters by different authors on the subject of language learning. There are chapters on the role of motivation; the way that languages, cultures and identities are interc
Rossing, Niels Nygaard; Skrubbeltrang, Lotte Stausgaard
The language of football: A cultural analysis of selected World Cup nations. This essay describes how actions on the football field relate to the nations' different cultural understanding of football and how these actions become spoken dialects within a language of football. Saussure reasoned language to have two components: a language system and language users (Danesi, 2003). Consequently, football can be characterized as a language containing a system with specific rules of the game and users with actual choices and actions within the game. All football players can be considered language users, and culture can be analysed at levels (Schein, 2004) in which each player and his actions can be considered an artefact, a concrete symbol in motion embedded in espoused values and basic assumptions. Therefore, the actions of each dialect are strongly connected to the underlying understanding of football. By document and video...
Simmons, Noreen; Johnston, Judith
Background: Speech-language pathologists often advise families about interaction patterns that will facilitate language learning. This advice is typically based on research with North American families of European heritage and may not be culturally suited for non-Western families. Aims: The goal of the project was to identify differences in the…
Pfau, R.; Steinbach, M.; Woll, B.
Sign language linguists show here that all the questions relevant to the linguistic investigation of spoken languages can be asked about sign languages. Conversely, questions that sign language linguists consider - even if spoken language researchers have not asked them yet - should also be asked of
The understanding of language competition helps us to predict the extinction and survival of languages spoken by minorities. A simple agent-based model of a sexual population, based on the Penna model, is built in order to find out under which circumstances one language dominates the others. This model considers that only young people learn foreign languages. The simulations show a first-order phase transition where the ratio between the numbers of speakers of different languages is the order parameter...
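The dynamics described in this abstract can be sketched in a few lines of code. The following is a minimal illustrative toy, not the study's Penna-based model itself: the ageing bit-strings and sexual reproduction are omitted, and the function name, parameters and values are all assumptions. It keeps only the two features the abstract states: competition between languages and learning restricted to young agents.

```python
import random

def simulate(n_agents=1000, steps=200, status_a=0.6, seed=1):
    """Share of language-A speakers after `steps` generations.

    Each agent speaks 'A' or 'B'. Per step, only a random "young"
    subset may switch language, with probability proportional to the
    other language's current share of speakers weighted by its status.
    """
    random.seed(seed)
    agents = ['A' if random.random() < 0.5 else 'B' for _ in range(n_agents)]
    for _ in range(steps):
        share_a = agents.count('A') / n_agents
        for i in range(n_agents):
            if random.random() >= 0.2:      # ~80% are "old": no learning
                continue
            if agents[i] == 'B' and random.random() < status_a * share_a:
                agents[i] = 'A'             # young B speaker adopts A
            elif agents[i] == 'A' and random.random() < (1 - status_a) * (1 - share_a):
                agents[i] = 'B'             # young A speaker adopts B
    return agents.count('A') / n_agents
```

With a clear status advantage (e.g. `status_a` well above 0.5) runs typically end with that language dominating; near 0.5 the outcome hinges on early fluctuations, the bistability that underlies the first-order transition the abstract reports.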
Kiran, Swathi; Iakupova, Regina
The goal of this study was to address the relationship between language proficiency, language impairment and rehabilitation in bilingual Russian-English individuals with aphasia. As a first step, we examined two Russian-English patients' pre-stroke language proficiency using a detailed and comprehensive language use and history questionnaire and…
Mcquaid, Nancy; Bigelow, Ann E.; McLaughlin, Jessica; MacLean, Kim
Mothers' mental state language in conversation with their preschool children, and children's preschool attachment security were examined for their effects on children's mental state language and expressions of emotional understanding in their conversation. Children discussed an emotionally salient event with their mothers and then relayed the…
Simmons, Noreen; Johnston, Judith
Speech-language pathologists often advise families about interaction patterns that will facilitate language learning. This advice is typically based on research with North American families of European heritage and may not be culturally suited for non-Western families. The goal of the project was to identify differences in the beliefs and practices of Indian and Euro-Canadian mothers that would affect patterns of talk to children. A total of 47 Indian mothers and 51 Euro-Canadian mothers of preschool age children completed a written survey concerning child-rearing practices and beliefs, especially those about talk to children. Discriminant analyses indicated clear cross-cultural differences and produced functions that could predict group membership with a 96% accuracy rate. Items contributing most to these functions concerned the importance of family, perceptions of language learning, children's use of language in family and society, and interactions surrounding text. Speech-language pathologists who wish to adapt their services for families of Indian heritage should remember the centrality of the family, the likelihood that there will be less emphasis on early independence and achievement, and the preference for direct instruction.
Ezen-Can, Aysu; Boyer, Kristy Elizabeth
Within the landscape of educational data, textual natural language is an increasingly vast source of learning-centered interactions. In natural language dialogue, student contributions hold important information about knowledge and goals. Automatically modeling the dialogue act of these student utterances is crucial for scaling natural language…
Murphy, Kimberly A.; Justice, Laura M.; O'Connell, Ann A.; Pentimonti, Jill M.; Kaderavek, Joan N.
Purpose: The purpose of this study was to retrospectively examine the preschool language and early literacy skills of kindergarten good and poor readers, and to determine the extent to which these skills predict reading status. Method: Participants were 136 children with language impairment enrolled in early childhood special education classrooms.…
Yi Fei Wang; Stephen Petrina
The goal of this article is to explore how learning analytics can be used to predict and advise the design of an intelligent language tutor, the chatbot Lucy. With its focus on using student-produced data to understand the design of Lucy to assist English language learning, this research can be a valuable component for language-learning designers seeking to improve second language acquisition. In this article, we present students' learning journeys and data trails, the chat log architecture and result...
Hirschberg, Julia; Manning, Christopher D
Natural language processing employs computational techniques for the purpose of learning, understanding, and producing human language content. Early computational approaches to language research focused on automating the analysis of the linguistic structure of language and developing basic technologies such as machine translation, speech recognition, and speech synthesis. Today's researchers refine and make use of such tools in real-world applications, creating spoken dialogue systems and speech-to-speech translation engines, mining social media for information about health or finance, and identifying sentiment and emotion toward products and services. We describe successes and challenges in this rapidly advancing area. Copyright © 2015, American Association for the Advancement of Science.
Distant supervision is a recent trend in information extraction. Distantly-supervised extractors are trained using a corpus of unlabeled text... consists of fill-in-the-blank natural language questions such as "Incan emperor ___" or "Cunningham directed Auchtre's second music video ___." These questions... with an unknown knowledge base, simultaneously learning how to semantically parse language and populate the knowledge base. The weakly...
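The core idea of distant supervision can be illustrated with a minimal sketch: sentences from an unlabeled corpus that mention an entity pair already present in a knowledge base are heuristically labelled with that pair's relation and used as noisy training examples. The tiny knowledge base and function below are hypothetical, purely for illustration:

```python
# Minimal distant-supervision labeller (illustrative only).
# A sentence mentioning both entities of a known KB fact is assumed
# to express that fact's relation -- a noisy but free source of labels.

KB = {
    ("Paris", "France"): "capital_of",
    ("Lima", "Peru"): "capital_of",
}

def distant_label(sentences):
    """Return (sentence, relation) training pairs from unlabeled text."""
    examples = []
    for sent in sentences:
        for (subj, obj), relation in KB.items():
            if subj in sent and obj in sent:
                examples.append((sent, relation))
    return examples
```

Note the built-in noise: a sentence such as "Lima lies on the coast of Peru." would be labelled `capital_of` even though it does not express that relation, which is why such extractors are described as weakly supervised.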
Seah, Lay Hoon; Clarke, David John; Hart, Christina Eugene
This case study of a science lesson on the topic of thermal expansion examines the language demands on students from an integrated science and language perspective. The data were generated during a sequence of 9 lessons on the topic of "States of Matter" in a Grade 7 classroom (12- to 13-year-old students). We identify the language demands…
Leech, Geoffrey; Wilson, Andrew (all of Lancaster University)
Word Frequencies in Written and Spoken English is a landmark volume in the development of vocabulary frequency studies. Whereas previous books have in general given frequency information about the written language only, this book provides information on both speech and writing. It not only gives information about the language as a whole, but also about the differences between spoken and written English, and between different spoken and written varieties of the language. The frequencies are derived from a wide ranging and up-to-date corpus of English: the British Na
Pons, Francisco; Lawson, J.; Harris, P.; Rosnay, M. de
Over the last two decades, it has been established that children's emotion understanding changes as they develop. Recent studies have also begun to address individual differences in children's emotion understanding. The first goal of this study was to examine the development of these individual differences across a wide age range with a test assessing nine different components of emotion understanding. The second goal was to examine the relation between language ability and individual differences in emotion understanding. Eighty children ranging in age from 4 to 11 years were tested. Children displayed a clear improvement with age in both their emotion understanding and language ability. In each age group, there were clear individual differences in emotion understanding and language ability. Age and language ability together explained 72% of emotion understanding variance; 20% of this variance...
Tremblay, Pascale; Small, Steven L
A controversial question in cognitive neuroscience is whether comprehension of words and sentences engages brain mechanisms specific for decoding linguistic meaning or whether language comprehension occurs through more domain-general sensorimotor processes. Accumulating behavioral and neuroimaging evidence suggests a role for cortical motor and premotor areas in passive action-related language tasks, regions that are known to be involved in action execution and observation. To examine the involvement of these brain regions in language and nonlanguage tasks, we used functional magnetic resonance imaging (fMRI) on a group of 21 healthy adults. During the fMRI session, all participants 1) watched short object-related action movies, 2) looked at pictures of man-made objects, and 3) listened to and produced short sentences describing object-related actions and man-made objects. Our results are among the first to reveal, in the human brain, a functional specialization within the ventral premotor cortex (PMv) for observing actions and for observing objects, and a different organization for processing sentences describing actions and objects. These findings argue against the strongest version of the simulation theory for the processing of action-related language.
Hannagan, Thomas; Magnuson, James S.; Grainger, Jonathan
How do we map the rapid input of spoken language onto phonological and lexical representations over time? Attempts at psychologically-tractable computational models of spoken word recognition tend either to ignore time or to transform the temporal input into a spatial representation. TRACE, a connectionist model with broad and deep coverage of speech perception and spoken word recognition phenomena, takes the latter approach, using exclusively time-specific units at every level of representation. TRACE reduplicates featural, phonemic, and lexical inputs at every time step in a large memory trace, with rich interconnections (excitatory forward and backward connections between levels and inhibitory links within levels). As the length of the memory trace is increased, or as the phoneme and lexical inventory of the model is increased to a realistic size, this reduplication of time- (temporal position) specific units leads to a dramatic proliferation of units and connections, begging the question of whether a more efficient approach is possible. Our starting point is the observation that models of visual object recognition—including visual word recognition—have grappled with the problem of spatial invariance, and arrived at solutions other than a fully-reduplicative strategy like that of TRACE. This inspires a new model of spoken word recognition that combines time-specific phoneme representations similar to those in TRACE with higher-level representations based on string kernels: temporally independent (time invariant) diphone and lexical units. This reduces the number of necessary units and connections by several orders of magnitude relative to TRACE. Critically, we compare the new model to TRACE on a set of key phenomena, demonstrating that the new model inherits much of the behavior of TRACE and that the drastic computational savings do not come at the cost of explanatory power. PMID:24058349
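The time-invariant diphone representation described in this abstract can be illustrated with a minimal sketch. The code below shows the general string-kernel idea (a word as a bag of ordered phoneme pairs, independent of absolute temporal position), not the actual implementation of the model the authors propose; all function names and the phoneme encoding are invented for the illustration:

```python
from itertools import combinations
from collections import Counter

def diphone_vector(phonemes, open_diphones=False):
    """Encode a word as a time-invariant bag of diphones.

    With open_diphones=True, all ordered phoneme pairs are counted
    (a simple string-kernel-style representation); otherwise only
    adjacent pairs are used.
    """
    if open_diphones:
        pairs = combinations(phonemes, 2)  # ordered pairs, gaps allowed
    else:
        pairs = zip(phonemes, phonemes[1:])  # adjacent pairs only
    return Counter(pairs)

def similarity(word_a, word_b, open_diphones=False):
    """Dot product of two diphone count vectors (an unnormalized kernel)."""
    va = diphone_vector(word_a, open_diphones)
    vb = diphone_vector(word_b, open_diphones)
    return sum(va[p] * vb[p] for p in va)
```

Because the representation discards absolute position, "cat" heard early or late in an utterance maps to the same vector, which is the source of the unit savings relative to TRACE's reduplicated time-specific units.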
Bozorgian, Hossein; Pillay, Hitendra
Listening used in language teaching refers to a complex process that allows us to understand spoken language. The current study, conducted in Iran with an experimental design, investigated the effectiveness of teaching listening strategies delivered in L1 (Persian) and its effect on listening comprehension in L2. Five listening strategies:…
Boulet, J R; van Zanten, M; McKinley, D W; Gary, N E
The purpose of this study was to gather additional evidence for the validity and reliability of spoken English proficiency ratings provided by trained standardized patients (SPs) in a high-stakes clinical skills examination. Over 2500 candidates who took the Educational Commission for Foreign Medical Graduates' (ECFMG) Clinical Skills Assessment (CSA) were studied. The CSA consists of 10 or 11 timed clinical encounters. Standardized patients evaluate spoken English proficiency and interpersonal skills in every encounter. Generalizability theory was used to estimate the consistency of spoken English ratings. Validity coefficients were calculated by correlating summary English ratings with CSA scores and other external criterion measures. Mean spoken English ratings were also compared by various candidate background variables. The reliability of the spoken English ratings, based on 10 independent evaluations, was high. The magnitudes of the associated variance components indicated that the evaluation of a candidate's spoken English proficiency is unlikely to be affected by the choice of cases or SPs used in a given assessment. Proficiency in spoken English was related to native language (English versus other) and scores from the Test of English as a Foreign Language (TOEFL). The pattern of the relationships, both within assessment components and with external criterion measures, suggests that valid measures of spoken English proficiency are obtained. This result, combined with the high reproducibility of the ratings over encounters and SPs, supports the use of trained SPs to measure spoken English skills in a simulated medical environment.
Advances in spoken corpora analysis have brought about new insights into language pedagogy and have led to an awareness of the characteristics of spoken language. Current findings have shown that the grammar of spoken language is different from that of written language. However, most listening and speaking materials are concocted based on written grammar and lack core spoken language features. The aim of the present study was to explore whether awareness of spoken grammar features could affect learners' comprehension of real-life conversations. To this end, 45 university students in two intact classes participated in a listening course employing corpus-based materials. The instruction of the spoken grammar features to the experimental group was done overtly through awareness-raising tasks, whereas the control group, though exposed to the same materials, was not provided with such tasks for learning the features. The results of the independent-samples t tests revealed that the learners in the experimental group comprehended everyday conversations much better than those in the control group. Additionally, the highly positive views of spoken grammar held by the learners, which were elicited by means of a retrospective questionnaire, were generally comparable to those reported in the literature.
Behrns, Ingrid; Wengelin, Asa; Broberg, Malin; Hartelius, Lena
The aim of the present study was to explore how a personal narrative told by a group of eight persons with aphasia differed between written and spoken language, and to compare this with findings from 10 participants in a reference group. The stories were analysed through holistic assessments made by 60 participants without experience of aphasia…
Ordelman, Roeland J.F.; van Hessen, Adrianus J.; de Jong, Franciska M.G.
In this paper, ongoing work concerning the language modelling and lexicon optimization of a Dutch speech recognition system for Spoken Document Retrieval is described: the collection and normalization of a training data set and the optimization of our recognition lexicon. Effects on lexical coverage
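The lexical coverage referred to in the abstract above is simply the proportion of running words in a text that the recognition lexicon contains (its complement is the out-of-vocabulary rate). A minimal sketch with illustrative names and toy data, not the tooling of the Dutch system itself:

```python
def lexical_coverage(corpus_tokens, lexicon):
    """Return in-vocabulary coverage and the OOV (out-of-vocabulary) rate.

    corpus_tokens: iterable of word tokens from the evaluation text
    lexicon: set of words in the recognition lexicon
    """
    tokens = list(corpus_tokens)
    if not tokens:
        return 0.0, 0.0
    in_vocab = sum(1 for t in tokens if t in lexicon)
    coverage = in_vocab / len(tokens)
    return coverage, 1.0 - coverage
```

In practice, lexicon optimization trades off coverage against lexicon size: adding frequent words raises coverage quickly, while rare words contribute little but enlarge the recognizer's search space.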
Kobayashi, Yuichiro; Abe, Mariko
The purpose of the present study is to assess second language (L2) spoken English using automated scoring techniques. Automated scoring aims to classify a large set of learners' oral performance data into a small number of discrete oral proficiency levels. In automated scoring, objectively measurable features such as the frequencies of lexical and…
de Jong, Franciska M.G.; Heeren, W.F.L.; van Hessen, Adrianus J.; Ordelman, Roeland J.F.; Nijholt, Antinus; Ruiz Miyares, L.; Alvarez Silva, M.R.
Archival practice is shifting from the analogue to the digital world. A specific subset of heritage collections that impose interesting challenges for the field of language and speech technology are spoken word archives. Given the enormous backlog at audiovisual archives of unannotated materials and
Krahmer, E.J.; Swerts, M.G.J.; Theune, M.; Weegels, M.F.
Given the state of the art of current language and speech technology, errors are unavoidable in present-day spoken dialogue systems. Therefore, one of the main concerns in dialogue design is how to decide whether or not the system has understood the user correctly. In human-human communication,
Richard K. Payne
The question motivating this essay is how tantric Buddhist practitioners in Japan understood language such as to believe that mantra, dhāraṇī, and related forms are efficacious. “Extraordinary language” is introduced as a cover term for these several similar language uses found in tantric Buddhist practices in Japan. The essay proceeds to a critical examination of Anglo-American philosophy of language to determine whether the concepts, categories, and concerns of that field can contribute to the analysis and understanding of extraordinary language. However, that philosophy of language does not contribute to this analysis, as it is constrained by its continuing focus on its founding concepts, dating particularly from the work of Frege. Comparing it to Indic thought regarding language reveals a distinct mismatch, further indicating the limiting character of the philosophy of language. The analysis then turns to examine two other explanations of tantric language use found in religious studies literature: magical language and performative language. These also, however, prove to be unhelpful. While the essay is primarily critical, one candidate for future constructive study is historical pragmatics, as suggested by Ronald Davidson. The central place of extraordinary language indicates that Indic reflections on the nature of language informed tantric Buddhist practice in Japan and are not simply cultural baggage.
Miller, Amanda C; Keenan, Janice M
This study replicated and extended a phenomenon in the text memory literature referred to as the centrality deficit (Miller & Keenan, Annals of Dyslexia 59:99-113, 2009). It examined how reading in a foreign language (L2) affects one's text representation and ability to recall the most important information. Readers recalled a greater proportion of central than of peripheral ideas, regardless of whether reading in their native language (L1) or a foreign language (L2). Nonetheless, the greatest deficit in participants' L2 recalls, as compared with L1 recalls, was on the central, rather than the peripheral, information. This centrality deficit appears to stem from resources being diverted from comprehension when readers have to devote more cognitive resources to lower level processes (e.g., L2 word identification and syntactic processing), because the deficit was most evident among readers who had lower L2 proficiency. Prior knowledge (PK) of the passage topic helped compensate for the centrality deficit. Readers with less L2 proficiency who did not have PK of the topic displayed a centrality deficit, relative to their L1 recall, but this deficit dissipated when they did possess PK.
Sumner, Meghan; Kim, Seung Kyung; King, Ed; McGowan, Kevin B
Spoken words are highly variable. A single word may never be uttered the same way twice. As listeners, we regularly encounter speakers of different ages, genders, and accents, increasing the amount of variation we face. How listeners understand spoken words as quickly and adeptly as they do despite this variation remains an issue central to linguistic theory. We propose that learned acoustic patterns are mapped simultaneously to linguistic representations and to social representations. In doing so, we illuminate a paradox that results in the literature from, we argue, the focus on representations and the peripheral treatment of word-level phonetic variation. We consider phonetic variation more fully and highlight a growing body of work that is problematic for current theory: words with different pronunciation variants are recognized equally well in immediate processing tasks, while an atypical, infrequent, but socially idealized form is remembered better in the long-term. We suggest that the perception of spoken words is socially weighted, resulting in sparse, but high-resolution clusters of socially idealized episodes that are robust in immediate processing and are more strongly encoded, predicting memory inequality. Our proposal includes a dual-route approach to speech perception in which listeners map acoustic patterns in speech to linguistic and social representations in tandem. This approach makes novel predictions about the extraction of information from the speech signal, and provides a framework with which we can ask new questions. We propose that language comprehension, broadly, results from the integration of both linguistic and social information.
Scott, Jessica C; Henderson, Annette M E
Object labels are valuable communicative tools because their meanings are shared among the members of a particular linguistic community. The current research was conducted to investigate whether 13-month-old infants appreciate that object labels should not be generalized across individuals who have been shown to speak different languages. Using a visual habituation paradigm, Experiment 1 tested whether infants would generalize a new object label that was taught to them by a speaker of a foreign language to a speaker from the infant's own linguistic group. The results suggest that infants do not expect 2 individuals who have been shown to speak different languages to use the same label to refer to the same object. The results of Experiment 2 reveal that infants do not generalize a new object label that was taught to them by a speaker of their native language to an individual who had been shown to speak a foreign language. These findings offer the first evidence that by the end of the 1st year of life, infants are sensitive to the fact that the conventional nature of language is constrained by the language that a person has been shown to speak.
Huerta, Margarita; Tong, Fuhui; Irby, Beverly J.; Lara-Alecio, Rafael
The authors of this quantitative study measured and compared the academic language development and conceptual understanding of fifth-grade economically disadvantaged English language learners (ELL), former ELLs, and native English-speaking (ES) students as reflected in their science notebook scores. Using an instrument they developed, the authors…
Elson, Raymond J.; O'Callaghan, Susanne; Walker, John P.; Williams, Robert
Students rely on rote knowledge to learn accounting concepts. However, this approach does not allow them to understand the metalanguage of accounting. Metalanguage is simply the concepts and terms that are used in a profession and are easily understood by its users. Terms such as equity, assets, and balance sheet are part of the accounting…
The starting-point of this thesis is the hypothesis that, from at least 22 months old, children who watch movies (i.e. any moving-image media) may be learning how to make sense of them. Rather than looking for evidence of precursors to further learning (such as language, literacy or technological skills) or for the risks or benefits that movie-watching may entail, the thesis argues that viewing behaviour provides enough evidence about the practices and processes through which children of this...
Teachers’ practical knowledge is considered as teachers’ general knowledge, beliefs and thinking (Borg, 2003), which can be traced in teachers’ practices (Connelly & Clandinin, 1988) and shaped by various background sources (Borg, 2003; Grossman, 1990; Meijer, Verloop, and Beijaard, 1999). This paper initially discusses how language teachers are influenced by three background sources: teachers’ prior language learning experiences, prior teaching experience, and professional coursework in pre- and in-service education. By drawing its data from the author’s longitudinal study, it also presents the findings of a cross-case theme that emerged from the investigation of three English as a foreign language (EFL) teachers’ prior language learning experiences. The paper also discusses how participation in studies on teachers’ knowledge raises teachers’ own awareness while it informs the research.
This paper presents the results of research on the peculiarities of syntactic development, as an element of language structure at the grammatical level, in children suffering from developmental dysphasia after completing many years of speech pathology treatment. The syntactic level at younger school age was studied by assessing language competence in the accomplishment of communicative sentences with subordinate clauses. The research was performed on a sample of school-age children in regular primary schools in Belgrade. The sample comprised 160 respondents who were divided into two groups: target and comparative. The target group consisted of 60 respondents (children suffering from developmental dysphasia after completing many years of speech pathology treatment), and the comparative group consisted of 100 respondents from the regular primary school "Gavrilo Princip" in Zemun. The research results show that the grammatical development of children suffering from developmental dysphasia takes place at a considerably slower rate and entails substantially more difficulties in accomplishing predication in subordinate clauses. The paper discusses the consequences which difficulties in grammatical development can have on school achievement.
Pearson, Barbara Zurer; Conner, Tracy; Jackson, Janice E
Language difference among speakers of African American English (AAE) has often been considered language deficit, based on a lack of understanding about the AAE variety. Following Labov (1972), Wolfram (1969), Green (2002, 2011), and others, we define AAE as a complex rule-governed linguistic system and briefly discuss language structures that it shares with general American English (GAE) and others that are unique to AAE. We suggest ways in which mistaken ideas about the language variety add to children's difficulties in learning the mainstream dialect and, in effect, deny them the benefits of their educational programs. We propose that a linguistically informed approach that highlights correspondences between AAE and the mainstream dialect and trains students and teachers to understand language varieties at a metalinguistic level creates environments that support the academic achievement of AAE-speaking students. Finally, we present 3 program types that are recommended for helping students achieve the skills they need to be successful in multiple linguistic environments.
Bunta, Ferenc; Douglas, Michael; Dickson, Hanna; Cantu, Amy; Wickesberg, Jennifer; Gifford, René H.
Background: There is a critical need to better understand speech and language development in bilingual children who are learning two spoken languages and who use cochlear implants (CIs) and hearing aids (HAs). The paucity of knowledge in this area poses a significant barrier to providing maximal communicative outcomes to a growing number of children who have…
Weimer, Amy A; Gasquoine, Philip G
Belief reasoning and emotion understanding were measured among 102 Mexican American bilingual children ranging from 4 to 7 years old. All children were tested in English and Spanish after ensuring minimum comprehension in each language. Belief reasoning was assessed using 2 false and 1 true belief tasks. Emotion understanding was measured using subtests from the Test for Emotion Comprehension. The influence of family background variables of yearly income, parental education level, and number of siblings on combined Spanish and English vocabulary, belief reasoning, and emotion understanding was assessed by regression analyses. Age and emotion understanding predicted belief reasoning. Vocabulary and belief reasoning predicted emotion understanding. When the sample was divided into language-dominant and balanced bilingual groups on the basis of language proficiency difference scores, there were no significant differences on belief reasoning or emotion understanding. Language groups were demographically similar with regard to child age, parental educational level, and family income. Results suggest Mexican American language-dominant and balanced bilinguals develop belief reasoning and emotion understanding similarly.
Yu, Ping; Pan, Yingxin; Li, Chen; Zhang, Zengxiu; Shi, Qin; Chu, Wenpei; Liu, Mingzhuo; Zhu, Zhiting
Oral production is an important part of English learning. The lack of a language environment with efficient instruction and feedback is a major obstacle to non-native speakers' improvement of their spoken English. A computer-assisted language learning system can provide many potential benefits to language learners. It allows adequate instructions and instant…
Vývoj sociální kognice českých neslyšících dětí — uživatelů českého znakového jazyka a uživatelů mluvené češtiny: adaptace testové baterie : Development of Social Cognition in Czech Deaf Children — Czech Sign Language Users and Czech Spoken Language Users: Adaptation of a Test Battery
The present paper describes the process of adapting a set of tasks for testing theory-of-mind competencies, the Theory of Mind Task Battery, for use with Czech Deaf children — both users of Czech Sign Language and users of spoken Czech.
Philip N Stoop
The Consumer Protection Act 68 of 2008 came into effect on 1 April 2011. The purpose of this Act is, among other things, to promote fairness, openness and respectable business practice between the suppliers of goods or services and the consumers of such goods and services. In consumer protection legislation fairness is usually approached from two directions, namely substantive and procedural fairness. Measures aimed at procedural fairness address conduct during the bargaining process and generally aim at ensuring transparency. Transparency in relation to the terms of a contract relates to whether the terms of the contract are accessible, in clear language, well-structured, and cross-referenced, with prominence being given to terms that are detrimental to the consumer or that grant important rights. One measure in the Act aimed at addressing procedural fairness is the right to plain and understandable language. The consumer’s right to be given information in plain and understandable language, as it is expressed in section 22, is embedded under the umbrella right of information and disclosure in the Act. Section 22 requires that notices, documents or visual representations that are required in terms of the Act or other law are to be provided in plain and understandable language as well as in the prescribed form, where such a prescription exists. In the analysis of the concept “plain and understandable language” the following aspects are considered in this article: the development of plain language measures in Australia and the United Kingdom; the structure and purpose of section 22; the documents that must be in plain language; the definition of plain language; the use of official languages in consumer contracts; and plain language guidelines (based on the law of the states of Pennsylvania and Connecticut in the United States of America).
Polišenská, Kamila; Kapalková, Svetlana; Novotková, Monika
The study aims to describe receptive language skills in children with intellectual disability (ID) and to contribute to the debate on deviant versus delayed language development. This is the 1st study of receptive skills in children with ID who speak a Slavic language, providing insight into how language development is affected by disability and also language typology. Twenty-eight Slovak-speaking children participated in the study (14 children with ID and 14 typically developing [TD] children matched on nonverbal reasoning abilities). The children were assessed by receptive language tasks targeting words, sentences, and stories, and the groups were compared quantitatively and qualitatively. The groups showed similar language profiles, with a better understanding of words, followed by sentences, with the poorest comprehension for stories. Nouns were comprehended better than verbs; sentence constructions also showed a qualitatively similar picture, although some dissimilarities emerged. Verb comprehension was strongly related to sentence comprehension in both groups and related to story comprehension in the TD group only. The findings appear to support the view that receptive language skills follow the same developmental route in children with ID as seen in younger TD children, suggesting that language development is a robust process and does not seem to be differentially affected by ID even when delayed.
Charollais, A; Marret, S; Stumpf, M-H; Lemarchand, M; Delaporte, B; Philip, E; Monom-Diverre; Guillois, B; Datin-Dorriere, V; Debillon, T; Simon, M-J; De Barace, C; Pasquet, F; Saliba, E; Zebhib, R
Clinical and radiological knowledge of language development in the former premature infant compared to the newborn allows us to argue for exploration of the sensorimotor co-factors required for proper language development. There are early representations of the maternal language in the infant's visual, auditory, and sensorimotor areas, activated or stabilized by orofacial and articulatory movements. The functional architecture of language is different for vulnerable children such as premature infants. We have already mentioned the impact of early dysfunction of the facial praxis fine motor skills in this population presenting comprehension disorders. A recent meta-analysis confirms the increasing difficulty of understanding between 3 and 12 years, questioning the quality of the initial linguistic processes. A precise analysis of language, referenced from 3 years of age, should be completed by sensorimotor tests to assess possible constraints in automating neurolinguistic foundations. The usual assessment at this age can exclude sensory disturbances and communication and offers guidance and socialization. However, a recent study shows the ineffectiveness of "language-reinforced immersion" at 2 and 3 years in a population of vulnerable children. The LAMOPRESCO study of language and motor skills in the premature infant (National PHRC 2010) has assessed language and sensorimotor skills of preterm-born (theory of speech perception." Early and accurate assessment of language and the patient's constraints should differentiate and specify management strategies for all children, whatever their background and pathologies. Copyright © 2013 Elsevier Masson SAS. All rights reserved.
Two studies explored young children’s understanding of the role of shared language in communication by investigating how monolingual English-speaking children interact with an English speaker, a Spanish speaker, and a bilingual experimenter who spoke both English and Spanish. When the bilingual experimenter spoke in Spanish or English to request objects, four-year-old children, but not three-year-olds, used her language choice to determine whom she addressed (e.g., requests in Spanish were directed to the Spanish speaker). Importantly, children used this cue – language choice – only in a communicative context. The findings suggest that by four years, monolingual children recognize that speaking the same language enables successful communication, even when that language is unfamiliar to them. Three-year-old children’s failure to make this distinction suggests that this capacity likely undergoes significant development in early childhood, although other capacities might also be at play.
Carmichael, Lesley; Wright, Richard; Wassink, Alicia Beckford
We are developing a novel, searchable corpus as a research tool for investigating phonetic and phonological phenomena across various speech styles. Five speech styles have been well studied independently in previous work: reduced (casual), careful (hyperarticulated), citation (reading), Lombard effect (speech in noise), and ``motherese'' (child-directed speech). Few studies to date have collected a wide range of styles from a single set of speakers, and fewer yet have provided publicly available corpora. The pilot corpus includes recordings of (1) a set of speakers participating in a variety of tasks designed to elicit the five speech styles, and (2) casual peer conversations and wordlists to illustrate regional vowels. The data include high-quality recordings and time-aligned transcriptions linked to text files that can be queried. Initial measures drawn from the database provide comparison across speech styles along the following acoustic dimensions: MLU (changes in unit duration); relative intra-speaker intensity changes (mean and dynamic range); and intra-speaker pitch values (minimum, maximum, mean, range). The corpus design will allow for a variety of analyses requiring control of demographic and style factors, including hyperarticulation variety, disfluencies, intonation, discourse analysis, and detailed spectral measures.
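The intra-speaker summary measures listed above (minimum, maximum, mean, dynamic range) reduce to descriptive statistics over a per-frame acoustic track. A minimal sketch, assuming tracks arrive as Python lists with `None` marking unvoiced or undefined frames — an assumption for illustration, not the corpus's actual query interface:

```python
def track_summary(values):
    """Summarize a per-frame acoustic track (e.g., F0 in Hz or intensity in dB).

    Returns min, max, mean, and dynamic range, skipping unvoiced or
    undefined frames marked as None; returns None for an empty track.
    """
    voiced = [v for v in values if v is not None]
    if not voiced:
        return None
    lo, hi = min(voiced), max(voiced)
    return {
        "min": lo,
        "max": hi,
        "mean": sum(voiced) / len(voiced),
        "range": hi - lo,
    }
```

Comparing such summaries across the five elicited styles (e.g., a wider pitch range in child-directed speech, higher mean intensity under Lombard conditions) is the kind of analysis the time-aligned corpus design is meant to support.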
In Monitoring Adaptive Spoken Dialog Systems, authors Alexander Schmitt and Wolfgang Minker investigate statistical approaches that allow for the recognition of negative dialog patterns in Spoken Dialog Systems (SDS). The presented stochastic methods allow flexible, portable and accurate use. Beginning with the foundations of machine learning and pattern recognition, this monograph examines how frequently users show negative emotions in spoken dialog systems and develops novel approaches to speech-based emotion recognition using a hybrid approach to model emotions. The authors make use of statistical methods based on acoustic, linguistic and contextual features to examine the relationship between the interaction flow and the occurrence of emotions, using non-acted recordings of several thousand real users from commercial and non-commercial SDS. Additionally, the authors present novel statistical methods that spot problems within a dialog based on interaction patterns. The approaches enable future SDS to offer m...
Mental image directed semantic theory (MIDST) has proposed an omnisensory mental image model and its description language Lmd. This language is designed to represent and compute human intuitive knowledge of space and can provide multimedia expressions with intermediate semantic descriptions in predicate logic. It is hypothesized that such knowledge and semantic descriptions are controlled by human attention toward the world and are therefore subjective to each individual. This paper describes the Lmd expression of human subjective knowledge of space and its application to aware computing in cross-media operation between linguistic and pictorial expressions, as spatial language understanding.
The only book on the market to specifically address its audience, Recording Voiceover is the comprehensive guide for engineers looking to understand the aspects of capturing the spoken word. Discussing all phases of the recording session, Recording Voiceover addresses everything from microphone recommendations for voice recording to pre-production considerations, including setting up the studio, working with and directing the voice talent, and strategies for reducing or eliminating distracting noise elements found in human speech. Recording Voiceover features in-depth, specific recommendations f
Brandt, Anthony; Gebrian, Molly; Slevc, L. Robert
Language is typically viewed as fundamental to human intelligence. Music, while recognized as a human universal, is often treated as an ancillary ability – one dependent on or derivative of language. In contrast, we argue that it is more productive from a developmental perspective to describe spoken language as a special type of music. A review of existing studies presents a compelling case that musical hearing and ability are essential to language acquisition. In addition, we challenge the prevailing view that music cognition matures more slowly than language and is more difficult; instead, we argue that music learning matches the speed and effort of language acquisition. We conclude that music merits a central place in our understanding of human development. PMID:22973254
Sterzuk, Andrea; Nelson, Cynthia A.
This article presents a qualitative study of five monolingual teachers' understandings of the linguistic repertoires of their multilingual students. These teachers deliver the Saskatchewan provincial curricula in English to Hutterite colony students who are users of three languages: (a) spoken Hutterisch as a home and community language, (b)…
Swaab, T.Y.; Brown, C.; Hagoort, P.
In this study the N400 component of the event-related potential was used to investigate spoken sentence understanding in Broca's and Wernicke's aphasics. The aim of the study was to determine whether spoken sentence comprehension problems in these patients might result from a deficit in the on-line
I Nengah Sudipa
This article investigates the spoken ability of German students using Bahasa Indonesia (BI). They had studied it for six weeks in the IBSN Program at Udayana University, Bali, Indonesia. The data were collected when the students sat for the mid-term oral test and were further analyzed with reference to the standard usage of BI. The result suggests that most students managed to express several concepts related to (1) LOCATION; (2) TIME; (3) TRANSPORT; (4) PURPOSE; (5) TRANSACTION; (6) IMPRESSION; (7) REASON; (8) FOOD AND BEVERAGE; and (9) NUMBER AND PERSON. The only problem a few students might encounter is due to influence from their own language system, called interference, especially in word order.
Chen, Wei; Mostow, Jack; Aist, Gregory
Free-form spoken input would be the easiest and most natural way for young children to communicate to an intelligent tutoring system. However, achieving such a capability poses a challenge both to instruction design and to automatic speech recognition. To address the difficulties of accepting such input, we adopt the framework of predictable…
Mast, Marion; Maier, Elisabeth; Schmitz, Birte
This report describes how spoken language turns are segmented into utterances in the framework of the verbmobil project. The problem of segmenting turns is directly related to the task of annotating a discourse with dialogue act information: an utterance can be characterized as a stretch of dialogue that is attributed one dialogue act. Unfortunately, this rule in many cases is insufficient and many doubtful cases remain. We tried to at least reduce the number of unclear cases by providing a n...
The aim of this article is to argue that the use of language in liturgy during worship services should be meaningful to contribute to persuasion in the lives of the participants in liturgy. Language is a prominent medium to convey meaning. In fact, the essence of liturgy that has to lead to the liturgy of life is in itself a meaningful act. The question regarding the meaning of worship services that people often raise is another reason why research on the influence of liturgy is crucial. This investigation is anchored in research on the importance of cognition in persuasive language use to promote attitude change. The research gathers insights from the fields of language philosophy and cognitive psychology. It is clear that the meaning of words in language can never be separated from people’s understanding of the meaning of language. Communication and communion are not opposites. In the normative phase of this investigation, perspectives from Romans 12 are offered. The renewal of the mind that leads to discernment of God’s will must also lead to a new cognition (understanding or phronesis of each believer’s place within the Body of Christ. The insights gained from language philosophy, cognitive psychology and the normative grounding make it evident that people always try to make sense of what they are experiencing and of what they are observing. The attempt to understand necessitates further reflection on the importance of cognition. Finally, practical theological perspectives are offered to indicate that cognition is important to create a meaningful liturgy. This cognition is anchored in God’s presence during worship services and, therefore, it requires meaningful words from liturgists.
Nyachwaya, James M.
The objective of this study was to examine college general chemistry students' conceptual understanding and language fluency in the context of the topic of acids and bases. 115 students worked in groups of 2-4 to complete an activity on conductometry, where they were given a scenario in which a titration of sodium hydroxide solution and dilute…
facilities. BBN is developing a series of increasingly sophisticated natural language understanding systems which will serve as an integrated interface...Haas, A.R. A Syntactic Theory of Belief and Action. Artificial Intelligence. 1986. Forthcoming.  Hinrichs, E. Temporale Anaphora im Englischen
Fedurek, Pawel; Slocombe, Katie E
Language is a uniquely human trait, and questions of how and why it evolved have been intriguing scientists for years. Nonhuman primates (primates) are our closest living relatives, and their behavior can be used to estimate the capacities of our extinct ancestors. As humans and many primate species rely on vocalizations as their primary mode of communication, the vocal behavior of primates has been an obvious target for studies investigating the evolutionary roots of human speech and language. By studying the similarities and differences between human and primate vocalizations, comparative research has the potential to clarify the evolutionary processes that shaped human speech and language. This review examines some of the seminal and recent studies that contribute to our knowledge regarding the link between primate calls and human language and speech. We focus on three main aspects of primate vocal behavior: functional reference, call combinations, and vocal learning. Studies in these areas indicate that despite important differences, primate vocal communication exhibits some key features characterizing human language. They also indicate, however, that some critical aspects of speech, such as vocal plasticity, are not shared with our primate cousins. We conclude that comparative research on primate vocal behavior is a very promising tool for deepening our understanding of the evolution of human speech and language, but much is still to be done as many aspects of monkey and ape vocalizations remain largely unexplored.
Moreno, Iván; de Vega, Manuel; León, Inmaculada
The mu rhythms (8-13 Hz) and the beta rhythms (15-30 Hz) of the EEG are observed at the central electrodes (C3, Cz and C4) in resting states, and become suppressed when participants perform a manual action or when they observe another's action. This has led researchers to consider these rhythms electrophysiological markers of motor neuron activity in humans. This study tested whether the comprehension of action language, unlike abstract language, modulates mu and low beta rhythms (15-20 Hz) in a similar way to the observation of real actions. The log-ratios were calculated for each oscillatory band between each condition and baseline resting periods. The results indicated that both action language and action videos caused mu and beta suppression (negative log-ratios), whereas abstract language did not, confirming the hypothesis that understanding action language activates motor networks in the brain. In other words, the resonance of motor areas associated with action language is compatible with the embodiment approach to linguistic meaning. Copyright © 2013 Elsevier Inc. All rights reserved.
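The log-ratio measure used above (band power in a condition relative to a resting baseline, with negative values indicating suppression) can be sketched as follows. The signals and the crude power estimate are synthetic illustrations, not the study's EEG data or analysis pipeline.

```python
# Minimal sketch of the log-ratio suppression measure: band power in a task
# window relative to a resting baseline. Synthetic samples, not EEG data.
import math

def band_power(samples):
    """Mean squared amplitude as a crude power estimate for a band-filtered signal."""
    return sum(s * s for s in samples) / len(samples)

def log_ratio(task, baseline):
    """log(power_task / power_baseline): negative values indicate suppression."""
    return math.log(band_power(task) / band_power(baseline))

baseline = [1.0, -1.0, 1.0, -1.0]   # power 1.0
task = [0.5, -0.5, 0.5, -0.5]       # power 0.25 -> suppression
ratio = log_ratio(task, baseline)
```

In a real analysis the samples would first be band-pass filtered into the mu or beta range before the power estimate is taken.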
Nakai, Satsuki; Lindsay, Shane; Ota, Mitsuhiko
When both members of a phonemic contrast in L2 (second language) are perceptually mapped to a single phoneme in one's L1 (first language), L2 words containing a member of that contrast can spuriously activate L2 words in spoken-word recognition. For example, upon hearing cattle, Dutch speakers of English are reported to experience activation…
Zhao, Jingjing; Guo, Jingjing; Zhou, Fengying; Shu, Hua
Evidence from event-related potential (ERP) analyses of English spoken words suggests that the time course of English word recognition in monosyllables is cumulative. Different types of phonological competitors (i.e., rhymes and cohorts) modulate the temporal grain of ERP components differentially (Desroches, Newman, & Joanisse, 2009). The time course of Chinese monosyllabic spoken word recognition could differ from that of English due to the differences in syllable structure between the two languages (e.g., lexical tones). The present study investigated the time course of Chinese monosyllabic spoken word recognition using ERPs to record brain responses online while subjects listened to spoken words. During the experiment, participants were asked to compare a target picture with a subsequent picture by judging whether or not these two pictures belonged to the same semantic category. The spoken word was presented between the two pictures, and participants were not required to respond during its presentation. We manipulated phonological competition by presenting spoken words that either matched or mismatched the target picture in one of the following four ways: onset mismatch, rime mismatch, tone mismatch, or syllable mismatch. In contrast to the English findings, our findings showed that the three partial mismatches (onset, rime, and tone mismatches) equally modulated the amplitudes and time courses of the N400 (a negative component that peaks about 400 ms after the spoken word), whereas the syllable-mismatched words elicited an earlier and stronger N400 than the three partially mismatched words. The results shed light on the important role of syllable-level awareness in Chinese spoken word recognition and also imply that the recognition of Chinese monosyllabic words might rely more on global similarity of the whole syllable structure or syllable-based holistic processing rather than phonemic segment-based processing. We interpret the differences in spoken word
It is difficult to find the exact number of other languages spoken besides Dutch in the Netherlands. A study showed that a total of 96 other languages are spoken by students attending Dutch primary and secondary schools. The variety of languages spoken shows the growth of linguistic diversity in the
Blake, Helen L; Mcleod, Sharynne; Verdon, Sarah; Fuller, Gail
Proficiency in the language of the country of residence has implications for an individual's level of education, employability, income and social integration. This paper explores the relationship between the spoken English proficiency of residents of Australia on census day and their educational level, employment and income, to provide insight into multilingual speakers' ability to participate in Australia as an English-dominant society. Data presented are derived from two Australian censuses (2006 and 2011) of over 19 million people. The proportion of Australians who reported speaking a language other than English at home was 21.5% in the 2006 census and 23.2% in the 2011 census. Multilingual speakers who also spoke English very well were more likely to have post-graduate qualifications, full-time employment and high income than monolingual English-speaking Australians. However, multilingual speakers who reported speaking English not well were much less likely to have post-graduate qualifications or full-time employment than monolingual English-speaking Australians. These findings provide insight into the socioeconomic and educational profiles of multilingual speakers, which will inform professionals such as speech-language pathologists who provide them with support. The results indicate spoken English proficiency may impact participation in Australian society. These findings challenge the "monolingual mindset" by demonstrating that outcomes for multilingual speakers in education, employment and income are higher than for monolingual speakers.
Hughes, Julian C
As dementia progresses problems of understanding emerge. Eventually spoken language can be lost. And yet, even into the severer stages of dementia, close carers can often understand the person in a variety of ways. Loss of language is not just a practical problem. It raises philosophical issues too. As Wittgenstein suggested, understanding entails grasping a form of life. Our understanding of agitated, pacing behaviour is similarly based on a unique history, on culture, on context. Hence, a philosophy gestures at the foundations of care. There is the potential to feel the person's meaning, even when it cannot be spoken. This is not simply by means of an alternative to language. The philosophy suggests that our engagement with the person is through and through. Understanding anyone is more like an aesthetic judgement than a cognitive act.
Spoken words are highly variable. A single word may never be uttered the same way twice. As listeners, we regularly encounter speakers of different ages, genders, and accents, increasing the amount of variation we face. How listeners understand spoken words as quickly and adeptly as they do despite this variation remains an issue central to linguistic theory. We propose that learned acoustic patterns are mapped simultaneously to linguistic representations and to social representations. In doing so, we illuminate a paradox that results in the literature from, we argue, the focus on representations and the peripheral treatment of word-level phonetic variation. We consider phonetic variation more fully and highlight a growing body of work that is problematic for current theory: Words with different pronunciation variants are recognized equally well in immediate processing tasks, while an atypical, infrequent, but socially-idealized form is remembered better in the long-term. We suggest that the perception of spoken words is socially-weighted, resulting in sparse, but high-resolution clusters of socially-idealized episodes that are robust in immediate processing and are more strongly encoded, predicting memory inequality. Our proposal includes a dual-route approach to speech perception in which listeners map acoustic patterns in speech to linguistic and social representations in tandem. This approach makes novel predictions about the extraction of information from the speech signal, and provides a framework with which we can ask new questions. We propose that language comprehension, broadly, results from the integration of both linguistic and social information.
Fang, Yuxing; Chen, Quanjing; Lingnau, Angelika; Han, Zaizhu; Bi, Yanchao
The observation of other people's actions recruits a network of areas including the inferior frontal gyrus (IFG), the inferior parietal lobule (IPL), and posterior middle temporal gyrus (pMTG). These regions have been shown to be activated through both visual and auditory inputs. Intriguingly, previous studies found no engagement of IFG and IPL for deaf participants during non-linguistic action observation, leading to the proposal that auditory experience or sign language usage might shape the functionality of these areas. To understand which variables induce plastic changes in areas recruited during the processing of other people's actions, we examined the effects of tasks (action understanding and passive viewing) and effectors (arm actions vs. leg actions), as well as sign language experience in a group of 12 congenitally deaf signers and 13 hearing participants. In Experiment 1, we found a stronger activation during an action recognition task in comparison to a low-level visual control task in IFG, IPL and pMTG in both deaf signers and hearing individuals, but no effect of auditory or sign language experience. In Experiment 2, we replicated the results of the first experiment using a passive viewing task. Together, our results provide robust evidence demonstrating that the response obtained in IFG, IPL, and pMTG during action recognition and passive viewing is not affected by auditory or sign language experience, adding further support for the supra-modal nature of these regions.
Topac, V; Stoicu-Tivadar, V
Patient empowerment is important for increasing the quality of medical care and patients' quality of life. An important obstacle to empowering patients is the language barrier lay patients encounter when accessing medical information. The aim was to design and develop a service that helps increase lay persons' understanding of medical language. The service identifies and explains medical terminology in a given text by annotating the terms in the original text with their definitions. It is based on an original terminology interpretation engine that uses a fuzzy matching dictionary. The service was implemented in two projects: (a) in the server of a tele-care system (TELEASIS), to adapt medical text assigned by medical personnel for the assisted patients; and (b) in a dedicated web site that can adapt the medical language of raw text or existing web pages. The output of the service was evaluated by a group of persons, and the results indicate that such a system can increase the understanding of medical texts. Several design decisions were driven by the evaluation and are being considered for future development. Other tests measuring accuracy and time performance of the fuzzy terminology recognition were performed. Test results revealed good accuracy and excellent time performance. The current version of the service increases the accessibility of medical language by explaining terminology with good accuracy, while allowing the user to easily identify errors, reducing the risk of incorrect terminology recognition.
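The general idea of fuzzy-matching terms against a dictionary and annotating them with definitions can be sketched as below. The glossary, the misspelled input, and the matching threshold are all hypothetical; the TELEASIS engine's actual dictionary and matching algorithm are not described in the text.

```python
# Hypothetical sketch: fuzzy-match words in a text against a small glossary
# and append definitions in brackets. Glossary entries and the cutoff value
# are illustrative assumptions, not the TELEASIS system's real data.
import difflib
import re

GLOSSARY = {
    "hypertension": "abnormally high blood pressure",
    "tachycardia": "abnormally fast heart rate",
}

def annotate(text, cutoff=0.8):
    out = []
    # Split into alternating letter and non-letter runs so punctuation survives.
    for token in re.findall(r"[A-Za-z]+|[^A-Za-z]+", text):
        match = difflib.get_close_matches(token.lower(), GLOSSARY, n=1, cutoff=cutoff)
        if match:
            out.append(f"{token} [{GLOSSARY[match[0]]}]")
        else:
            out.append(token)
    return "".join(out)

# "hypertention" is a deliberate misspelling that the fuzzy match still catches.
annotated = annotate("Patient shows hypertention and fatigue.")
```

The fuzzy cutoff trades recall for precision: lowering it catches worse misspellings but risks the incorrect recognitions the abstract warns about.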
Brentari, Diane; Coppola, Marie
How do languages emerge? What are the necessary ingredients and circumstances that permit new languages to form? Various researchers within the disciplines of primatology, anthropology, psychology, and linguistics have offered different answers to this question depending on their perspective. Language acquisition, language evolution, primate communication, and the study of spoken varieties of pidgin and creoles address these issues, but in this article we describe a relatively new and important area that contributes to our understanding of language creation and emergence. Three types of communication systems that use the hands and body to communicate will be the focus of this article: gesture, homesign systems, and sign languages. The focus of this article is to explain why mapping the path from gesture to homesign to sign language has become an important research topic for understanding language emergence, not only for the field of sign languages, but also for language in general. WIREs Cogn Sci 2013, 4:201-211. doi: 10.1002/wcs.1212 For further resources related to this article, please visit the WIREs website. Copyright © 2012 John Wiley & Sons, Ltd.
The characterization of metonymy as a conceptual tool for guiding inferencing in language has opened a new field of study in cognitive linguistics and pragmatics. To appreciate the value of metonymy for pragmatic inferencing, metonymy should not be viewed as performing only its prototypical referential function. Metonymic mappings are operative in speech acts at the level of reference, predication, proposition and illocution. The aim of this paper is to study the role of metonymy in pragmatic inferencing in spoken discourse in television interviews. Case analyses of authentic utterances classified as illocutionary metonymies, following the pragmatic typology of metonymic functions, are presented. The inferencing processes are facilitated by metonymic connections existing between domains or subdomains in the same functional domain. It has been widely accepted by cognitive linguists that universal human knowledge and embodiment are essential for the interpretation of metonymy. This analysis points to the role of cultural background knowledge in understanding target meanings. All these aspects of metonymic connections are exploited in complex inferential processes in spoken discourse. In most cases, metaphoric mappings are also a part of utterance interpretation.
It has long been speculated whether communication between humans and machines based on natural speech-related cortical activity is possible. Over the past decade, studies have suggested that it is feasible to recognize isolated aspects of speech from neural signals, such as auditory features, phones or one of a few isolated words. However, until now it remained an unsolved challenge to decode continuously spoken speech from the neural substrate associated with speech and language processing. Here, we show for the first time that continuously spoken speech can be decoded into the expressed words from intracranial electrocorticographic (ECoG) recordings. Specifically, we implemented a system, which we call Brain-To-Text, that models single phones, employs techniques from automatic speech recognition (ASR), and thereby transforms brain activity while speaking into the corresponding textual representation. Our results demonstrate that our system achieved word error rates as low as 25% and phone error rates below 50%. Additionally, our approach contributes to the current understanding of the neural basis of continuous speech production by identifying those cortical regions that hold substantial information about individual phones. In conclusion, the Brain-To-Text system described in this paper represents an important step towards human-machine communication based on imagined speech.
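The word and phone error rates quoted above are standard ASR metrics: Levenshtein edit distance between the reference and hypothesis sequences, normalized by reference length. A minimal word error rate (WER) sketch follows; the example sentences are illustrative, not from the Brain-To-Text evaluation.

```python
# Minimal word error rate (WER) sketch: Levenshtein distance over word
# sequences, divided by the number of reference words. Illustrative only.
def wer(reference, hypothesis):
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # substitution or match
    return dp[len(ref)][len(hyp)] / len(ref)

rate = wer("the brain decodes spoken words", "the brain decodes broken words")
```

A phone error rate uses the same computation over phone sequences instead of words.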
Goldin-Meadow, Susan; Brentari, Diane
How does sign language compare with gesture, on the one hand, and spoken language on the other? Sign was once viewed as nothing more than a system of pictorial gestures without linguistic structure. More recently, researchers have argued that sign is no different from spoken language, with all of the same linguistic structures. The pendulum is currently swinging back toward the view that sign is gestural, or at least has gestural components. The goal of this review is to elucidate the relationships among sign language, gesture, and spoken language. We do so by taking a close look not only at how sign has been studied over the past 50 years, but also at how the spontaneous gestures that accompany speech have been studied. We conclude that signers gesture just as speakers do. Both produce imagistic gestures along with more categorical signs or words. Because at present it is difficult to tell where sign stops and gesture begins, we suggest that sign should not be compared with speech alone but should be compared with speech-plus-gesture. Although it might be easier (and, in some cases, preferable) to blur the distinction between sign and gesture, we argue that distinguishing between sign (or speech) and gesture is essential to predict certain types of learning and allows us to understand the conditions under which gesture takes on properties of sign, and speech takes on properties of gesture. We end by calling for new technology that may help us better calibrate the borders between sign and gesture.
Introducing Spoken Dialogue Systems into Intelligent Environments outlines the formalisms of a novel knowledge-driven framework for spoken dialogue management and presents the implementation of a model-based Adaptive Spoken Dialogue Manager (ASDM) called OwlSpeak. The authors have identified three stakeholders that potentially influence the behavior of the ASDM: the user, the SDS, and a complex Intelligent Environment (IE) consisting of various devices, services, and task descriptions. The theoretical foundation of a working ontology-based spoken dialogue description framework, the prototype implementation of the ASDM, and the evaluation activities that are presented as part of this book contribute to the ongoing spoken dialogue research by establishing the fertile ground of model-based adaptive spoken dialogue management. This monograph is ideal for advanced undergraduate students, PhD students, and postdocs as well as academic and industrial researchers and developers in speech and multimodal interactive ...
conversational agent with information exchange disabled until the end of the experiment run. The meaning of the indicator in the top-right of the agent... Human Computer Collaboration at the Edge: Enhancing Collective Situation Understanding with Controlled Natural Language Alun Preece∗, William...email: PreeceAD@cardiff.ac.uk †Emerging Technology Services, IBM United Kingdom Ltd, Hursley Park, Winchester, UK ‡US Army Research Laboratory, Human
Bucks, Gregory Warren
Computers have become an integral part of how engineers complete their work, allowing them to collect and analyze data, model potential solutions and aiding in production through automation and robotics. In addition, computers are essential elements of the products themselves, from tennis shoes to construction materials. An understanding of how computers function, both at the hardware and software level, is essential for the next generation of engineers. Despite the need for engineers to develop a strong background in computing, little opportunity is given for engineering students to develop these skills. Learning to program is widely seen as a difficult task, requiring students to develop not only an understanding of specific concepts, but also a way of thinking. In addition, students are forced to learn a new tool, in the form of the programming environment employed, along with these concepts and thought processes. Because of this, many students will not develop a sufficient proficiency in programming, even after progressing through the traditional introductory programming sequence. This is a significant problem, especially in the engineering disciplines, where very few students receive more than one or two semesters' worth of instruction in an already crowded engineering curriculum. To address these issues, new pedagogical techniques must be investigated in an effort to enhance the ability of engineering students to develop strong computing skills. However, these efforts are hindered by the lack of published assessment instruments available for probing an individual's understanding of programming concepts across programming languages. Traditionally, programming knowledge has been assessed by producing written code in a specific language. This can be an effective method, but does not lend itself well to comparing the pedagogical impact of different programming environments, languages or paradigms. This dissertation presents a phenomenographic research study
Shafiee Nahrkhalaji, Saeedeh; Lotfi, Ahmad Reza; Koosha, Mansour
The present study aims to reveal some facts concerning first language (L1) and second language (L2) spoken-word processing in unbalanced proficient bilinguals using behavioral measures. The intention here is to examine the effects of auditory repetition word priming and semantic priming in the first and second languages of…
Li, Le; Abutalebi, Jubin; Zou, Lijuan; Yan, Xin; Liu, Lanfang; Feng, Xiaoxia; Wang, Ruiming; Guo, Taomei; Ding, Guosheng
Previous neuroimaging studies have revealed that bilingualism induces both structural and functional neuroplasticity in the dorsal anterior cingulate cortex (dACC) and the left caudate nucleus (LCN), both of which are associated with cognitive control. Since these "control" regions should work together with other language regions during language processing, we hypothesized that bilingualism may also alter the functional interaction between the dACC/LCN and language regions. Here we tested this hypothesis by exploring the functional connectivity (FC) in bimodal bilinguals and monolinguals using functional MRI when they either performed a picture naming task with spoken language or were in resting state. We found that for bimodal bilinguals who use spoken and sign languages, the FC of the dACC with regions involved in spoken language (e.g. the left superior temporal gyrus) was stronger in performing the task, but weaker in the resting state as compared to monolinguals. For the LCN, its intrinsic FC with sign language regions including the left inferior temporo-occipital part and right inferior and superior parietal lobules was increased in the bilinguals. These results demonstrate that bilingual experience may alter the brain functional interaction between "control" regions and "language" regions. For different control regions, the FC alters in different ways. The findings also deepen our understanding of the functional roles of the dACC and LCN in language processing. Copyright © 2015 Elsevier Ltd. All rights reserved.
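Functional connectivity (FC) between two regions, as examined above, is commonly computed as the Pearson correlation of their time series. A minimal sketch follows; the signals are synthetic, and the region labels are illustrative stand-ins, not the study's actual fMRI data.

```python
# Sketch of functional connectivity as Pearson correlation between two ROI
# time series. Synthetic numbers; the ROI labels are illustrative only.
from statistics import mean

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    mx, my = mean(x), mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

roi_a = [0.1, 0.5, 0.3, 0.9, 0.7]   # e.g. a dACC time series (synthetic)
roi_b = [0.2, 0.6, 0.4, 1.0, 0.8]   # e.g. a left STG time series (shifted copy)
fc = pearson(roi_a, roi_b)
```

Task-state versus resting-state FC, as contrasted in the study, differs only in which time windows the correlation is computed over.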
Whalen, D. H.; Giulivi, Sara; Nam, Hosung; Levitt, Andrea G.; Hallé, Pierre; Goldstein, Louis M.
Certain consonant/vowel (CV) combinations are more frequent than would be expected from the individual C and V frequencies alone, both in babbling and, to a lesser extent, in adult language, based on dictionary counts: Labial consonants co-occur with central vowels more often than chance would dictate; coronals co-occur with front vowels, and velars with back vowels (Davis & MacNeilage, 1994). Plausible biomechanical explanations have been proposed, but it is also possible that infants are mirroring the frequency of the CVs that they hear. As noted, previous assessments of adult language were based on dictionaries; these “type” counts are incommensurate with the babbling measures, which are necessarily “token” counts. We analyzed the tokens in two spoken corpora for English, two for French and one for Mandarin. We found that the adult spoken CV preferences correlated with the type counts for Mandarin and French, not for English. Correlations between the adult spoken corpora and the babbling results had all three possible outcomes: significantly positive (French), uncorrelated (Mandarin), and significantly negative (English). There were no correlations of the dictionary data with the babbling results when we consider all nine combinations of consonants and vowels. The results indicate that spoken frequencies of CV combinations can differ from dictionary (type) counts and that the CV preferences apparent in babbling are biomechanically driven and can ignore the frequencies of CVs in the ambient spoken language. PMID:23420980
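The over-representation test described above (whether a CV combination occurs more often than the independent C and V frequencies would predict) can be sketched as an observed/expected ratio over token counts. The toy syllable tokens below are invented for illustration, not drawn from the babbling or corpus data.

```python
# Sketch of the CV co-occurrence test: observed token count of a CV syllable
# divided by the count expected under C/V independence. Toy tokens only.
from collections import Counter

tokens = ["ba", "ba", "ba", "bi", "gu", "gu", "di", "di", "di", "da"]

cv_counts = Counter(tokens)
c_counts = Counter(t[0] for t in tokens)
v_counts = Counter(t[1] for t in tokens)
n = len(tokens)

def obs_exp_ratio(c, v):
    """Ratio > 1 means the CV pair is over-represented relative to chance."""
    expected = c_counts[c] * v_counts[v] / n  # independence assumption
    return cv_counts[c + v] / expected

labial_central = obs_exp_ratio("b", "a")
```

Running the same ratio over dictionary entries (types) instead of corpus tokens is exactly the contrast the study draws between type and token counts.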
Hitendra Pillay; Hossein Bozorgian
Listening, as used in language teaching, refers to a complex process that allows us to understand spoken language. The current study, conducted in Iran with an experimental design, investigated the effect of teaching listening strategies, delivered in L1 (Persian), on listening comprehension in L2. Five listening strategies (guessing, making inferences, identifying topics, repetition, and note-taking) were taught over 14 weeks during a semester. Sixty lower intermediate female p...
Pelger, Susanne; Sigrell, Anders
Background: Feedback is one of the most significant factors in students' development of writing skills. For feedback to be successful, however, students and teachers need a common language, a meta-language, for discussing texts. This is particularly true in science education, where such a meta-language might help improve writing training and feedback-giving. Purpose: The aim of this study was to explore students' perception of teachers' feedback given on their texts in two genres, and to suggest how writing training and feedback-giving could become more efficient. Sample: The study included 44 degree project students in biology and molecular biology, and 21 supervising teachers at a Swedish university. Design and methods: The study concerned students' writing about their degree projects in two genres: scientific writing and popular science writing. The data consisted of documented teacher feedback on the students' popular science texts. It also included students' and teachers' answers to questionnaires about writing and feedback. All data were collected during the spring of 2012. Teachers' feedback, both as actually given and as recalled by students and teachers, respectively, was analysed and compared using the so-called canons of rhetoric. Results: While the teachers recalled the given feedback as mainly positive, most students recalled only negative feedback. According to the teachers, suggested improvements concerned firstly the content, and secondly the structure of the text. In contrast, the students mentioned language style first, followed by content. Conclusions: The disagreement between students and teachers regarding how and what feedback was given on the students' texts confirms the need for improved strategies for writing training and feedback-giving in science education. We suggest that a rhetorical meta-language might play a crucial role in overcoming the difficulties observed in this study. We also discuss how training of writing skills may contribute to
Laasonen, Marja; Smolander, Sini; Lahti-Nuuttila, Pekka; Leminen, Miika; Lajunen, Hanna-Reetta; Heinonen, Kati; Pesonen, Anu-Katriina; Bailey, Todd M; Pothos, Emmanuel M; Kujala, Teija; Leppänen, Paavo H T; Bartlett, Christopher W; Geneid, Ahmed; Lauronen, Leena; Service, Elisabet; Kunnari, Sari; Arkkila, Eva
Developmental language disorder (DLD, also called specific language impairment, SLI) is a common developmental disorder comprising the largest disability group in pre-school-aged children. Approximately 7% of the population is expected to have developmental language difficulties. However, the specific etiological factors leading to DLD are not yet known and even the typical linguistic features appear to vary by language. We present here a project that investigates DLD at multiple levels of analysis and aims to make the reliable prediction and early identification of the difficulties possible. Following the multiple deficit model of developmental disorders, we investigate the DLD phenomenon at the etiological, neural, cognitive, behavioral, and psychosocial levels, in a longitudinal study of preschool children. In January 2013, we launched the Helsinki Longitudinal SLI study (HelSLI) at the Helsinki University Hospital ( http://tiny.cc/HelSLI ). We will study 227 children aged 3-6 years with suspected DLD and their 160 typically developing peers. Five subprojects will determine how the child's psychological characteristics and environment correlate with DLD and how the child's well-being relates to DLD, the characteristics of DLD in monolingual versus bilingual children, nonlinguistic cognitive correlates of DLD, electrophysiological underpinnings of DLD, and the role of genetic risk factors. Methods include saliva samples, EEG, computerized cognitive tasks, neuropsychological and speech and language assessments, video-observations, and questionnaires. The project aims to increase our understanding of the multiple interactive risk and protective factors that affect the developing heterogeneous cognitive and behavioral profile of DLD, including factors affecting literacy development. This accumulated knowledge will form a heuristic basis for the development of new interventions targeting linguistic and non-linguistic aspects of DLD.
Pitts, Casey E; Onishi, Kristine H; Vouloumanos, Athena
Adults recognize that people can understand more than one language. However, it is unclear whether infants assume other people understand one or multiple languages. We examined whether monolingual and bilingual 20-month-olds expect an unfamiliar person to understand one or more than one language. Two speakers told a listener the location of a hidden object using either the same or two different languages. When different languages were spoken, monolinguals looked longer when the listener searched correctly, bilinguals did not; when the same language was spoken, both groups looked longer for incorrect searches. Infants rely on their prior language experience when evaluating the language abilities of a novel individual. Monolingual infants assume others can understand only one language, although not necessarily the infants' own; bilinguals do not. Infants' assumptions about which community of conventions people belong to may allow them to recognize effective communicative partners and thus opportunities to acquire language, knowledge, and culture.
Schiff, Rachel; Saiegh-Haddad, Elinor
This study addressed the development of and the relationship between foundational metalinguistic skills and word reading skills in Arabic. It compared Arabic-speaking children’s phonological awareness (PA), morphological awareness, and voweled and unvoweled word reading skills in spoken and standard language varieties separately in children across five grade levels from childhood to adolescence. Second, it investigated whether skills developed in the spoken variety of Arabic predict reading in the standard variety. Results indicate that although individual differences between students in PA are eliminated toward the end of elementary school in both spoken and standard language varieties, gaps in morphological awareness and in reading skills persisted through junior and high school years. The results also show that the gap in reading accuracy and fluency between Spoken Arabic (SpA) and Standard Arabic (StA) was evident in both voweled and unvoweled words. Finally, regression analyses showed that morphological awareness in SpA contributed to reading fluency in StA, i.e., children’s early morphological awareness in SpA explained variance in children’s gains in reading fluency in StA. These findings have important theoretical and practical contributions for Arabic reading theory in general and they extend the previous work regarding the cross-linguistic relevance of foundational metalinguistic skills in the first acquired language to reading in a second language, as in societal bilingualism contexts, or a second language variety, as in diglossic contexts.
The purpose of this qualitative study was to discover the influence of instructional games on middle school learners' use of scientific language, concept understanding, and attitude toward learning science. The rationale for this study stemmed from the lack of research concerning the value of play as an instructional strategy for older learners. Specifically, the study focused on the ways in which six average-ability 7th-grade students demonstrated scientific language and concept use during gameplay. The data were collected for this 6-week study in a southern New Jersey suburban middle school and included audio recordings of the 5 games observed in class, written documents (e.g., student-created game questions, self-evaluation forms, pre- and post-assessments, and the final quiz), interviews, and researcher field notes. Data were coded and interpreted borrowing from the framework for scientific literacy developed by Bybee (1997). Based on the findings, the framework was modified to reflect the level of scientific understanding demonstrated by the participants, categorized as: Unacquainted, Nominal, Functional, and Conceptual. Major findings suggested that the participants predominantly achieved the Functional level of scientific literacy (i.e., the ability to adequately and appropriately use scientific language in both written and oral discourse) during games. Further, it was discovered that the participants achieved the Conceptual level of scientific literacy during gameplay. Through games participants were afforded the opportunity to use common, everyday language to explore concepts, an opportunity promoted through peer collaboration. In games the participants used common language to build understandings that exceeded Nominal or token use of the technical vocabulary and concepts. Additionally, the participants reported through interviews and self-evaluation forms that their attitude (patterns included: Motivation, Interest, Fun, Relief from Boredom, and an Alternate Learning
Of the approximately 7,000 languages spoken today, some 2,500 are generally considered endangered. Here we argue that this consensus figure vastly underestimates the danger of digital language death, in that less than 5% of all languages can still ascend to the digital realm. We present evidence of a massive die-off caused by the digital divide.
Okano, Kana; Grainger, Jonathan; Holcomb, Phillip J
In a masked cross-modal priming experiment with ERP recordings, spoken Japanese words were primed with words written in one of the two syllabary scripts of Japanese. An early priming effect, peaking at around 200ms after onset of the spoken word target, was seen in left lateral electrode sites for Katakana primes, and later effects were seen for both Hiragana and Katakana primes on the N400 ERP component. The early effect is thought to reflect the efficiency with which words in Katakana script make contact with sublexical phonological representations involved in spoken language comprehension, due to the particular way this script is used by Japanese readers. This demonstrates fast-acting influences of visual primes on the processing of auditory target words, and suggests that briefly presented visual primes can influence sublexical processing of auditory target words. The later N400 priming effects, on the other hand, most likely reflect cross-modal influences on activity at the level of whole-word phonology and semantics.
Ravid, Dorit; Zilberbuch, Shoshana
This study examined the distribution of two Hebrew nominal structures, N-N compounds and denominal adjectives, in spoken and written texts of two genres produced by 90 native-speaking participants in three age groups: eleven/twelve-year-olds (6th graders), sixteen/seventeen-year-olds (11th graders), and adults. The two constructions are later linguistic acquisitions, part of the profound lexical and syntactic changes that occur in language development during the school years. They are investigated in the context of learning how modality (speech vs. writing) and genre (biographical vs. expository texts) affect the production of continuous discourse. Participants were asked to speak and write about two topics, one biographical, describing the life of a public figure or of a friend; and another, expository, discussing one of ten topics such as the cinema, cats, or higher academic studies. N-N compounding was found to be the main device of complex subcategorization in Hebrew discourse, unrelated to genre. Denominal adjectives are a secondary subcategorizing device emerging only during the late teen years, a linguistic resource untapped until very late, more restricted to specific text types than N-N compounding, and characteristic of expository writing. Written texts were found to be denser than spoken texts lexically and syntactically as measured by number of novel N-N compounds and denominal adjectives per clause, and in older age groups this difference was found to be more pronounced. The paper contributes to our understanding of how the syntax/lexicon interface changes with age, modality and genre in the context of later language acquisition.
Casey, Laura Baylot; Bicard, David F.
Language development in typically developing children has a very predictable pattern beginning with crying, cooing, babbling, and gestures along with the recognition of spoken words, comprehension of spoken words, and then one word utterances. This predictable pattern breaks down for children with language disorders. This article will discuss…
Working memory is important for online language processing during conversation. We use it to maintain relevant information, to inhibit or ignore irrelevant information, and to attend to conversation selectively. Working memory helps us to keep track of and actively participate in conversation, including taking turns and following the gist. This paper examines the Ease of Language Understanding model (i.e., the ELU model; Rönnberg, 2003; Rönnberg et al., 2008) in light of new behavioral and neural findings concerning the role of working memory capacity (WMC) in uni-modal and bimodal language processing. The new ELU model is a meaning prediction system that depends on phonological and semantic interactions in rapid implicit and slower explicit processing mechanisms that both depend on WMC, albeit in different ways. A revised ELU model is proposed based on findings that address the relationship between WMC and (a) early attention processes in listening to speech, (b) signal processing in hearing aids and its effects on short-term memory, (c) inhibition of speech maskers and its effect on episodic long-term memory, (d) the effects of hearing impairment on episodic and semantic long-term memory, and finally, (e) listening effort. New predictions and clinical implications are outlined. Comparisons with other WMC and speech perception models are made.
The goal of the paper is to show that language can support the social and intercultural competence of both students and teachers: one of the ways to do so is teaching cultural taboos and taboo language for intercultural awareness and understanding. The current state of the art in the field points to an increasing interest in the teaching of taboos. The material we analysed consisted of 238 offensive, vulgar and obscene English words that both students and teachers should know to attain social and intercultural competence. The method used is the descriptive one. The degree of novelty is rather high in our cultural area. Results show that there are 134 offensive (slang) words and expressions (referring to the country of origin or to an ethnic group, to sex and sex-related issues such as sexual orientation, to race, etc.), 75 vulgar words and expressions (referring to sex and sex-related issues, to body parts, to people, etc.), and 29 obscene words and expressions (referring to body secretions, to sex and sex-related issues, to people, etc.). There seem to be no research limitations given the lexicographic sources that we used. The implications of teaching cultural taboos and taboo language at tertiary level concern both the students and teachers and the organisation they belong to. The paper is original and relevant given the process of globalisation.
Olson, Andrea M; Swabey, Laurie
Despite federal laws that mandate equal access and communication in all healthcare settings for deaf people, consistent provision of quality interpreting in healthcare settings is still not a reality, as recognized by deaf people and American Sign Language (ASL)-English interpreters. The purpose of this study was to better understand the work of ASL interpreters employed in healthcare settings, which can then inform the training and credentialing of interpreters, with the ultimate aim of improving the quality of healthcare and communication access for deaf people. Based on a job analysis, researchers designed an online survey with 167 task statements representing 44 categories. American Sign Language interpreters (N = 339) rated the importance of, and frequency with which they performed, each of the 167 tasks. Categories with the highest average importance ratings included language and interpreting, situation assessment, ethical and professional decision making, manage the discourse, monitor, manage and/or coordinate appointments. Categories with the highest average frequency ratings included the following: dress appropriately, adapt to a variety of physical settings and locations, adapt to working with variety of providers in variety of roles, deal with uncertain and unpredictable work situations, and demonstrate cultural adaptability. To achieve health equity for the deaf community, the training and credentialing of interpreters needs to be systematically addressed.
Bedore, Lisa M; Peña, Elizabeth D; Anaya, Jissel B; Nieto, Ricardo; Lugo-Neris, Mirza J; Baron, Alisa
This study examines English performance on a set of 11 grammatical forms in Spanish-English bilingual, school-age children in order to understand how item difficulty of grammatical constructions helps correctly classify language impairment (LI) from expected variability in second language acquisition when taking into account linguistic experience and exposure. Three hundred seventy-eight children's scores on the Bilingual English-Spanish Assessment-Middle Extension (Peña, Bedore, Gutiérrez-Clellen, Iglesias, & Goldstein, 2008) morphosyntax cloze task were analyzed by bilingual experience group (high Spanish experience, balanced English-Spanish experience, high English experience), ability (typically developing [TD] vs. LI), and grammatical form. Classification accuracy was calculated for the forms that best differentiated TD and LI groups. Children with LI scored lower than TD children across all bilingual experience groups. There were differences by grammatical form across bilingual experience and ability groups. Children from high English experience and balanced English-Spanish experience groups could be accurately classified on the basis of all the English grammatical forms tested except for prepositions. For bilinguals with high Spanish experience, it was possible to rule out LI on the basis of grammatical production but not rule in LI. It is possible to accurately identify LI in English language learners once they use English 40% of the time or more. However, for children with high Spanish experience, more information about development and patterns of impairment is needed to positively identify LI.
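Classification accuracy of the kind reported in the record above is commonly summarized as sensitivity and specificity at a score cutoff. The sketch below is a hypothetical illustration only: the cloze-task proportions and the 0.70 cutoff are invented, not the study's actual data or procedure.

```python
# Hypothetical illustration: sensitivity/specificity of a score cutoff for
# separating typically developing (TD) children from those with language
# impairment (LI). All scores and the cutoff are invented.

def classification_accuracy(scores_td, scores_li, cutoff):
    """Sensitivity: proportion of LI children correctly flagged (score below cutoff).
    Specificity: proportion of TD children correctly passed (score at/above cutoff)."""
    sensitivity = sum(s < cutoff for s in scores_li) / len(scores_li)
    specificity = sum(s >= cutoff for s in scores_td) / len(scores_td)
    return sensitivity, specificity

# Invented proportions correct on a morphosyntax cloze task
td_scores = [0.90, 0.85, 0.80, 0.75, 0.95]
li_scores = [0.40, 0.55, 0.60, 0.30, 0.65]

sens, spec = classification_accuracy(td_scores, li_scores, cutoff=0.70)
print(sens, spec)  # 1.0 1.0 for these invented scores
```

A grammatical form that "best differentiates" the groups is one where some cutoff yields both values close to 1.0; ruling out LI but not ruling it in, as reported for the high-Spanish-experience group, corresponds to high specificity with low sensitivity.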
Arndt, Karen Barako; Schuele, C. Melanie
Complex syntax production emerges shortly after the emergence of two-word combinations in oral language and continues to develop through the school-age years. This article defines a framework for the analysis of complex syntax in the spontaneous language of preschool- and early school-age children. The purpose of this article is to provide…
Pragmatics is the study of the relation of signs to interpreters. For English as a foreign language (EFL) learners, knowledge and comprehensible input of pragmatics is much needed. This paper is based on a research project. The writer conducted a survey by giving respondents a questionnaire. The respondents were students from UAD, selected randomly. Besides the open questionnaire, the writer also obtained data from in-depth interviews with some EFL learners and with a native speaker who teaches English, and conducted a literature review of several books. The results of the research provide evidence that EFL learners' difficulties in understanding English pragmatics occur in (1) greeting, (2) apologizing, (3) complimenting, and (4) thanking. The factors that promote EFL learners' difficulties in understanding are (1) the different cultures and values of native speakers and learners, and (2) the habits they use in their daily life.
Rossing, Niels Nygaard; Skrubbeltrang, Lotte Stausgaard
This essay aims to describe how actions on the football field relate to different national teams' and countries' cultural understanding of football, and how these actions become spoken dialects within a language of football. Inspired by Edgar Schein's framework of culture, the Brazilian and Italian national team football cultures were examined. The analysis was based on both document and video analysis. The documents were mostly research studies and popular books on the national football cultures, while the video analysis included all matches involving Italy and Brazil from the World Cups in 2010 and 2014. The cultural analysis showed some coherence between the national football cultures and the national teams, which suggested a national dialect within the language of the game. Each national dialect seemed to be based on different basic assumptions and to some extent specific symbolic...
Manjet Kaur Mehar Singh
Malaysian intercultural society is typified by three major ethnic groups, mainly Malays, Chinese and Indians. Although the education system is the best tool for these three major ethnic groups to work together, contemporary research reveals that there is still a lack of intercultural embedding in the education context, and national schools are seen as breeding grounds of racial polarisation. In the Malaysian context, there is a gap in research focusing on the design of a proper intercultural reading framework for national integration, and such initiatives are viable through schools. The main objective of this conceptual paper is to introduce the English Language Intercultural Reading Programme (ELIRP) in secondary schools to promote intercultural understanding among secondary school students. The proposed framework will facilitate the acquisition of intercultural inputs without being constrained by ideological, political, or psychological demands. This article will focus on elucidating how ELIRP could effect cognitive (knowledge) and behavioural transformations of intercultural perceptions harboured by selected Form 4 students of 20 national schools in Malaysia. Keywords: behavior, knowledge, intercultural reading framework, intercultural understanding, English Language Intercultural Reading Programme, secondary school students
Nguyen, Dong-Phuong; Dogruoz, A. Seza
Multilingual speakers switch between languages in online and spoken communication. Analyses of large scale multilingual data require automatic language identification at the word level. For our experiments with multilingual online discussions, we first tag the language of individual words using
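Word-level language tagging of the kind this record describes can be approximated with a simple character n-gram model. The sketch below is a hypothetical illustration, not the authors' method: the toy training lists, the trigram size, and the add-one smoothing are all assumptions. Each word is assigned the language whose model gives it the highest smoothed log-likelihood.

```python
from collections import Counter
import math

def char_ngrams(word, n=3):
    """Character n-grams of a word, with boundary markers."""
    w = "^" + word.lower() + "$"
    return [w[i:i + n] for i in range(len(w) - n + 1)]

def train(samples):
    """samples: {language: list of words} -> per-language n-gram counts."""
    return {lang: Counter(g for word in words for g in char_ngrams(word))
            for lang, words in samples.items()}

def tag_word(word, models):
    """Assign the language whose model gives the word the highest
    add-one-smoothed log-likelihood."""
    def score(counts):
        total = sum(counts.values())
        vocab = len(counts) + 1
        return sum(math.log((counts[g] + 1) / (total + vocab))
                   for g in char_ngrams(word))
    return max(models, key=lambda lang: score(models[lang]))

# Toy training lists (invented; a real system would train on large corpora)
samples = {
    "en": ["the", "speaker", "language", "switch", "between"],
    "nl": ["het", "spreker", "taal", "wisselen", "tussen"],
}
models = train(samples)
print(tag_word("speakers", models))
```

Real code-switching taggers additionally condition on the labels of neighbouring words, since switch points are not independent; this sketch scores each word in isolation.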
Wang, Zhen; Zechner, Klaus; Sun, Yu
As automated scoring systems for spoken responses are increasingly used in language assessments, testing organizations need to analyze their performance, as compared to human raters, across several dimensions, for example, on individual items or based on subgroups of test takers. In addition, there is a need in testing organizations to establish…
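Comparing an automated scorer against human raters across subgroups, as described above, typically reduces to computing an agreement statistic per subgroup. The sketch below uses a plain Pearson correlation and invented record tuples; both the metric choice and the data are assumptions for illustration, not any testing organization's actual procedure.

```python
from collections import defaultdict
import math

def pearson(xs, ys):
    """Pearson correlation of two equal-length score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def agreement_by_subgroup(records):
    """records: iterable of (subgroup, human_score, machine_score) tuples.
    Returns {subgroup: machine-human Pearson r}."""
    groups = defaultdict(lambda: ([], []))
    for subgroup, human, machine in records:
        groups[subgroup][0].append(human)
        groups[subgroup][1].append(machine)
    return {g: pearson(h, m) for g, (h, m) in groups.items()}

# Invented scores on a 0-4 scale for two hypothetical test-taker subgroups
records = [
    ("group_A", 3, 3), ("group_A", 2, 2), ("group_A", 4, 3),
    ("group_B", 1, 3), ("group_B", 4, 2), ("group_B", 2, 2),
]
print(agreement_by_subgroup(records))
```

Operational analyses often prefer quadratic weighted kappa over raw correlation for ordinal rubric scores, but the subgroup bookkeeping is the same.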
Čermáková, Anna; Komrsková, Zuzana; Kopřivová, Marie; Poukarová, Petra
25.04.2017 (2017), pp. 393-414. ISSN 2509-9507. Keywords: causality, discourse marker, spoken language, Czech. https://link.springer.com/content/pdf/10.1007%2Fs41701-017-0014-y.pdf
Students may use the technical engineering terms without knowing what these words mean. This creates a language barrier in engineering that influences student learning. Previous research has been conducted to characterize the difference between colloquial and scientific language. Since this research had not yet been applied explicitly to…
Farrant, Brad M.; Maybery, Murray T.; Fletcher, Janet
The hypothesis that language plays a role in theory-of-mind (ToM) development is supported by a number of lines of evidence (e.g., H. Lohmann & M. Tomasello, 2003). The current study sought to further investigate the relations between maternal language input, memory for false sentential complements, cognitive flexibility, and the development of…
Stone, Adam; Petitto, Laura-Ann; Bosworth, Rain
The infant brain may be predisposed to identify perceptually salient cues that are common to both signed and spoken languages. Recent theory based on spoken languages has advanced sonority as one of these potential language acquisition cues. Using a preferential looking paradigm with an infrared eye tracker, we explored visual attention of hearing…
Hofmann, Kristin; Chilla, Solveig
Adopting a bimodal bilingual language acquisition model, this qualitative case study is the first in Germany to investigate the spoken and sign language development of hearing children of deaf adults (codas). The spoken language competence of six codas within the age range of 3;10 to 6;4 is assessed by a series of standardised tests (SETK 3-5,…
Aalberse, S.; Moro, F.; Braunmüller, K.; Höder, S.; Kühl, K.
This article discusses Malay and Chinese heritage languages as spoken in the Netherlands. Heritage speakers are dominant in another language and use their heritage language less. Moreover, they have qualitatively and quantitatively different input from monolinguals. Heritage languages are often
Wong, Miranda Kit-Yi; So, Wing Chee
This study developed a spoken narrative (i.e., storytelling) assessment as a supplementary measure of children's creativity. Both spoken and gestural contents of children's spoken narratives were coded to assess their verbal and nonverbal creativity. The psychometric properties of the coding system for the spoken narrative assessment were…
Sanden, Guro Refsum
Purpose: The purpose of this paper is to analyse the consequences of globalisation in the area of corporate communication, and investigate how language may be managed as a strategic resource. Design/methodology/approach: A review of previous studies on the effects of globalisation on corporate communication and the implications of language management initiatives in international business. Findings: Efficient language management can turn language into a strategic resource. Language needs analyses, i.e. linguistic auditing/language check-ups, can be used to determine the language situation of a company. Language policies and/or strategies can be used to regulate a company's internal modes of communication. Language management tools can be deployed to address existing and expected language needs. Continuous feedback from the front line ensures strategic learning and reduces the risk of suboptimal...
Ornaghi, Veronica; Pepe, Alessandro; Grazzani, Ilaria
Emotion comprehension (EC) is known to be a key correlate and predictor of prosociality from early childhood. In the present study, we examined this relationship within the broad theoretical construct of social understanding which includes a number of socio-emotional skills, as well as cognitive and linguistic abilities. Theory of mind, especially false-belief understanding, has been found to be positively correlated with both EC and prosocial orientation. Similarly, language ability is known to play a key role in children's socio-emotional development. The combined contribution of false-belief understanding and language to explaining the relationship between EC and prosociality has yet to be investigated. Thus, in the current study, we conducted an in-depth exploration of how preschoolers' false-belief understanding and language ability each contribute to modeling the relationship between children's comprehension of emotion and their disposition to act prosocially toward others, after controlling for age and gender. Participants were 101 4- to 6-year-old children (54% boys), who were administered measures of language ability, false-belief understanding, EC and prosocial orientation. Multiple mediation analysis of the data suggested that false-belief understanding and language ability jointly and fully mediated the effect of preschoolers' EC on their prosocial orientation. Analysis of covariates revealed that gender exerted no statistically significant effect, while age had a trivial positive effect. Theoretical and practical implications of the findings are discussed.
Van Heerden, C
Spoken dialogue systems (SDSs) have great potential for information access in the developing world. However, the realisation of that potential requires the solution of several challenging problems, including the development of sufficiently accurate...
Morales, Luis; Paolieri, Daniela; Dussias, Paola E.; Valdés Kroff, Jorge R.; Gerfen, Chip; Bajo, María Teresa
We investigate the ‘gender-congruency’ effect during a spoken-word recognition task using the visual world paradigm. Eye movements of Italian–Spanish bilinguals and Spanish monolinguals were monitored while they viewed a pair of objects on a computer screen. Participants listened to instructions in Spanish (encuentra la bufanda / ‘find the scarf’) and clicked on the object named in the instruction. Grammatical gender of the objects’ name was manipulated so that pairs of objects had the same (congruent) or different (incongruent) gender in Italian, but gender in Spanish was always congruent. Results showed that bilinguals, but not monolinguals, looked at target objects less when they were incongruent in gender, suggesting a between-language gender competition effect. In addition, bilinguals looked at target objects more when the definite article in the spoken instructions provided a valid cue to anticipate its selection (different-gender condition). The temporal dynamics of gender processing and cross-language activation in bilinguals are discussed. PMID:28018132
Sherwood, Bruce Arne
Explains that reading English among scientists is almost universal; however, there are enormous problems with spoken English. Advocates the use of Esperanto as a viable alternative and as a language requirement for graduate work. (GA)
This study examined the effects of preceding contextual stimuli, either auditory or visual, on the identification of spoken target words. Fifty-one participants (29% males, 71% females; mean age = 24.5 years, SD = 8.5) were divided into three groups: no context, auditory context, and visual context. All target stimuli were spoken words masked with white noise. The relationships between the context and target stimuli were as follows: identical word, similar word, and unrelated word. Participants presented with context experienced a sequence of six context stimuli in the form of either spoken words or photographs. Auditory and visual context conditions produced similar results, but the auditory context aided word identification more than the visual context in the similar word relationship. We discuss these results in the light of top-down processing, motor theory, and the phonological system of language.
Machine translation systems often incorporate modeling assumptions motivated by properties of the language pairs they initially target. When such systems are applied to language families with considerably different properties, translation quality can deteriorate. Phrase-based machine translation
With the accelerated globalization, domestic and international communications become more frequent than ever before. As the major media of international communication, languages contact with each other more actively by day. And in the active contact any language would gradually develop and change. Pidgin language is a unique linguistic phenomenon…
Crane, Paul K; Gruhl, Jonathan C; Erosheva, Elena A; Gibbons, Laura E; McCurry, Susan M; Rhoads, Kristoffer; Nguyen, Viet; Arani, Keerthi; Masaki, Kamal; White, Lon
Spoken bilingualism may be associated with cognitive reserve. Mastering a complicated written language may be associated with additional reserve. We sought to determine if midlife use of spoken and written Japanese was associated with lower rates of late life cognitive decline. Participants were second-generation Japanese-American men from the Hawaiian island of Oahu, born 1900-1919, free of dementia in 1991, and categorized based on midlife self-reported use of spoken and written Japanese (total n included in primary analysis = 2,520). Cognitive functioning was measured with the Cognitive Abilities Screening Instrument scored using item response theory. We used mixed effects models, controlling for age, income, education, smoking status, apolipoprotein E e4 alleles, and number of study visits. Rates of cognitive decline were not related to use of spoken or written Japanese. This finding was consistent across numerous sensitivity analyses. We did not find evidence to support the hypothesis that multilingualism is associated with cognitive reserve.
The contribution of cooperative learning (CL) in promoting second and foreign language learning has been widely acknowledged. Little scholarly attention, however, has been given to revealing how this teaching method works and promotes learners' improved communicative competence. This qualitative case study explores the important role that individual accountability in CL plays in giving English as a Foreign Language (EFL) learners in Indonesia the opportunity to use the target language of English. While individual accountability is a principle of and one of the activities in CL, it is currently understudied, and thus little is known about how it enhances EFL learning. This study aims to address this gap by conducting a constructivist grounded theory analysis of participant observation, in-depth interview, and document analysis data drawn from two secondary school EFL teachers, 77 students in the observed classrooms, and four focal students. The analysis shows that through individual accountability in CL, the EFL learners had opportunities to use the target language, which may have contributed to the attainment of communicative competence, the goal of the EFL instruction. More specifically, compared to the use of conventional group work in the observed classrooms, through the activities of individual accountability in CL, i.e., performances and peer interaction, the EFL learners had more opportunities to use spoken English. The present study recommends that teachers, especially those new to CL, follow the preset procedure of selected CL instructional strategies or structures in order to recognize the activities within individual accountability in CL and understand how these activities benefit students.
Interference of the spoken language on children's writing: cancellation processes of the dental occlusive /d/ and the final vibrant /r/
Socorro Cláudia Tavares de Sousa
This study investigates the influence of spoken language on children's writing with respect to the phenomena of cancellation of the dental /d/ and of the final vibrant /r/. We designed and administered a research instrument to primary-school children in Fortaleza, and used the SPSS software to analyze the data. The results showed that male sex and words of three or more syllables are factors that partially influence the realization of the dependent variable /no/, and that verbs and level of schooling are conditioning elements for the cancellation of the final vibrant /r/.
This article explores the implications of Hegel's theories of language on second language (L2) teaching. Three among the various concepts in Hegel's theories of language are selected. They are the crucial role of intersubjectivity; the primacy of the spoken over the written form; and the importance of the training of form or grammar. Applying…
Schuit, J.; Baker, A.; Pfau, R.
Sign language typology is a fairly new research field and typological classifications have yet to be established. For spoken languages, these classifications are generally based on typological parameters; it would thus be desirable to establish these for sign languages. In this paper, different
Elio Jesús Cruz Rondón
Learning a foreign language may be a challenge for most people due to differences in form and structure between one's mother tongue and the new language. However, there are some tools that facilitate the teaching and learning of a foreign language, for instance, new applications for digital devices, video blogs, educational platforms, and teaching materials. Therefore, this case study aims at understanding the role of teaching materials among beginner-level students learning English as a foreign language. After conducting five non-participant classroom observations and nine semi-structured interviews, we found that the way the teacher implemented a pedagogical intervention, by integrating the four language skills, promoting interactive learning through the use of online resources, and using the course book, led to a global English teaching and learning process.
Brookshire, Geoffrey; Lu, Jenny; Nusbaum, Howard C; Goldin-Meadow, Susan; Casasanto, Daniel
Despite immense variability across languages, people can learn to understand any human language, spoken or signed. What neural mechanisms allow people to comprehend language across sensory modalities? When people listen to speech, electrophysiological oscillations in auditory cortex entrain to slow (<8 Hz) fluctuations in the acoustic envelope. Entrainment to the speech envelope may reflect mechanisms specialized for auditory perception. Alternatively, flexible entrainment may be a general-purpose cortical mechanism that optimizes sensitivity to rhythmic information regardless of modality. Here, we test these proposals by examining cortical coherence to visual information in sign language. First, we develop a metric to quantify visual change over time. We find quasiperiodic fluctuations in sign language, characterized by lower frequencies than fluctuations in speech. Next, we test for entrainment of neural oscillations to visual change in sign language, using electroencephalography (EEG) in fluent speakers of American Sign Language (ASL) as they watch videos in ASL. We find significant cortical entrainment to visual oscillations in sign language; this entrainment is strongest over occipital and parietal cortex, in contrast to speech, where coherence is strongest over auditory cortex. Nonsigners also show coherence to sign language, but entrainment at frontal sites is reduced relative to fluent signers. These results demonstrate that flexible cortical entrainment to language does not depend on neural processes that are specific to auditory speech perception. Low-frequency oscillatory entrainment may reflect a general cortical mechanism that maximizes sensitivity to informational peaks in time-varying signals.
Keywords: domain adaptation, unsupervised learning, deep neural networks, bottleneck features. Spoken language identification (LID) is ... the process of identifying the language in a spoken speech utterance. In recent years, great improvements in LID system performance have been seen ... be the case in practice. Lastly, we conduct an out-of-set experiment where VoA data from 9 other languages (Amharic, Creole, Croatian, English
Kalt, Susan E.
Spanish is one of the most widely spoken languages in the world. Quechua is the largest indigenous language family to constitute the first language (L1) of second language (L2) Spanish speakers. Despite sheer number of speakers and typologically interesting contrasts, Quechua-Spanish second language acquisition is a nearly untapped research area,…
González-Alvarez, Julio; Palomar-García, María-Angeles
Research has shown that syllables play a relevant role in lexical access in Spanish, a shallow language with a transparent syllabic structure. Syllable frequency has been shown to have an inhibitory effect on visual word recognition in Spanish. However, no study has examined the syllable frequency effect on spoken word recognition. The present study tested the effect of the frequency of the first syllable on recognition of spoken Spanish words. A sample of 45 young adults (33 women, 12 men; M = 20.4, SD = 2.8; college students) performed an auditory lexical decision on 128 Spanish disyllabic words and 128 disyllabic nonwords. Words were selected so that lexical and first syllable frequency were manipulated in a within-subject 2 × 2 design, and six additional independent variables were controlled: token positional frequency of the second syllable, number of phonemes, position of lexical stress, number of phonological neighbors, number of phonological neighbors that have higher frequencies than the word, and acoustical durations measured in milliseconds. Decision latencies and error rates were submitted to linear mixed models analysis. Results showed a typical facilitatory effect of the lexical frequency and, importantly, an inhibitory effect of the first syllable frequency on reaction times and error rates. © The Author(s) 2016.
After examining some situations in which United States and British forces carried out counterinsurgency operations, the author reveals that ground troops with foreign-language skills and cultural...
Brebner, Chris; McCormack, Paul; Liow, Susan Rickard
The phonological and morphosyntactic structures of English and Mandarin contrast maximally and an increasing number of bilinguals speak these two languages. Speech and language therapists need to understand bilingual development for children speaking these languages in order reliably to assess and provide intervention for this population. To examine the marking of verb tense in the English of two groups of bilingual pre-schoolers learning these languages in a multilingual setting where the main educational language is English. The main research question addressed was: are there differences in the rate and pattern of acquisition of verb-tense marking for English-language 1 children compared with Mandarin-language 1 children? Spoken language samples in English from 481 English-Mandarin bilingual children were elicited using a 10-item action picture test and analysed for each child's use of verb tense markers: present progressive '-ing', regular past tense '-ed', third-person singular '-s', and irregular past tense and irregular past-participle forms. For 4-6 year olds the use of inflectional markers by the different language dominance groups was compared statistically using non-parametric tests. This study provides further evidence that bilingual language development is not the same as monolingual language development. The results show that there are very different rates and patterns of verb-tense marking in English for English-language 1 and Mandarin-language 1 children. Furthermore, they show that bilingual language development in English in Singapore is not the same as monolingual language development in English, and that there are differences in development depending on language dominance. Valid and reliable assessment of bilingual children's language skills needs to consider the characteristics of all languages spoken, obtaining accurate information on language use over time and accurately establishing language dominance is essential in order to make a
Kenyan Sign Language (KSL) is a visual-gestural language used by members of the deaf community in Kenya. Kiswahili, on the other hand, is a Bantu language that is used as the national language of Kenya. The two are worlds apart, one being a spoken language and the other a signed language, and thus their "… basic ...
Benjamin O. Ladd
Introduction: Change talk (CT) and sustain talk (ST) are thought to reflect underlying motivation and to be important mechanisms of behavior change (MOBCs). However, greater specificity and experimental rigor are needed to establish CT and ST as MOBCs. Testing the effects of self-directed language under laboratory conditions is one promising avenue. The current study presents a replication and extension of research examining the feasibility of using simulation tasks to elicit self-directed language. Methods: First-year college students (N=92) responded to the Collegiate Simulated Intoxication Digital Elicitation, a validated task for assessing decision-making in college drinking. Verbal responses elicited via free-response and structured interview formats were coded based on established definitions of CT and ST, with minor modifications to reflect the non-treatment context. Associations between self-directed language and alcohol use at baseline and eight months were examined. Additionally, this study examined whether a contextually based measure of decision-making, behavioral willingness, mediated relationships between self-directed language and alcohol outcome. Results: Healthy talk and unhealthy talk were independently associated with baseline alcohol use across both elicitation formats. Only healthy talk during the free-response elicitation was associated with alcohol use at follow-up; both healthy talk and unhealthy talk during the interview elicitation were associated with 8-month alcohol use. Behavioral willingness significantly mediated the relationship between percent healthy talk and alcohol outcome. Conclusions: Findings support the utility of studying self-directed language under laboratory conditions and suggest that such methods may provide a fruitful strategy to further understand the role of self-directed language as a MOBC. Keywords: change talk, college students, alcohol, simulation task
Byers-Heinlein, Krista; Chen, Ke Heng; Xu, Fei
Languages function as independent and distinct conventional systems, and so each language uses different words to label the same objects. This study investigated whether 2-year-old children recognize that speakers of their native language and speakers of a foreign language do not share the same knowledge. Two groups of children unfamiliar with Mandarin were tested: monolingual English-learning children (n=24) and bilingual children learning English and another language (n=24). An English speaker taught children the novel label fep. On English mutual exclusivity trials, the speaker asked for the referent of a novel label (wug) in the presence of the fep and a novel object. Both monolingual and bilingual children disambiguated the reference of the novel word using a mutual exclusivity strategy, choosing the novel object rather than the fep. On similar trials with a Mandarin speaker, children were asked to find the referent of a novel Mandarin label kuò. Monolinguals again chose the novel object rather than the object with the English label fep, even though the Mandarin speaker had no access to conventional English words. Bilinguals did not respond systematically to the Mandarin speaker, suggesting that they had enhanced understanding of the Mandarin speaker's ignorance of English words. The results indicate that monolingual children initially expect words to be conventionally shared across all speakers-native and foreign. Early bilingual experience facilitates children's discovery of the nature of foreign language words. Copyright © 2013 Elsevier Inc. All rights reserved.
D'Souza, Dean; Filippi, Roberto
The ability to acquire language is a critical part of human development. Yet there is no consensus on how the skill emerges in early development. Does it constitute an innately-specified, language-processing module or is it acquired progressively? One of Annette Karmiloff-Smith's (1938-2016) key contributions to developmental science addresses…
Drawing on institutional theory, this study describes how cognitive, normative, and regulative mechanisms shape bilingual teachers' language policy implementation in both English-only and bilingual contexts. Aligned with prior educational language policy research, findings indicate the important role that teachers' beliefs play in the policy…
Sauerland, Uli; Grohmann, Kleanthes K.; Guasti, Maria Teresa; Andelkovic, Darinka; Argus, Reili; Armon-Lotem, Sharon; Arosio, Fabrizio; Avram, Larisa; Costa, João; Dabašinskiene, Ineta; de López, Kristine; Gatt, Daniela; Grech, Helen; Haman, Ewa; van Hout, Angeliek; Hrzica, Gordana; Kainhofer, Judith; Kamandulyte-Merfeldiene, Laura; Kunnari, Sari; Kovacevic, Melita; Kuvac Kraljevic, Jelena; Lipowska, Katarzyna; Mejias, Sandrine; Popovic, Maša; Ruzaite, Jurate; Savic, Maja; Sevcenco, Anca; Varlokosta, Spyridoula; Varnava, Marina; Yatsushiro, Kazuko
The comprehension of constituent questions is an important topic for language acquisition research and for applications in the diagnosis of language impairment. This article presents the results of a study investigating the comprehension of different types of questions by 5-year-old, typically developing children across 19 European countries, 18…
Wächter, Mirko; Ovchinnikova, Ekaterina; Wittenbeck, Valerij
We propose an approach for instructing a robot using natural language to solve complex tasks in a dynamic environment. In this study, we elaborate on a framework that allows a humanoid robot to understand natural language, derive symbolic representations of its sensorimotor experience, generate... The framework is implemented within the robot development environment ArmarX. We evaluate the framework on the humanoid robot ARMAR-III in the context of two experiments: a demonstration of the real execution of a complex task in the kitchen environment on ARMAR-III and an experiment with untrained users...
Brøndsted, Tom; Larsen, Henrik Legind; Larsen, Lars Bo
window focused over the part which most likely contains an answer to the query. The two systems are integrated into a full spoken query answering system. The prototype can answer queries and questions within the chosen football (soccer) test domain, but the system has the flexibility for being ported...
PARKER, GARY J.; SOLA, DONALD F.
The essentials of Ayacucho grammar were presented in the first volume of this series, Spoken Ayacucho Quechua, Units 1-10. The 10 units in this volume (11-20) are intended for use in an intermediate or advanced course, and present the student with lengthier and more complex dialogs, conversations, "listening-ins," and dictations as well…
SOLA, DONALD F.; AND OTHERS
This second volume of an introductory course in spoken Cuzco Quechua also comprises enough material for one intensive summer session course or one semester of semi-intensive instruction (120 class hours). The method of presentation is essentially the same as in the first volume, with further contrastive linguistic analysis of English-Quechua…
LASTRA, YOLANDA; SOLA, DONALD F.
Units 13-24 of the spoken Cochabamba Quechua course follow the general format of the first volume (Units 1-12). This second volume is intended for use in an intermediate or advanced course and includes more complex dialogs, conversations, "listening-ins," and dictations, as well as grammar and exercise sections covering additional…
PARKER, GARY J.; SOLA, DONALD F.
This beginning course in Ayacucho Quechua, spoken by about a million people in south-central Peru, was prepared to introduce the phonology and grammar of this dialect to speakers of English. The first of two volumes, it serves as a text for a 6-week intensive course of 20 class hours a week. The authors compare and contrast significant features of…
Thomas, Earl W.
This is a first-year text of Portuguese grammar based on the Portuguese of moderately educated Brazilians from the area around Rio de Janeiro. Spoken idiomatic usage is emphasized. An important innovation is found in the presentation of verb tenses; they are presented in the order in which the native speaker learns them. The text is intended to…
Ordelman, Roeland J.F.; Heeren, W.F.L.; Huijbregts, M.A.H.; Hiemstra, Djoerd; de Jong, Franciska M.G.; Larson, M; Fernie, K; Oomen, J; Cigarran, J.
This paper presents and discusses ongoing work aiming at affordable disclosure of real-world spoken word archives in general, and in particular of a collection of recorded interviews with Dutch survivors of World War II concentration camp Buchenwald. Given such collections, the least we want to be
Larson, M; Ordelman, Roeland J.F.; Heeren, W.F.L.; Fernie, K; de Jong, Franciska M.G.; Huijbregts, M.A.H.; Oomen, J; Hiemstra, Djoerd
This paper presents and discusses ongoing work aiming at affordable disclosure of real-world spoken heritage archives in general, and in particular of a collection of recorded interviews with Dutch survivors of World War II concentration camp Buchenwald. Given such collections, we at least want to
This study expands contemporary theorising about students' conceptions of equality. A nationally representative sample of New Zealand students was asked to provide a spoken numerical response and an explanation as they solved an arithmetic additive missing-number problem. Students' responses were conceptualised as acts of communication and…
Percy-Smith, Lone; Cayé-Thomasen, Per; Breinegaard, Nina
The present study demonstrates a very strong effect of the parental communication mode on the auditory capabilities and speech/language outcome of cochlear-implanted children. The children exposed to spoken language had higher odds of scoring high in all tests applied, and the findings suggest a very clear benefit of spoken language communication with a cochlear-implanted child.
This study addresses the issue of promoting effective business spoken English among enterprise staff in China. It aims to assess spoken English learning methods and to identify the difficulties staff face with oral English expression in the business domain. It also provides strategies for enhancing enterprise staff's level of business spoken English.
Sridhar, Kamal K.
Language and literacy issues in India are reviewed in terms of background, steps taken to combat illiteracy, and some problems associated with literacy. The following facts are noted: India has 106 languages spoken by more than 685 million people, there are several minor script systems, a major language has different dialects, a language may use…
Varelas, Maria; Pappas, Christine; Barry, Anne; O'Neill, Amy
Presents units that address states of matter and changes of states of matter linked with the water cycle and integrates literacy and science. Discusses the language in science books. Lists characteristics of good science inquiry units. (Contains 11 references.) (ASK)
The effects of word frequency and syllable frequency are well-established phenomena in domains such as spoken production in alphabetic languages. Chinese, as a non-alphabetic language, presents unique lexical and phonological properties in speech production. For example, the proximate unit of phonological encoding is the syllable in Chinese but the segment in Dutch, French or English. The present study investigated the effects of word frequency and syllable frequency, and their interaction, in Chinese written and spoken production. Significant facilitatory word frequency and syllable frequency effects were observed in spoken as well as in written production. The syllable frequency effect in writing indicated that phonological properties (i.e., syllable frequency) constrain orthographic output via a lexical route, at least in Chinese written production. However, the syllable frequency effect over repetitions diverged between the two modalities: it was significant in the first two repetitions in spoken production, whereas it was significant only in the second repetition in written production. Given the fragility of the syllable frequency effect in writing, we suggest that the phonological influence in handwritten production is not mandatory and universal, and that it is modulated by experimental manipulations. This provides evidence for the orthographic autonomy hypothesis, rather than the phonological mediation hypothesis. The absence of an interaction between word frequency and syllable frequency showed that the syllable frequency effect is independent of the word frequency effect in both spoken and written output modalities. The implications of these results for written production models are discussed.
Individual classroom experiences: a sociocultural comparison for understanding EFL classroom language learning
This paper compares the classroom experiences (CEs) of two university students in their process of learning English as a foreign language (EFL). The CEs emerged from individual interviews, in which classroom videos promoted reflection. The analysis revealed that cognitive, social and affective experiences directly influence the learning process, and that those which refer to setting, learners' personal background, beliefs and goals influence the learning process indirectly. The analysis also revealed the singularity of some of these CEs, which led to their categorization as individual CEs (ICEs). When comparing the ICEs of the two participants, the importance of a sociocultural analysis of the classroom learning process becomes evident. We conclude with an analysis of the value of sociocultural theory in the study of classroom EFL learning and with the implications of this study for teachers and researchers.
Grazzani, Ilaria; Ornaghi, Veronica; Conte, Elisabetta; Pepe, Alessandro; Caprin, Claudia
Although a significant body of research has investigated the relationships among children's emotion understanding (EU), theory of mind (ToM), and language abilities, as far as we know no study to date has been conducted with a sizeable sample of both preschool and school-age children exploring the direct effect of EU on ToM when the role of language was evaluated as a potential exogenous factor in a single comprehensive model. Participants in the current study were 389 children (age range: 37-97 months, M = 60.79 months, SD = 12.66), to whom a False-Belief understanding battery, the Test of Emotion Comprehension, and the Peabody Test were administered. Children's EU, ToM, and language ability (receptive vocabulary) were positively correlated. Furthermore, EU scores explained variability in ToM scores independently of participants' age and gender. Finally, language was found to play a crucial role both in explaining variance in ToM scores and in mediating the relationship between EU and ToM. We discuss the theoretical and educational implications of these outcomes, particularly in relation to offering social and emotional learning programs through schools.
Wiseheart, Rebecca; Altmann, Lori J P
Individuals with dyslexia demonstrate syntactic difficulties on tasks of language comprehension, yet little is known about spoken language production in this population. The aims were to investigate whether spoken sentence production in college students with dyslexia is less proficient than in typical readers, and to determine whether group differences can be attributed to cognitive differences between groups. Fifty-one college students with and without dyslexia were asked to produce sentences from stimuli comprising a verb and two nouns. Verb types varied in argument structure and morphological form and nouns varied in animacy. Outcome measures were precision (measured by fluency, grammaticality and completeness) and efficiency (measured by response times). Vocabulary and working memory tests were also administered and used as predictors of sentence production performance. Relative to non-dyslexic peers, students with dyslexia responded significantly slower and produced sentences that were significantly less precise in terms of fluency, grammaticality and completeness. The primary predictors of precision and efficiency were working memory, which differed between groups, and vocabulary, which did not. College students with dyslexia were significantly less facile and flexible on this spoken sentence-production task than typical readers, which is consistent with previous studies of school-age children with dyslexia. Group differences in performance were traced primarily to limited working memory, and were somewhat mitigated by strong vocabulary. © 2017 Royal College of Speech and Language Therapists.
Devereux, Barry J.; Taylor, Kirsten I.; Randall, Billi; Geertzen, Jeroen; Tyler, Lorraine K.
Understanding spoken words involves a rapid mapping from speech to conceptual representations. One distributed feature-based conceptual account assumes that the statistical characteristics of concepts' features--the number of concepts they occur in ("distinctiveness/sharedness") and likelihood of co-occurrence ("correlational…
Thomas, Joyce; McDonagh, Deana
The ability to communicate with others and express ourselves is a basic human need. As we develop our understanding of the world, based on our upbringing, education and so on, our perspective and the way we communicate can differ from those around us. Engaging and interacting with others is a critical part of healthy living. It is the responsibility of the individual to ensure that they are understood in the way they intended. Shared language refers to people developing understanding amongst themselves based on language (e.g. spoken, text) to help them communicate more effectively. The key to understanding language is to first notice and be mindful of your language. Developing a shared language is an ongoing process that requires intention and time, which results in better understanding. Shared language is critical to collaboration, and collaboration is critical to business and education. With whom and how many people do you connect? Your 'shared language' makes a difference in the world. So, how do we successfully do this? This paper shares several strategies. Your sphere of influence will carry forward what and how you are communicating. Developing and nurturing a shared language is an essential element to enhance communication and collaboration, whether it is simply between partners or across the larger community of business and customers. Constant awareness and education are required to maintain the shared language. We are living in an increasingly smaller global community. Business is built on relationships. If you invest in developing shared language, your relationships and your business will thrive.
Marshall, Chloë; Mason, Kathryn; Rowley, Katherine; Herman, Rosalind; Atkinson, Joanna; Woll, Bencie; Morgan, Gary
Children with specific language impairment (SLI) perform poorly on sentence repetition tasks across different spoken languages, but until now, this methodology has not been investigated in children who have SLI in a signed language. Users of a natural sign language encode different sentence meanings through their choice of signs and by altering…
Verhoeven, Ludo; Steenge, Judit; van Leeuwe, Jan; van Balkom, Hans
In this study, we investigated which componential skills can be distinguished in the second language (L2) development of 140 bilingual children with specific language impairment in the Netherlands, aged 6-11 years, divided into 3 age groups. L2 development was assessed by means of spoken language tasks representing different language skills…
This article reports on a survey with 170 school-age children growing up with two or more languages in the Canadian province of Ontario where English is the majority language, French is a minority language, and numerous other minority languages may be spoken by immigrant or Indigenous residents. Within this context the study focuses on minority…
Full Text Available This paper explores the auditory lexical access of mono-morphemic compounds in Chinese as a way of understanding the role of orthography in the recognition of spoken words. In traditional Chinese linguistics, a compound is a word written with two or more characters whether or not they are morphemic. A monomorphemic compound may either be a binding word, written with characters that only appear in this one word, or a non-binding word, written with characters that are chosen for their pronunciation but that also appear in other words. Our goal was to determine if this purely orthographic difference affects auditory lexical access by conducting a series of four experiments with materials matched by whole-word frequency, syllable frequency, cross-syllable predictability, cohort size, and acoustic duration, but differing in binding. An auditory lexical decision task (LDT) found an orthographic effect: binding words were recognized more quickly than non-binding words. However, this effect disappeared in an auditory repetition task and in a visual LDT with the same materials, implying that the orthographic effect during auditory lexical access was localized to the decision component and involved the influence of cross-character predictability without the activation of orthographic representations. This claim was further confirmed by overall faster recognition of spoken binding words in a cross-modal LDT with different types of visual interference. The theoretical and practical consequences of these findings are discussed.
Rosset, Sophie; Garnier-Rizet, Martine; Devillers, Laurence; Natural Interaction with Robots, Knowbots and Smartphones: Putting Spoken Dialog Systems into Practice
These proceedings presents the state-of-the-art in spoken dialog systems with applications in robotics, knowledge access and communication. It addresses specifically: 1. Dialog for interacting with smartphones; 2. Dialog for Open Domain knowledge access; 3. Dialog for robot interaction; 4. Mediated dialog (including crosslingual dialog involving Speech Translation); and, 5. Dialog quality evaluation. These articles were presented at the IWSDS 2012 workshop.
Alimi, Modupe M.
Many African countries exhibit complex patterns of language use because of linguistic pluralism. The situation is often compounded by the presence of at least one foreign language that is either the official or second language. The language situation in Botswana depicts this complex pattern. Out of the 26 languages spoken in the country, including…
Charity Hudley, Anne H.; Mallinson, Christine
In today's culturally diverse classrooms, students possess and use many culturally, ethnically, and regionally diverse English language varieties that may differ from standardized English. This book helps classroom teachers become attuned to these differences and offers practical strategies to support student achievement while fostering positive…
Prizant, Barry M.
The paper examines theoretical issues regarding the symptomatology of echolalia in the language of visually impaired children. Literature on echolalia is reviewed from a variety of perspectives and clinical work and research with visual impairment and with autism is discussed. Problems of definition are cited, and explanations for occurrence of…
DeKeyser, Robert M.
The effect of age of acquisition on ultimate attainment in second language learning has been a controversial topic for years. After providing a very brief overview of the ideas that are at the core of the controversy, I discuss the two main reasons why these issues are so controversial: conceptual misunderstandings and methodological difficulties.…
Mainela-Arnold, Elina; Evans, Julia L.; Alibali, Martha W.
Purpose: The authors investigated mental representations of Piagetian conservation tasks in children with specific language impairment (SLI) and typically developing peers. Children with SLI have normal nonverbal intelligence; however, they exhibit difficulties in Piagetian conservation tasks. The authors tested the hypothesis that conservation…
The prevalence of academic procrastination has long been the subject of attention among researchers. However, there is still a paucity of studies examining language learners since most of the studies focus on similar participants such as psychology students. The present study was conducted among students trying to learn English in the first year…
Full Text Available The paper deals with the spoken language technologies that could enable so-called smart (intelligent) surveillance systems to listen, understand and speak Slovenian in the near future. Advanced computational methods of artificial perception and pattern recognition enable such systems to be, at least to some extent, aware of the environment, the presence of people and other phenomena that could be subject to surveillance. Speech is one such phenomenon, with the potential to be a key source of information in certain security situations. Technologies that enable automatic recognition of speech, of speakers and of their psychophysical state through computer analysis of acoustic speech signals provide an entirely new dimension to the development of smart surveillance systems. Automatic recognition of spoken threats, of screaming and crying for help, and of a suspicious psychophysical state of a speaker endows such systems, to some extent, with intelligent behaviour. The paper investigates the current state of development of these technologies, the requirements and possibilities for applying them to the Slovenian spoken language, and various possible security application scenarios. It also addresses the broader legal and ethical issues raised by the development and use of such technologies, especially as audio surveillance is one of the most sensitive issues in privacy protection.
Taylor, Blaine J.
Many advocates of the deaf fear that a whole generation of deaf children will be lost emotionally, socially, and educationally. This fear stems from the fact that many children who are deaf are not having their linguistic, sociocultural, and communicative needs met at home or at school (King, 1993). Their needs are not met primarily for three reasons. First, the hearing culture is often inaccessible to them because they do not understand most of the spoken language around them. When children ...
A brief review of Indian education focuses on special problems caused by overcrowded schools, insufficient funding, and the status of education itself in the Indian social structure. Language instruction in India, a complex issue due largely to the numerous official languages currently spoken, is commented on with special reference to the problem…
This volume examines mathematics as a product of the human mind and analyzes the language of "pure mathematics" from various advanced-level sources. Through analysis of the foundational texts of mathematics, it is demonstrated that math is a complex literary creation, containing objects, actors, actions, projection, prediction, planning, explanation, evaluation, roles, image schemas, metonymy, conceptual blending, and, of course, (natural) language. The book follows the narrative of mathematics in a typical order of presentation for a standard university-level algebra course, beginning with analysis of set theory and mappings and continuing along a path of increasing complexity. At each stage, primary concepts, axioms, definitions, and proofs will be examined in an effort to unfold the tell-tale traces of the basic human cognitive patterns of story and conceptual blending. This book will be of interest to mathematicians, teachers of mathematics, cognitive scientists, cognitive linguists, and anyone interested...
Full Text Available Conceptual knowledge accessed by language may involve the re-activation of the associated primary sensory-motor processes. Whether these embodied representations are indeed constitutive of conceptual knowledge is hotly debated, particularly since direct evidence that sensory-motor expertise can improve conceptual processing is scarce. In this study, we sought this crucial piece of evidence by training naive healthy subjects to perform complex manual actions and by measuring, before and after training, their performance in a semantic language task. 19 participants engaged in 3 weeks of motor training. Each participant was trained in 3 complex manual actions (e.g. origami). Before and after the training period, each subject underwent a series of manual dexterity tests and a semantic language task. The latter consisted of a sentence-picture semantic congruency judgment task, with 6 target congruent sentence-picture pairs (semantically related to the trained manual actions), 6 non-target congruent pairs (semantically unrelated), and 12 filler incongruent pairs. Manual action training induced a significant improvement in all manual dexterity tests, demonstrating the successful acquisition of sensory-motor expertise. In the semantic language task, the reaction times to both target and non-target congruent sentence-image pairs decreased after action training, indicating more efficient conceptual-semantic processing. Noteworthy, the reaction times for target pairs decreased more than those for non-target pairs, as indicated by the 2x2 interaction. These results were confirmed when controlling for the potential bias of increased frequency of use of target lexical items during manual training. The results of the present study suggest that sensory-motor expertise gained by training of specific manual actions can lead to an improvement of cognitive-linguistic skills related to the specific conceptual-semantic domain associated with the trained actions.
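The 2x2 interaction reported in the abstract above (target vs. non-target pairs, before vs. after training) is the usual difference-of-differences arithmetic on cell means. The sketch below illustrates the computation; the reaction times are invented for illustration, not the study's data:

```python
# Hypothetical mean reaction times (ms); all numbers are invented for illustration.
rt = {
    ("target", "pre"): 980.0, ("target", "post"): 880.0,
    ("nontarget", "pre"): 975.0, ("nontarget", "post"): 935.0,
}

def training_gain(cond):
    """Speed-up from pre to post training (positive = faster after training)."""
    return rt[(cond, "pre")] - rt[(cond, "post")]

# Interaction term of the 2x2 design: difference of the two training gains.
interaction = training_gain("target") - training_gain("nontarget")
print(f"target gain={training_gain('target'):.0f} ms, "
      f"nontarget gain={training_gain('nontarget'):.0f} ms, "
      f"interaction={interaction:.0f} ms")
# A positive interaction means target pairs sped up more than non-target pairs,
# which is the pattern the study reports.
```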
Mayberry, Marshall R.; Crocker, Matthew W.
The Adaptive Mechanisms in Human Language Processing (ALPHA) project features both experimental and computational tracks designed to complement each other in the investigation of the cognitive mechanisms that underlie situated human utterance processing. The models developed in the computational track replicate results obtained in the experimental track and, in turn, suggest further experiments by virtue of behavior that arises as a by-product of their operation.
Spoken dialog systems have the potential to offer highly intuitive user interfaces, as they allow systems to be controlled using natural language. However, the complexity inherent in natural language dialogs means that careful testing of the system must be carried out from the very beginning of the design process. This book examines how user models can be used to support such early evaluations in two ways: by running simulations of dialogs, and by estimating the quality judgments of users. First, a design environment supporting the creation of dialog flows, the simulation of dialogs, and the analysis of the simulated data is proposed. How the quality of user simulations may be quantified with respect to their suitability for both formative and summative evaluation is then discussed. The remainder of the book is dedicated to the problem of predicting quality judgments of users based on interaction data. New modeling approaches are presented, which process the dialogs as sequences, and which allow knowl...
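The idea of evaluating a dialog flow by running simulated users, as described above, can be sketched as a toy state machine with a probabilistic user model; the flow, states, and probabilities below are hypothetical illustrations, not the book's actual framework:

```python
import random

# Hypothetical dialog flow: each state maps a system prompt to possible
# (user_answer, next_state) transitions. A simulated user samples answers,
# letting us estimate dialog length before any real user test.
FLOW = {
    "ask_city": [("city", "ask_date"), ("unclear", "ask_city")],
    "ask_date": [("date", "confirm"), ("unclear", "ask_date")],
    "confirm":  [("yes", "done"), ("no", "ask_city")],
}

def simulate(p_unclear=0.2, seed=0, max_turns=50):
    """Run one simulated dialog; return (number of turns, reached 'done')."""
    rng = random.Random(seed)
    state, turns = "ask_city", 0
    while state != "done" and turns < max_turns:
        options = FLOW[state]
        # First option = cooperative answer; the rest are problematic ones.
        answer, nxt = options[0] if rng.random() > p_unclear else rng.choice(options[1:])
        state = nxt
        turns += 1
    return turns, state == "done"

lengths = [simulate(seed=s)[0] for s in range(1000)]
print("mean simulated dialog length:", sum(lengths) / len(lengths))
```

Even this toy version shows the formative use of simulation: raising `p_unclear` immediately reveals how sensitive the flow's length is to misunderstood turns.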
Full Text Available This study investigates international students’ perceptions of the issues they face using English as a second language while attending American higher education institutions. In order to fully understand those challenges involved in learning English as a Second Language, it is necessary to know the extent to which international students have mastered the English language before they start their study in America. Most international students experience an overload of English language input upon arrival in the United States. Cultural differences influence international students’ learning of English in other ways, including international students’ isolation within their communities and America’s lack of teaching listening skills to its own students. Other factors also affect international students’ learning of English, such as the many forms of informal English spoken in the USA, as well as a variety of dialects. Moreover, since most international students have learned English in an environment that precluded much contact with spoken English, they often speak English with an accent that reveals their own language. This study offers informed insight into the complicated process of simultaneously learning the language and culture of another country. Readers will find three main voices in addition to the international students who “speak” (in quotation marks) throughout this article. Hong Li, a Chinese doctoral student in English Education at the University of Missouri-Columbia, authored the “regular” text. Second, Roy F. Fox’s voice appears in italics. Fox is Professor of English Education and Chair of the Department of Learning, Teaching, and Curriculum at the University of Missouri-Columbia. Third, Dario J. Almarza’s voice appears in boldface. Almarza, a native of Venezuela, is an Assistant Professor of Social Studies Education at the same institution.
Newman, Aaron J; Supalla, Ted; Fernandez, Nina; Newport, Elissa L; Bavelier, Daphne
Sign languages used by deaf communities around the world possess the same structural and organizational properties as spoken languages: In particular, they are richly expressive and also tightly grammatically constrained. They therefore offer the opportunity to investigate the extent to which the neural organization for language is modality independent, as well as to identify ways in which modality influences this organization. The fact that sign languages share the visual-manual modality with a nonlinguistic symbolic communicative system-gesture-further allows us to investigate where the boundaries lie between language and symbolic communication more generally. In the present study, we had three goals: to investigate the neural processing of linguistic structure in American Sign Language (using verbs of motion classifier constructions, which may lie at the boundary between language and gesture); to determine whether we could dissociate the brain systems involved in deriving meaning from symbolic communication (including both language and gesture) from those specifically engaged by linguistically structured content (sign language); and to assess whether sign language experience influences the neural systems used for understanding nonlinguistic gesture. The results demonstrated that even sign language constructions that appear on the surface to be similar to gesture are processed within the left-lateralized frontal-temporal network used for spoken languages-supporting claims that these constructions are linguistically structured. Moreover, although nonsigners engage regions involved in human action perception to process communicative, symbolic gestures, signers instead engage parts of the language-processing network-demonstrating an influence of experience on the perception of nonlinguistic stimuli.
Gloria Avendaño de Barón
Full Text Available This article presents the results of a research project whose aims were the following: to determine the frequency of the use of pronoun forms in polite treatment sumercé, usted and tú, according to differences in gender, age and level of education, among speakers in Tunja; to describe the sociodiscursive variations and to explain the relationship between usage and courtesy. The methodology of the Project for the Sociolinguistic Study of Spanish in Spain and in Latin America (PRESEEA) was used, and a sample of 54 speakers was taken. The results indicate that the most frequently used pronoun in Tunja to express friendliness and affection is sumercé, followed by usted and tú; women and men of different generations and levels of education alternate the use of these three forms in the context of narrative, descriptive, argumentative and explanatory speech.
Full Text Available A speech processing system is often required to perform in a different environment than the one for which it was initially developed. In such a case, data from the new environment may be more limited in quantity and of poorer quality than...
Assessment-rubric fragment (criteria, with marks where recoverable): Content (25); Conveying interaction between lecturer and students (5); Managing equipment; Breath control and volume (5); Intonation and voice quality; Use of coping techniques; Pronunciation and general clarity; Error correction; Fluency of interpreting product (hesitations, silences, etc.); Interpreting competency (15).
Full Text Available Speech Recognition (ASR) systems in the developing world is severely inhibited. Given that few task-specific corpora exist and speech technology systems perform poorly when deployed in a new environment, we investigate the use of acoustic model adaptation...
André, Elisabeth; Rehm, Matthias; Minker, Wolfgang
While most dialogue systems restrict themselves to the adjustment of the propositional contents, our work concentrates on the generation of stylistic variations in order to improve the user’s perception of the interaction. To accomplish this goal, our approach integrates a social theory of politeness with a cognitive theory of emotions. We propose a hierarchical selection process for politeness behaviors in order to enable the refinement of decisions in case additional context information becomes available.
Similarly, we included derivations (mostly plurals and possessives) of many open-class words in the domain. We also added about 400 concatenated word... utterances using a system of 'realization rules', which map the grammatical relation an argument bears to the head onto the semantic relation... syntactic categories as well. Representations of this form contain significantly more internal structure than specialized sublanguage models. This can be
The Dravidian language family consists of about 80 varieties (Hammarström H. 2016 Glottolog 2.7) spoken by 220 million people across southern and central India and surrounding countries (Steever SB. 1998 In The Dravidian languages (ed. SB Steever), pp. 1–39: 1). Neither the geographical origin of the Dravidian language homeland nor its exact dispersal through time are known. The history of these languages is crucial for understanding prehistory in Eurasia, because despite their current restricted range, these languages played a significant role in influencing other language groups including Indo-Aryan (Indo-European) and Munda (Austroasiatic) speakers. Here, we report the results of a Bayesian phylogenetic analysis of cognate-coded lexical data, elicited first hand from native speakers, to investigate the subgrouping of the Dravidian language family, and provide dates for the major points of diversification. Our results indicate that the Dravidian language family is approximately 4500 years old, a finding that corresponds well with earlier linguistic and archaeological studies. The main branches of the Dravidian language family (North, Central, South I, South II) are recovered, although the placement of languages within these main branches diverges from previous classifications. We find considerable uncertainty with regard to the relationships between the main branches. PMID:29657761
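As an illustration of what "cognate-coded lexical data" means in analyses like the one above, the sketch below converts a toy cognate-class table into the binary character matrix that Bayesian phylogenetic tools typically consume, and compares languages by Hamming distance. The language names echo the study's family, but every cognate assignment here is invented for illustration:

```python
# Toy cognate data: for each meaning, languages sharing a cognate class get a 1
# in the same binary character column (the standard coding for phylogenetics).
cognates = {  # meaning -> {language: cognate class}; assignments are invented
    "water": {"Tamil": "A", "Kannada": "A", "Telugu": "B", "Brahui": "C"},
    "eye":   {"Tamil": "A", "Kannada": "A", "Telugu": "A", "Brahui": "B"},
    "tooth": {"Tamil": "A", "Kannada": "B", "Telugu": "B", "Brahui": "B"},
}
languages = ["Tamil", "Kannada", "Telugu", "Brahui"]

# One binary column per (meaning, cognate class) pair.
columns = []
for meaning, classes in sorted(cognates.items()):
    for cls in sorted(set(classes.values())):
        columns.append([1 if classes[lang] == cls else 0 for lang in languages])

matrix = {lang: [col[i] for col in columns] for i, lang in enumerate(languages)}

def hamming(a, b):
    """Number of binary characters on which two languages differ."""
    return sum(x != y for x, y in zip(matrix[a], matrix[b]))

print("Tamil row:", matrix["Tamil"])
print("Tamil-Kannada distance:", hamming("Tamil", "Kannada"))
print("Tamil-Brahui distance:", hamming("Tamil", "Brahui"))
```

A Bayesian analysis would model character gains and losses on a tree rather than use raw distances, but the binary matrix built here is the shared input representation.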
Full Text Available The goal of this project in Estonia was to determine what languages are spoken by students from the 2nd to the 5th year of basic school at their homes in Tallinn, the capital of Estonia. At the same time, this problem was also studied in other segregated regions of Estonia: Kohtla-Järve and Maardu. According to the database of the population census from the year 2000 (Estonian Statistics Executive Office's census 2000), there are representatives of 142 ethnic groups living in Estonia, speaking a total of 109 native languages. At the same time, the database doesn’t state which languages are spoken at homes. The material presented in this article belongs to the research topic “Home Language of Basic School Students in Tallinn” from years 2007–2008, specifically financed and ordered by the Estonian Ministry of Education and Research (grant No. ETF 7065) in the framework of an international study called “Multilingual Project”. It was determined what language is dominating in everyday use, what are the factors for choosing the language for communication, what are the preferred languages and language skills. This study reflects the actual trends of the language situation in these cities.
easily transformed into a regrettable mistake (don’t cry over spilt milk) if G is not characterized as a fleeting goal and a recovery plan therefore... technical literature is characterized by very dry and literal language. If there is one place where metaphors might not intrude, it must be when people... from the point of view of both evidential support and falsification? I ask it because you didn’t say anything about it. A: Well, I think there’s a lot
Larsen, Lars Bo
This work is centred on the methods and problems associated with defining and measuring the usability of Spoken Dialogue Systems (SDS). The starting point is the fact that speech-based interfaces have, several times during the last 20 years, fallen short of the high expectations and predictions held by industry, researchers and analysts. Several studies in the SDS literature indicate that this can be ascribed to a lack of attention from the speech technology community towards the usability of such systems. The experimental results presented in this work are based on a field trial with the OVID home...
Rodd, Jennifer M.; Longe, Olivia A.; Randall, Billi; Tyler, Lorraine K.
Spoken language comprehension is known to involve a large left-dominant network of fronto-temporal brain regions, but there is still little consensus about how the syntactic and semantic aspects of language are processed within this network. In an fMRI study, volunteers heard spoken sentences that contained either syntactic or semantic ambiguities…
Lobel, Jason William; Paputungan, Ade Tatak
This paper consists of a short multimedia introduction to Lolak, a near-extinct Greater Central Philippine language traditionally spoken in three small communities on the island of Sulawesi in Indonesia. In addition to being one of the most underdocumented languages in the area, it is also spoken by one of the smallest native speaker populations…
Roy-Campbell, Zaline M.
English is spoken in five countries as the native language and in numerous other countries as an official language and the language of instruction. In countries where English is the native language, it is taught to speakers of other languages as an additional language to enable them to participate in all domains of life of that country. In many…
Dekker, Diane; Young, Catherine
There are more than 6000 languages spoken by the 6 billion people in the world today--however, those languages are not evenly divided among the world's population--over 90% of people globally speak only about 300 majority languages--the remaining 5700 languages being termed "minority languages". These languages represent the…
Full Text Available This paper describes the phonology of the Sida language, a Tibeto-Burman language spoken by approximately 3,900 people in Laos and Vietnam. The data presented here are from the variety spoken in Luang Namtha province of northwestern Laos, and the paper focuses on a synchronic description of the fundamentals of the Sida phonological system. Several issues of diachronic interest are also discussed in the context of the diversity of the Southern Loloish group of languages, many of which are spoken in Laos and have not yet been described in detail.
Havas, Viktória; Taylor, Jsh; Vaquero, Lucía; de Diego-Balaguer, Ruth; Rodríguez-Fornells, Antoni; Davis, Matthew H
We studied the initial acquisition and overnight consolidation of new spoken words that resemble words in the native language (L1) or in an unfamiliar, non-native language (L2). Spanish-speaking participants learned the spoken forms of novel words in their native language (Spanish) or in a different language (Hungarian), which were paired with pictures of familiar or unfamiliar objects, or no picture. We thereby assessed, in a factorial way, the impact of existing knowledge (schema) on word learning by manipulating both semantic (familiar vs unfamiliar objects) and phonological (L1- vs L2-like novel words) familiarity. Participants were trained and tested with a 12-hr intervening period that included overnight sleep or daytime awake. Our results showed (1) benefits of sleep to recognition memory that were greater for words with L2-like phonology and (2) that learned associations with familiar but not unfamiliar pictures enhanced recognition memory for novel words. Implications for complementary systems accounts of word learning are discussed.
This work contains the first comprehensive description of Abui, a language of the Trans New Guinea family spoken by approximately 16,000 speakers in the central part of the Alor Island in Eastern Indonesia. The description focuses on the northern dialect of Abui as spoken in the village
Lindhe, Christina; Hartelius, Lena
The aim of the study was to describe the subjective ratings of the course 'Training of the student's own voice and speech', from a student-centred perspective. A questionnaire was completed after each of the six individual sessions. Six speech and language pathology (SLP) students rated how they perceived the practical exercises in terms of doing and understanding. The results showed that five of the six participants rated the exercises as significantly easier to understand than to do. The exercises were also rated as easier to do over time. Results are interpreted within a theoretical framework of approaches to learning. The findings support the importance of both the physical and reflective aspects of the voice training process.
Friedrich (Fritz) W. de Wet
Full Text Available Venturing to speak the biblical language of the kingdom of God, with its distinct covenantal intensity, in the context of a South African society in transition from paternalistic power structures to liberal democratic structures is not easy. How should the language of the kingdom of God be spoken in a society that demands ‘non-intrusive’ and ‘politically correct’ speech without – in the process – rendering the intense intentionality of its covenantal roots to that of a speech without zeal? Having to face the daunting task of ‘translating’ kingdom language into a type of language that suits the present-day context without sacrificing or diminishing its powerful intentionality demands the development of a new sensitivity. Such a sensitivity is required to incentivise the accommodation of the dimensions of truthful, authoritative and authentic communication in spoken language. In this research article, the implications of the speech act theory, as pioneered by scholars such as J.L. Austin and J. Searle, are utilised to identify possible markers for such a venture. Insight into the locutionary, illocutionary and perlocutionary dimensions present in speech acts is indicated as a relevant starting point for attempting to obtain a more comprehensive and perspective-rich understanding of speaking the language of the kingdom of God in a way that fits the present South African context.
Ruohotie-Lyhty, Maria; Korppi, Aino; Moate, Josephine; Nyman, Tarja
Teaching is recognised as an emotional practice. Studies have highlighted the importance of teachers' emotional literacy in the development of pupils' emotional skills, the central position of emotions in teachers' ways of knowing, and in their professional development. This longitudinal study draws on a dialogic understanding of emotion to…
van der Kroon, Linda; Jauregi Ondarra, M.K.; ten Thije, J.D.
The development of intercultural communicative competence is increasingly important in this globalised and highly digitalised world. This implies the adequate understanding of otherness, which entails a myriad of complex cognitive competences, skills and behaviour. The TILA project aims to study how
Wekesa, Duncan Wasike
Mathematical knowledge and understanding are important not only for scientific progress and development but also for their day-to-day application in social sciences and arts, government, business and management studies, and household chores. But the general performance in school mathematics in Kenya has been poor over the years. There is evidence that…
Rappleye, Jeremy; Imoto, Yuki; Horiguchi, Sachiko
Globalisation and convergence in educational policy worldwide has reinvigorated, while rendering more complex, the classic theme of educational transfer. Framed by this wider pursuit of new understandings of a changing transfer/context puzzle, this paper explores how an ethnographic "thick description" might complement and extend recent…
Bucks, Gregory Warren
Computers have become an integral part of how engineers complete their work, allowing them to collect and analyze data, model potential solutions, and aid in production through automation and robotics. In addition, computers are essential elements of the products themselves, from tennis shoes to construction materials. An understanding of how…
In recent years, there has been a growing debate in the United States, Europe, and Australia about the nature of the Deaf community as a cultural community, and the recognition of signed languages as “real” or “legitimate” languages comparable in all meaningful ways to spoken languages. An important element of this ...
Palmer, Barbara C.; Chen, Chia-I; Chang, Sara; Leclere, Judith T.
According to the 2000 United States Census, Americans age five and older who speak a language other than English at home grew 47 percent over the preceding decade. This group accounts for slightly less than one in five Americans (17.9%). Among the minority languages spoken in the United States, Asian-language speakers, including Chinese and other…
Carter, Ronald; McCarthy, Michael
This article synthesises progress made in the description of spoken (especially conversational) grammar over the 20 years since the authors published a paper in this journal arguing for a re-thinking of grammatical description and pedagogy based on spoken corpus evidence. We begin with a glance back at the 16th century and the teaching of Latin…
This article reviews chronometric and neuroimaging evidence on attention to spoken word planning, using the WEAVER++ model as theoretical framework. First, chronometric studies on the time to initiate vocal responding and gaze shifting suggest that spoken word planning may require some attention,
Corneli, Joseph; Corneli, Miriam
"Natural Language," whether spoken and attended to by humans, or processed and generated by computers, requires networked structures that reflect creative processes in semantic, syntactic, phonetic, linguistic, social, emotional, and cultural modules. Being able to produce novel and useful behavior following repeated practice gets to the root of both artificial intelligence and human language. This paper investigates the modalities involved in language-like applications that computers -- and ...
Juan Manuel Montero
We describe the work on infusion of emotion into a limited-task autonomous spoken conversational agent situated in the domestic environment, using a need-inspired, task-independent emotion model (NEMO). In order to demonstrate the generation of affect through the use of the model, we describe the work of integrating it with a natural-language, mixed-initiative HiFi-control spoken conversational agent (SCA). NEMO and the host system communicate externally, removing the need for the Dialog Manager to be modified in order to be adaptive, as is done in most existing dialog systems. The first part of the paper concerns the integration between NEMO and the host agent. The second part summarizes the work on automatic affect prediction, namely frustration and contentment, from dialog features, a non-conventional source, in an attempt to move towards a more user-centric approach. The final part reports the evaluation results obtained from a user study in which both versions of the agent (non-adaptive and emotionally adaptive) were compared. The results provide substantial evidence of the benefits of adding emotion to a spoken conversational agent, especially in mitigating users' frustration and, ultimately, improving their satisfaction.
Van Rinsveld, Amandine; Schiltz, Christine; Landerl, Karin; Brunner, Martin; Ugen, Sonja
Differences between languages in terms of number naming systems may lead to performance differences in number processing. The current study focused on differences concerning the order of decades and units in two-digit number words (i.e., unit-decade order in German but decade-unit order in French) and how they affect number magnitude judgments. Participants performed basic numerical tasks, namely two-digit number magnitude judgments, and we used the compatibility effect (Nuerk et al. in Cognition 82(1):B25-B33, 2001) as a hallmark of language influence on numbers. In the first part we aimed to understand the influence of language on compatibility effects in adults coming from German or French monolingual and German-French bilingual groups (Experiment 1). The second part examined how this language influence develops at different stages of language acquisition in individuals with increasing bilingual proficiency (Experiment 2). Language systematically influenced magnitude judgments such that: (a) The spoken language(s) modulated magnitude judgments presented as Arabic digits, and (b) bilinguals' progressive language mastery impacted magnitude judgments presented as number words. Taken together, the current results suggest that the order of decades and units in verbal numbers may qualitatively influence magnitude judgments in bilinguals and monolinguals, providing new insights into how number processing can be influenced by language(s).
Shafiee Nahrkhalaji, Saeedeh; Lotfi, Ahmad Reza; Koosha, Mansour
The present study aims to reveal some facts concerning first-language (L1) and second-language (L2) spoken-word processing in unbalanced proficient bilinguals using behavioral measures. The intention here is to examine the effects of auditory repetition word priming and semantic priming in the first and second languages of these bilinguals. The other goal is to explore the effects of attention manipulation on implicit retrieval of perceptual and conceptual properties of spoken L1 and L2 words. In so doing, the participants performed auditory word priming and semantic priming as memory tests in their L1 and L2. In half of the trials of each experiment, they carried out the memory test while simultaneously performing a secondary task in the visual modality. The results revealed that effects of auditory word priming and semantic priming were present when participants processed L1 and L2 words in the full-attention condition. Attention manipulation reduced priming magnitude in both experiments in L2. Moreover, L2 word retrieval increased reaction times and reduced accuracy on the simultaneous secondary task so as to protect its own accuracy and speed.
The article gives new evidence about the adverb as a part of the grammatical system of the Ukrainian steppe dialect spread in the area between the Danube and the Dniester rivers. The author argues that the grammatical system of the dialect spoken in the village of Shevchenkove, Kiliya district, Odessa region, is determined by the historical development of the Ukrainian language rather than by the influence of neighboring dialects.
Finance and Economic Planning, Cross River and Akwa ... See Table 1. Table 1: Indigenous Languages Spoken in Biase ... used in education, in business, in religion, in the media ... far back as the seventeenth (17th) century (King, 1844).
Caselli, Naomi K; Pyers, Jennie E
Iconic mappings between words and their meanings are far more prevalent than once estimated and seem to support children's acquisition of new words, spoken or signed. We asked whether iconicity's prevalence in sign language overshadows two other factors known to support the acquisition of spoken vocabulary: neighborhood density (the number of lexical items phonologically similar to the target) and lexical frequency. Using mixed-effects logistic regressions, we reanalyzed 58 parental reports of native-signing deaf children's productive acquisition of 332 signs in American Sign Language (ASL; Anderson & Reilly, 2002) and found that iconicity, neighborhood density, and lexical frequency independently facilitated vocabulary acquisition. Despite differences in iconicity and phonological structure between signed and spoken language, signing children, like children learning a spoken language, track statistical information about lexical items and their phonological properties and leverage this information to expand their vocabulary.
Scarpaci, J L
This essay examines methodological problems concerning the conceptualization and operationalization of phenomena central to medical geography. Its main argument is that qualitative research can be strengthened if the differences between instrumental and apparent validity are better understood than current research in medical geography suggests. Its premise is that our definitions of key terms and concepts must be reinforced throughout the design of research if our knowledge and understanding are to be enhanced. In doing so, the paper aims to move the methodological debate beyond the simple dichotomies of quantitative vs qualitative approaches and logical positivism vs phenomenology. Instead, the argument is couched in a postmodernist hermeneutic sense which questions the validity of one discourse of investigation over another. The paper begins by discussing methods used in conceptualizing and operationalizing variables in quantitative and qualitative research design. Examples derive from concepts central to a geography of health-care behavior and well-being. The latter half of the essay shows the uses and misuses of validity studies in selected health services research and the current debate on national health insurance.
The word ''radioactivity'' has something scary about it; it makes us think of something intangible, creeping dangers, the mysterious ticking of Geiger counters, reactor disasters, dirty bombs, nuclear contamination and destruction. True: Whole landscapes were made uninhabitable by accidents involving radioactive material, such as at Windscale, Sellafield and Chernobyl and others that were kept largely secret from the public. While to some they brought premature death, for the great majority of the world population their effects have so far been insignificant. By contrast, how little known is the fact that natural radioactivity has been around since human beginnings and that the cells of the human body have always been equipped to repair damage from radioactive radiation or other causes, provided such damage does not occur too frequently. Elmar Traebert presents the physics underlying radioactivity without resorting to formulas and explains in an easily understandable manner the different types of radiation, their measurement and sources (in medicine, power plants, and weapons technology) and how they should be handled. He describes nuclear power plants and the safety problems they involve, sunburn, radiation therapy, uranium ammunition and uranium mining. Whoever knows about these things can more easily cope with his own fears and maybe allay some of them. He can also see through statements made by different interest groups with regard to radioactive material and duly form his own opinion.
Corina, David P.; Lawyer, Laurel A.; Cates, Deborah
Studies of deaf individuals who are users of signed languages have provided profound insight into the neural representation of human language. Case studies of deaf signers who have incurred left- and right-hemisphere damage have shown that left-hemisphere resources are a necessary component of sign language processing. These data suggest that, despite frank differences in the input and output modality of language, core left perisylvian regions universally serve linguistic function. Neuroimaging studies of deaf signers have generally provided support for this claim. However, more fine-tuned studies of linguistic processing in deaf signers are beginning to show evidence of important differences in the representation of signed and spoken languages. In this paper, we provide a critical review of this literature and present compelling evidence for language-specific cortical representations in deaf signers. These data lend support to the claim that the neural representation of language may show substantive cross-linguistic differences. We discuss the theoretical implications of these findings with respect to an emerging understanding of the neurobiology of language.
Moreno, Megan A; Ton, Adrienne; Selkie, Ellen; Evans, Yolanda
Nonsuicidal self-injury (NSSI) content is present on social media and may influence adolescents. Instagram is a popular site among adolescents in which NSSI-related terms are user-generated as hashtags (words preceded by a #). These hashtags may be ambiguous and thus challenging for those outside the NSSI community to understand. The purpose of this study was to evaluate the meaning, popularity, and content advisory warnings related to ambiguous NSSI hashtags on Instagram. This study used the search term "#selfharmmm" to identify public Instagram posts. Hashtag terms co-listed with #selfharmmm on each post were evaluated for inclusion criteria; selected hashtags were then assessed using a structured evaluation for meaning and consistency. We also investigated the total number of Instagram search hits for each hashtag at two time points and determined whether the hashtag prompted a Content Advisory warning. Our sample of 201 Instagram posts led to identification of 10 ambiguous NSSI hashtags. NSSI terms included #blithe, #cat, and #selfinjuryy. We discovered a popular image that described the broader community of NSSI and mental illness, called "#MySecretFamily." The term #MySecretFamily had approximately 900,000 search results at Time 1 and >1.5 million at Time 2. Only one-third of the relevant hashtags generated Content Advisory warnings. NSSI content is popular on Instagram and often veiled by ambiguous hashtags. Content Advisory warnings were not reliable; thus, parents and providers remain the cornerstone of prompting discussions about NSSI content on social media and providing resources for teens.
Pizer, Ginger; Walters, Keith; Meier, Richard P
Families with deaf parents and hearing children are often bilingual and bimodal, with both a spoken language and a signed one in regular use among family members. When interviewed, 13 American hearing adults with deaf parents reported widely varying language practices, sign language abilities, and social affiliations with Deaf and Hearing communities. Despite this variation, the interviewees' moral judgments of their own and others' communicative behavior suggest that these adults share a language ideology concerning the obligation of all family members to expend effort to overcome potential communication barriers. To our knowledge, such a language ideology is not similarly pervasive among spoken-language bilingual families, raising the question of whether there is something unique about family bimodal bilingualism that imposes different rights and responsibilities on family members than spoken-language family bilingualism does. This ideology unites an otherwise diverse group of interviewees, where each one preemptively denied being a "typical CODA [child of deaf adults]."
Lauren B. Collister
Twenty listeners were exposed to spoken and sung passages in English produced by three trained vocalists. Passages included representative words extracted from a large database of vocal lyrics, including both popular and classical repertoires. Target words were set within spoken or sung carrier phrases. Sung carrier phrases were selected from classical vocal melodies. Roughly a quarter of all words sung by an unaccompanied soloist were misheard. Sung passages showed a seven-fold decrease in intelligibility compared with their spoken counterparts. The perceptual mistakes occurring with vowels replicate previous studies showing the centralization of vowels. Significant confusions are also evident for consonants, especially voiced stops and nasals.
Johannessen, Janne Bondi; Salmons, Joseph C.; Westergaard, Marit; Anderssen, Merete; Arnbjörnsdóttir, Birna; Allen, Brent; Pierce, Marc; Boas, Hans C.; Roesch, Karen; Brown, Joshua R.; Putnam, Michael; Åfarli, Tor A.; Newman, Zelda Kahan; Annear, Lucas; Speth, Kristin
This book presents new empirical findings about Germanic heritage varieties spoken in North America: Dutch, German, Pennsylvania Dutch, Icelandic, Norwegian, Swedish, West Frisian and Yiddish, and varieties of English spoken both by heritage speakers and in communities after language shift. The volume focuses on three critical issues underlying the notion of ‘heritage language’: acquisition, attrition and change. The book offers theoretically-informed discussions of heritage language processes...
The study of language knowledge guided by a purely biological perspective prioritizes the study of syntax. The essential process of syntax is recursion--the ability to generate an infinite array of expressions from a limited set of elements. Researchers working within the biological perspective argue that this ability is possible only because of an innately specified genetic makeup that is specific to human beings. Such a view of language knowledge may be fully justified in discussions on biolinguistics, and in evolutionary biology. However, it is grossly inadequate in understanding language-learning problems, particularly those experienced by children with neurodevelopmental disorders such as developmental dyslexia, Williams syndrome, specific language impairment and autism spectrum disorders. Specifically, syntax-centered definitions of language knowledge completely ignore certain crucial aspects of language learning and use, namely, that language is embedded in a social context; that the role of environmental triggering as a learning mechanism is grossly underestimated; that a considerable extent of visuo-spatial information accompanies speech in day-to-day communication; that the developmental process itself lies at the heart of knowledge acquisition; and that there is a tremendous variation in the orthographic systems associated with different languages. All these (socio-cultural) factors can influence the rate and quality of spoken and written language acquisition, resulting in much variation in phenotypes associated with disorders known to have a genetic component. Delineation of such phenotypic variability requires inputs from varied disciplines such as neurobiology, neuropsychology, linguistics and communication disorders. In this paper, I discuss published research that questions cognitive modularity and emphasises the role of the environment for understanding linguistic capabilities of children with neurodevelopmental disorders. The discussion pertains
Thompson, Robin L; Vinson, David P; Woll, Bencie; Vigliocco, Gabriella
An arbitrary link between linguistic form and meaning is generally considered a universal feature of language. However, iconic (i.e., nonarbitrary) mappings between properties of meaning and features of linguistic form are also widely present across languages, especially signed languages. Although recent research has shown a role for sign iconicity in language processing, research on the role of iconicity in sign-language development has been mixed. In this article, we present clear evidence that iconicity plays a role in sign-language acquisition for both the comprehension and production of signs. Signed languages were taken as a starting point because they tend to encode a higher degree of iconic form-meaning mappings in their lexicons than spoken languages do, but our findings are more broadly applicable: Specifically, we hypothesize that iconicity is fundamental to all languages (signed and spoken) and that it serves to bridge the gap between linguistic form and human experience.
Laursen, Helle Pia
In various technological environments, we see an increase in scholarship that highlights the mixing and chaining of spoken, written and visual modalities and how the written and visual often precede or overrule spoken language. There seems to be a mismatch between current-day language practices … in language education and in language practices. As a consequence of this, and in the light of the increasing mobility and linguistic diversity in Europe, in this colloquium we address the need for a (re)conceptualization of the relation between language and literacy, drawing on data from different settings…
Gruhl, Jonathan C.; Erosheva, Elena A.; Gibbons, Laura E.; McCurry, Susan M.; Rhoads, Kristoffer; Nguyen, Viet; Arani, Keerthi; Masaki, Kamal; White, Lon
Objectives. Spoken bilingualism may be associated with cognitive reserve. Mastering a complicated written language may be associated with additional reserve. We sought to determine if midlife use of spoken and written Japanese was associated with lower rates of late life cognitive decline. Methods. Participants were second-generation Japanese-American men from the Hawaiian island of Oahu, born 1900–1919, free of dementia in 1991, and categorized based on midlife self-reported use of spoken and written Japanese (total n included in primary analysis = 2,520). Cognitive functioning was measured with the Cognitive Abilities Screening Instrument scored using item response theory. We used mixed effects models, controlling for age, income, education, smoking status, apolipoprotein E e4 alleles, and number of study visits. Results. Rates of cognitive decline were not related to use of spoken or written Japanese. This finding was consistent across numerous sensitivity analyses. Discussion. We did not find evidence to support the hypothesis that multilingualism is associated with cognitive reserve.
Kwon, Hyun Joo; Schallert, Diane L.
Ten adult readers, advanced in their control of two languages, Korean and English, were recruited for a study of academic literacy practices to examine the various linguistic repertoires on which they drew. Analysis of their language use revealed many instances of "translanguaging," that is, a flexible reliance on two languages to serve…
This work focuses on speech-based human-machine interaction. Specifically, a Spoken Dialogue System (SDS) that could be integrated into a robot is considered. Since Automatic Speech Recognition is one of the most sensitive tasks that must be confronted in such systems, the goal of this work is to improve the results obtained by this specific module. In order to do so, a hierarchical Language Model (LM) is considered. Different series of experiments were carried out using the proposed models over different corpora and tasks. The results obtained show that these models provide greater accuracy in the recognition task. Additionally, the influence of the Acoustic Modelling (AM) on the improvement percentage of the Language Models has also been explored. Finally, the use of hierarchical Language Models in a language understanding task has been successfully employed, as shown in an additional series of experiments.
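The abstract does not specify the form of the hierarchical LM, but the general idea of combining model levels can be sketched as an interpolated bigram-over-unigram model. The class name, the toy corpus, and the interpolation weight `alpha` below are all hypothetical illustrations, not the authors' actual model:

```python
from collections import Counter

class InterpolatedBigramLM:
    """Sketch of a two-level LM: a bigram level backed off to a unigram level.

    alpha weights the bigram level against the unigram level (assumed value).
    """
    def __init__(self, sentences, alpha=0.7):
        self.alpha = alpha
        self.unigrams = Counter()
        self.bigrams = Counter()
        for s in sentences:
            tokens = ["<s>"] + s.split()
            self.unigrams.update(tokens)
            self.bigrams.update(zip(tokens, tokens[1:]))
        self.total = sum(self.unigrams.values())

    def prob(self, word, prev):
        # Unigram estimate: how common the word is overall.
        p_uni = self.unigrams[word] / self.total
        # Bigram estimate: how common the word is after `prev`.
        p_bi = (self.bigrams[(prev, word)] / self.unigrams[prev]
                if self.unigrams[prev] else 0.0)
        # Linear interpolation of the two levels.
        return self.alpha * p_bi + (1 - self.alpha) * p_uni

# Toy domain utterances, as a robot-command SDS might see.
lm = InterpolatedBigramLM(["turn on the light",
                           "turn off the light",
                           "open the door"])
print(lm.prob("the", "on") > lm.prob("door", "on"))  # → True
```

In a recognizer, such scores would rerank or constrain the ASR hypotheses; the higher-level model keeps estimates usable when the specific bigram is unseen.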
Mean Foong, Oi; Low, Tang Jung; La, Wai Wan
The process of learning and understanding sign language may be cumbersome to some; therefore, this paper proposes a solution to this problem by providing a voice (English language) to sign language translation system using speech and image processing techniques. Speech processing, which includes speech recognition, is the study of recognizing the words being spoken, regardless of who the speaker is. This project uses template-based recognition as its main approach, in which the V2S system first needs to be trained with speech patterns based on some generic spectral parameter set. These spectral parameter sets are then stored as templates in a database. The system performs the recognition process by matching the parameter set of the input speech against the stored templates to finally display the sign language in video format. Empirical results show that the system has an 80.3% recognition rate.
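The abstract does not detail how input speech is matched against stored templates; a classic way to compare spectral parameter sequences of different lengths is dynamic time warping (DTW). The sketch below illustrates that matching step only; the two-dimensional "feature frames" and word labels are invented for illustration:

```python
import math

def dtw_distance(a, b):
    # Dynamic-time-warping distance between two feature sequences,
    # each a list of equal-length feature vectors.
    n, m = len(a), len(b)
    INF = float("inf")
    d = [[INF] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = math.dist(a[i - 1], b[j - 1])  # Euclidean frame distance
            d[i][j] = cost + min(d[i - 1][j],      # insertion
                                 d[i][j - 1],      # deletion
                                 d[i - 1][j - 1])  # match
    return d[n][m]

def recognize(utterance, templates):
    # Return the label of the stored template closest to the utterance.
    return min(templates, key=lambda label: dtw_distance(utterance, templates[label]))

# Toy "spectral parameter" templates: one feature sequence per trained word.
templates = {
    "hello": [(0.1, 0.9), (0.2, 0.8), (0.3, 0.7)],
    "bye":   [(0.9, 0.1), (0.8, 0.2), (0.7, 0.3)],
}
# A slightly time-stretched, noisy rendition of "hello".
spoken = [(0.12, 0.88), (0.12, 0.88), (0.22, 0.79), (0.31, 0.69)]
print(recognize(spoken, templates))  # → hello
```

In a real system the frames would be spectral vectors (e.g. filterbank or cepstral coefficients) rather than toy pairs, but the warping-and-nearest-template logic is the same.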
During the second half of the 19th century, the psychology of language was invented as a discipline for the sole purpose of explaining the evolution of spoken language. These efforts culminated in Wilhelm Wundt’s monumental Die Sprache of 1900, which outlined the psychological mechanisms involved in
Moeser, Shannon Dawn
College students were presented with an artificial language in which spoken nonsense words were correlated with visual references. Inferences regarding vocabulary acquisition were drawn, and it was suggested that the processing of the language was mediated through a semantic memory system. (CK)
There are conflicting claims among scholars on whether the structural outputs of the types of English spoken in countries where English is used as a second language give such speech forms the status of varieties of English. This study examined those morphological features considered to be marked features of the variety spoken in Nigeria according to Kirkpatrick (2011) and the variety spoken in Malaysia by considering the claims of the Missing Surface Inflection Hypothesis (MSIH), a Second Lan...
Quando ele fica bravo, o português sai direitinho; fora disso a gente não entende nada: o contexto multilíngüe da surdez e o (reconhecimento das línguas no seu entorno When he's mad, his portuguese is ok; otherwise, we can't understand anything: the deaf multilingual context and the acknowlegment of its surrounding languages
Ivani Rodrigues Silva
This article presents a reflection upon the languages that surround deaf children of hearing parents. Its aim is to shed light on the languages that are created in this context because of the need hearing mothers and deaf children have to understand each other in the absence of a conventional language (be it Portuguese, spoken by the majority community, or sign language, which is spoken by the adult deaf community). The motivation for this reflection comes from the discomfort I feel about the notion of language commonly used when discussing deafness, a notion anchored in a view of language as homogeneous and ideally conceived (Cesar & Cavalcanti, 2007). Such conceptions do not consider the different languages that circulate in this context to be legitimate, and therefore to be a language alternative. The dichotomization of these languages into only spoken language and sign language may invalidate or disadvantage other languages that are born in this space out of the very need hearing parents have to communicate with their deaf children.
Van Heerden, CJ
… and then adapting or training new models using the segmented spoken lectures. The eventual systems perform quite well, aligning more than 90% of a selected set of target words successfully.
... spoken French of IUFLs. Key words: IUFLs, Epenthesis, Ijebu dialect, Autosegmental phonology .... Ambiguities may result: salmi "strait" vs. salami. (An exception is that in .... tiers of segments. In the picture given us by classical generative.
Oct 11, 2009 ... In this article we will review the types of aphasia, an approach to its diagnosis, aphasia subtypes, rehabilitation and prognosis. ... language processing in both the written and spoken forms. ... The angular gyrus (Brodmann area 39) is located at the .... of his or her quality of life, emotional state, sense of well-being.
Hitherto, most research into cohesion has concentrated on texts (usually written) only in standard Native Speaker English – e.g. Halliday and Hasan (1976). By contrast, following on the work in anaphora of such scholars as Reinhart (1983) and Cornish (1999), Christiansen (2011) describes cohesion as an interactive process, focusing on the link between text cohesion and discourse coherence. Such a consideration of cohesion from the perspective of discourse (i.e. the process of which text is the product – Widdowson 1984, p. 100) is especially relevant within a lingua franca context, as the issue of different variations of ELF and intercultural concerns (Guido 2008) add extra dimensions to the complex multi-code interaction. In this case study, six extracts of transcripts (approximately 1,000 words each), taken from the VOICE corpus (2011) of conference question-and-answer sessions (spoken interaction) set in multicultural university contexts, are analysed in depth by means of a qualitative method.
Cooper, Angela; Bradlow, Ann R.
Prior research has demonstrated that listeners are sensitive to changes in the indexical (talker-specific) characteristics of speech input, suggesting that these signal-intrinsic features are integrally encoded in memory for spoken words. Given that listeners frequently must contend with concurrent environmental noise, to what extent do they also encode signal-extrinsic details? Native English listeners’ explicit memory for spoken English monosyllabic and disyllabic words was assessed as a fu...
Tavano, A; Grimm, S; Costa-Faidella, J; Slabu, L; Schröger, E; Escera, C
The Mismatch Negativity (MMN) component of the event-related potentials is generated when a detectable spectrotemporal feature of the incoming sound does not match the sensory model set up by preceding repeated stimuli. MMN is enhanced at frontocentral scalp sites for deviant words when compared to acoustically similar deviant pseudowords, suggesting that automatic access to long-term memory traces for spoken words contributes to MMN generation. Does spectrotemporal feature matching also drive automatic lexical access? To test this, we recorded human auditory event-related potentials (ERPs) to disyllabic spoken words and pseudowords within a passive oddball paradigm. We first aimed at replicating the word-related MMN enhancement effect for Spanish, thereby adding to the available cross-linguistic evidence (e.g., Finnish, English). We then probed its resilience to spectrotemporal perturbation by inserting short (20 ms) and long (120 ms) silent gaps between first and second syllables of deviant and standard stimuli. A significantly enhanced, frontocentrally distributed MMN to deviant words was found for stimuli with no gap. The long gap yielded no deviant word MMN, showing that prior expectations of word form limits in a given language influence deviance detection processes. Crucially, the insertion of a short gap suppressed deviant word MMN enhancement at frontocentral sites. We propose that spectrotemporal point-wise matching constitutes a core mechanism for fast serial computations in audition and language, bridging sensory and long-term memory systems. Copyright © 2012 Elsevier Inc. All rights reserved.
Hsu, Hsinjen Julie; Bishop, Dorothy V M
Introduction. Many children with specific language impairment (SLI) have problems with language comprehension, and little is known about how to remediate these. We focused here on errors in interpreting sentences such as "the ball is above the cup", where the spatial configuration depends on word order. We asked whether comprehension of such short reversible sentences could be improved by computerized training, and whether learning by children with SLI resembled that of younger, typically-developing children. Methods. We trained 28 children with SLI aged 6-11 years, 28 typically-developing children aged from 4 to 7 years who were matched to the SLI group for raw scores on a test of receptive grammar, and 20 typically-developing children who were matched to the SLI group on chronological age. A further 20 children with SLI were given pre- and post-test assessments, but did not undergo training. Those in the trained groups were given training on four days using a computer game adopting an errorless learning procedure, during which they had to select pictures to correspond to spoken sentences such as "the cup is above the drum" or "the bird is below the hat". Half the trained children heard sentences using above/below and the other half heard sentences using before/after (with a spatial interpretation). A total of 96 sentences was presented over four sessions. Half the sentences were unique, whereas the remainder consisted of 12 repetitions of each of four sentences that became increasingly familiar as training proceeded. Results. Age-matched control children performed near ceiling (≥ 90% correct) in the first session and were excluded from the analysis. Around half the trained SLI children also performed this well. Training effects were examined in 15 SLI and 16 grammar-matched children who scored less than 90% correct on the initial training session. Overall, children's scores improved with training. Memory span was a significant predictor of improvement, even
Dunn, Michael; Kruspe, Nicole; Burenhult, Niclas
The Aslian language family, located in the Malay Peninsula and southern Thai Isthmus, consists of four distinct branches comprising some 18 languages. These languages predate the now dominant Malay and Thai. The speakers of Aslian languages exhibit some of the highest degree of phylogenetic and societal diversity present in Mainland Southeast Asia today, among them a foraging tradition particularly associated with locally ancient, Pleistocene genetic lineages. Little advance has been made in our understanding of the linguistic prehistory of this region or how such complexity arose. In this article we present a Bayesian phylogeographic analysis of a large sample of Aslian languages. An explicit geographic model of diffusion is combined with a cognate birth-word death model of lexical evolution to infer the location of the major events of Aslian cladogenesis. The resultant phylogenetic trees are calibrated against dates in the historical and archaeological record to infer a detailed picture of Aslian language history, addressing a number of outstanding questions, including (1) whether the root ancestor of Aslian was spoken in the Malay Peninsula, or whether the family had already divided before entry, and (2) the dynamics of the movement of Aslian languages across the peninsula, with a particular focus on its spread to the indigenous foragers. Copyright © 2013 Wayne State University Press, Detroit, Michigan 48201-1309.
Byram, Michael; Wagner, Manuela
Language teaching has long been associated with teaching in a country or countries where a target language is spoken, but this approach is inadequate. In the contemporary world, language teaching has a responsibility to prepare learners for interaction with people of other cultural backgrounds, teaching them skills and attitudes as well as…
Many Australian Aboriginal people use a sign language ("hand talk") that mirrors their local spoken language and is used both in culturally appropriate settings when speech is taboo or counterindicated and for community communication. The characteristics of these languages are described, and early European settlers' reports of deaf…
Stamp, Rose; Schembri, Adam; Evans, Bronwen G.; Cormier, Kearsy
Short-term linguistic accommodation has been observed in a number of spoken language studies. The first of its kind in sign language research, this study aims to investigate the effects of regional varieties in contact and lexical accommodation in British Sign Language (BSL). Twenty-five participants were recruited from Belfast, Glasgow,…
Stern, Alissa Joy
For a language to survive, it must be spoken and passed down to the next generation. But how can we engage teenagers--so crucial for language transmission--to use and value their local tongue when they are bombarded by pressures from outside and from within their society to only speak national and international languages? This paper analyses the…
A proposal to transform Spanish into a universal language because it possesses the prerequisites: it is a living language, spoken in several countries; it is a natural language; and it uses the ordinary alphabet. Details on simplification and standardization are given. (Text is in Spanish.) (AMH)
Kelsey, Irving; Serrano, Jose
A rationale for teaching foreign languages in Venezuelan schools is discussed. An included sociolinguistic profile of Venezuela indicates that Spanish is the sole language of internal communication needs. Other languages spoken in Venezuela serve primarily a group function among the immigrant and indigenous communities. However, the teaching of…
Puskás, Tünde; Björk-Willén, Polly
This article explores dilemmatic aspects of language policies in a preschool group in which three languages (Swedish, Romani and Arabic) are spoken on an everyday basis. The article highlights the interplay between policy decisions on the societal level, the teachers' interpretations of these policies, as well as language practices on the micro…
Yuill, Nicola; Little, Sarah
Mother-child mental state talk (MST) supports children's developing social-emotional understanding. In typically developing (TD) children, family conversations about emotion, cognition, and causes have been linked to children's emotion understanding. Specific language impairment (SLI) may compromise developing emotion understanding and adjustment. We investigated emotion understanding in children with SLI and TD, in relation to mother-child conversation. Specifically, is cognitive, emotion, or causal MST more important for child emotion understanding, and how might maternal scaffolding support this? Participants were nine 5- to 9-year-old children with SLI and nine age-matched TD children, together with their mothers. We assessed children's language, emotion understanding and reported behavioural adjustment. Mother-child conversations were coded for MST, including emotion, cognition, and causal talk, and for scaffolding of causal talk. Children with SLI scored lower than TD children on emotion understanding and adjustment. Mothers in each group provided similar amounts of cognitive, emotion, and causal talk, but SLI children used proportionally less cognitive and causal talk than TD children did, and more such child talk predicted better child emotion understanding. Child emotion talk did not differ between groups and did not predict emotion understanding. Both groups participated in maternal-scaffolded causal talk, but causal talk about emotion was more frequent in TD children, and such talk predicted higher emotion understanding. Cognitive and causal language scaffolded by mothers provides tools for articulating increasingly complex ideas about emotion, predicting children's emotion understanding. Our study provides a robust method for studying scaffolding processes for understanding causes of emotion. © 2017 The British Psychological Society.
Kouri, Theresa A.; Winn, Jennifer
Although most children seem to love music, our understanding of the role it plays in facilitating speech and language learning is limited, as is research validating its efficacy in the clinical setting. The purpose of this study was to examine how singing affects children's quick incidental learning (QUIL) of novel vocabulary terms. Sixteen…
Liu, David; Wellman, Henry M; Tardif, Twila; Sabbagh, Mark A
Theory of mind is claimed to develop universally among humans across cultures with vastly different folk psychologies. However, in the attempt to test and confirm a claim of universality, individual studies have been limited by small sample sizes, sample specificities, and an overwhelming focus on Anglo-European children. The current meta-analysis of children's false-belief performance provides the most comprehensive examination to date of theory-of-mind development in a population of non-Western children speaking non-Indo-European languages (i.e., Mandarin and Cantonese). The meta-analysis consisted of 196 Chinese conditions (127 from mainland China and 69 from Hong Kong), representing responses from more than 3,000 children, compared with 155 similar North American conditions (83 conditions from the United States and 72 conditions from Canada). The findings show parallel developmental trajectories of false-belief understanding for children in China and North America, coupled with significant differences in the timing of development across communities: children's false-belief performance varied across different locales by as much as 2 or more years. These data support the importance of both universal trajectories and specific experiential factors in the development of theory of mind.
Janse, Esther; Jesse, Alexandra
Many older listeners report difficulties in understanding speech in noisy situations. Working memory and other cognitive skills may modulate older listeners' ability to use context information to alleviate the effects of noise on spoken-word recognition. In the present study, we investigated whether verbal working memory predicts older adults' ability to immediately use context information in the recognition of words embedded in sentences, presented in different listening conditions. In a phoneme-monitoring task, older adults were asked to detect as fast and as accurately as possible target phonemes in sentences spoken by a target speaker. Target speech was presented without noise, with fluctuating speech-shaped noise, or with competing speech from a single distractor speaker. The gradient measure of contextual probability (derived from a separate offline rating study) affected the speed of recognition. Contextual facilitation was modulated by older listeners' verbal working memory (measured with a backward digit span task) and age across listening conditions. Working memory and age, as well as hearing loss, were also the most consistent predictors of overall listening performance. Older listeners' immediate benefit from context in spoken-word recognition thus relates to their ability to keep and update a semantic representation of the sentence content in working memory.
Zamuner, Tania S; Moore, Charlotte; Desmeules-Trudel, Félix
To understand speech, listeners need to be able to decode the speech stream into meaningful units. However, coarticulation causes phonemes to differ based on their context. Because coarticulation is an ever-present component of the speech stream, it follows that listeners may exploit this source of information for cues to the identity of the words being spoken. This research investigates the development of listeners' sensitivity to coarticulation cues below the level of the phoneme in spoken word recognition. Using a looking-while-listening paradigm, adults and 2- and 3-year-old children were tested on coarticulation cues that either matched or mismatched the target. Both adults and children predicted upcoming phonemes based on anticipatory coarticulation to make decisions about word identity. The overall results demonstrate that coarticulation cues are a fundamental component of children's spoken word recognition system. However, children did not show the same resolution as adults of the mismatching coarticulation cues and competitor inhibition, indicating that children's processing systems are still developing. Copyright © 2016 Elsevier Inc. All rights reserved.
Taumoepeau, Mele; Ruffman, Ted
This continuation of a previous study (Taumoepeau & Ruffman, 2006) examined the longitudinal relation between maternal mental state talk to 15- and 24-month-olds and their later mental state language and emotion understanding (N = 74). The previous study found that maternal talk about the child's desires to 15-month-old children uniquely predicted…
Lai, Chun; Gu, Mingyue; Hu, Jingjing
Legitimate teacher authority is fundamental to effective teaching, but is often a thorny issue that teachers need to grapple with when teaching in cross-cultural teaching contexts. By interviewing 18 pre-service Chinese language teachers on their understanding of legitimate teacher authority throughout teaching practicum at international schools…
Blom, E.; van Dijk, C.; Vasić, N.; van Witteloostuijn, M.; Avrutin, S.
The purpose of this study was to investigate texting and textese, which is the special register used for sending brief text messages, across children with typical development (TD) and children with Specific Language Impairment (SLI). Using elicitation techniques, texting and spoken language messages
The present paper, based on extensive fieldwork conducted on Kalasha, an endangered language spoken in three small valleys in Chitral District of northwestern Pakistan, presents a spontaneous dialogue-based elicitation of linguistic material used for the description and documentation of the language. After a brief display of the basic typology…
Cruz Rondón, Elio Jesús; Velasco Vera, Leidy Fernanda
Learning a foreign language may be a challenge for most people due to differences in the form and structure between one's mother tongue and a new one. However, there are some tools that facilitate the teaching and learning of a foreign language, for instance, new applications for digital devices, video blogs, educational platforms, and teaching…
Shepherd, Debra Lynne
The regional and cultural closeness of Botswana and South Africa, as well as differences in their political histories and language policy stances, offers a unique opportunity to evaluate the role of language in reading outcomes. This study aims to empirically test the effect of exposure to mother tongue and English instruction on the reading…
The increasing influence of sociocultural theories of learning on assessment practices in second language education necessitates an expansion of the knowledge base that teacher-assessors need to develop (what teachers need to know) and related changes in the processes of language teacher education (how they learn and develop it). Teacher assessors…
Mazlack, L.J.; Paz, N.M.
Newspaper cartoons can graphically display the result of ambiguity in human speech; the result can be unexpected and funny. Likewise, computer analysis of natural language statements also needs to successfully resolve ambiguous situations. Computer techniques already developed use restricted world knowledge in resolving ambiguous language use. This paper illustrates how these techniques can be used in resolving ambiguous situations arising in cartoons. 8 references.
Rienties, Bart; Lewis, Tim; McFarlane, Ruth; Nguyen, Quan; Toetenel, Lisette
Language education has a rich history of research and scholarship focusing on the effectiveness of learning activities and the impact these have on student behaviour and outcomes. One of the basic assumptions in foreign language pedagogy and CALL in particular is that learners want to be able to communicate effectively with native speakers of…
Hunter, Cynthia R; Pisoni, David B
Listening effort (LE) induced by speech degradation reduces performance on concurrent cognitive tasks. However, a converse effect of extrinsic cognitive load on recognition of spoken words in sentences has not been shown. The aims of the present study were to (a) examine the impact of extrinsic cognitive load on spoken word recognition in a sentence recognition task and (b) determine whether cognitive load and/or LE needed to understand spectrally degraded speech would differentially affect word recognition in high- and low-predictability sentences. Downstream effects of speech degradation and sentence predictability on the cognitive load task were also examined. One hundred twenty young adults identified sentence-final spoken words in high- and low-predictability Speech Perception in Noise sentences. Cognitive load consisted of a preload of short (low-load) or long (high-load) sequences of digits, presented visually before each spoken sentence and reported either before or after identification of the sentence-final word. LE was varied by spectrally degrading sentences with four-, six-, or eight-channel noise vocoding. Level of spectral degradation and order of report (digits first or words first) were between-participants variables. Effects of cognitive load, sentence predictability, and speech degradation on accuracy of sentence-final word identification as well as recall of preload digit sequences were examined. In addition to anticipated main effects of sentence predictability and spectral degradation on word recognition, we found an effect of cognitive load, such that words were identified more accurately under low load than high load. However, load differentially affected word identification in high- and low-predictability sentences depending on the level of sentence degradation. Under severe spectral degradation (four-channel vocoding), the effect of cognitive load on word identification was present for high-predictability sentences but not for low-predictability sentences.
This dissertation is a descriptive grammar of Kove, an Austronesian language spoken in the West New Britain Province of Papua New Guinea. Kove is primarily spoken in 18 villages, including some on the small islands north of New Britain. There are about 9,000 people living in the area, but many are not fluent speakers of Kove. The dissertation…
Johnston, Trevor; van Roekel, Jane; Schembri, Adam
This study investigates the conventionalization of mouth actions in Australian Sign Language. Signed languages were once thought of as simply manual languages because the hands produce the signs which individually and in groups are the symbolic units most easily equated with the words, phrases and clauses of spoken languages. However, it has long been acknowledged that non-manual activity, such as movements of the body, head and the face play a very important role. In this context, mouth actions that occur while communicating in signed languages have posed a number of questions for linguists: are the silent mouthings of spoken language words simply borrowings from the respective majority community spoken language(s)? Are those mouth actions that are not silent mouthings of spoken words conventionalized linguistic units proper to each signed language, culturally linked semi-conventional gestural units shared by signers with members of the majority speaking community, or even gestures and expressions common to all humans? We use a corpus-based approach to gather evidence of the extent of the use of mouth actions in naturalistic Australian Sign Language-making comparisons with other signed languages where data is available--and the form/meaning pairings that these mouth actions instantiate.
The article surveys and compares existing intensive methods of teaching foreign languages in order to identify their positive and negative aspects. The author traces the idea of the rational organization and intensification of foreign language teaching from its inception to its consolidation into an integrated system. The advantages and disadvantages of the most popular intensive methods, characteristic of different historical periods, are analysed: the suggestopedic method of G. Lozanov; the method of activating learners' reserve capacities of G. Kitaygorodskaya; the emotional-semantic method of I. Schechter; the intensive foreign language course of L. Gegechkori; the suggesto-cybernetic integral method of accelerated foreign language learning of B. Petrusinsky; and the immersion-based crash course in spoken language of A. Plesnevich. The principles of learning and the role of each method in the development of intensive foreign language training are also analysed. The author identifies a number of advantages and disadvantages of intensive methods of teaching foreign languages: (1) the assimilation of a large number of linguistic, lexical and grammatical units; (2) active use of the acquired knowledge, skills and abilities in oral communication in the foreign language; (3) the ability to use the resulting language material not only in one's own speech but also in understanding the interlocutor; (4) overcoming psychological barriers, including the fear of making a mistake; (5) high efficiency and fast learning; (6) an excessive amount of new language material presented; (7) training of oral forms of communication only; (8) the decline of grammatical units and models.
Hung, Pei-Fang; Nippold, Marilyn A
Idioms are figurative expressions such as hold your horses, kick the bucket, and lend me a hand, which commonly occur in everyday spoken and written language. Hence, the understanding of these expressions is essential for daily communication. In this study, we examined idiom understanding in healthy adults in their 20s, 40s, 60s and 80s (n=30 per group) to determine if performance would show an age-related decline. Participants judged their own familiarity with a set of 20 idioms, explained the meaning of each, described a situation in which the idiom could be used, and selected the appropriate interpretation from a set of choices. There was no evidence of an age-related decline on any tasks. Rather, the 60s group reported greater familiarity and offered better explanations than did the 20s group. Moreover, greater familiarity with idioms was associated with better understanding in adults.
Seyyed Hatam Tamimi Sa’d
The present qualitative study sought to explore the relationship between English language learning and identity reconstruction from the viewpoints of Iranian language learners. The data were collected by means of focus-group interviews with forty-five male intermediate learners of English as a foreign language (EFL). To define the concept of identity, the participants were found to draw upon notions as diverse as personal and social characteristics, ethnic origins, geographical locations, religious affiliations, national customs and rituals and values, amongst others. Furthermore, the vast majority of the learners held that learning English had a profound impact on how they perceive their identity. Of these, nearly all the interviewees regarded the above impact as highly positive and beneficial to the course of language learning. The interviewees also expressed a strong inclination to integrate and, therefore, to identify with the target linguistic and cultural norms. Notwithstanding, a number of opposing voices were raised by some learners who resisted identity reconstruction through language learning, claiming that they learned English simply for the sake of instrumental, as opposed to integrative, purposes. These participants also levelled criticisms at what they viewed as 'the imposition of Western values on an Islamic country'. The results highlight the vital role of motivation and the status of English as an international language in viewing, redefining and reconstructing identity. In conclusion, the findings confirm the role of discursive practices, power relations, solidarity and otherising with regard to identity reconstruction in the course of second language (L2) learning.
Strauß, Antje; Kotz, Sonja A; Scharinger, Mathias; Obleser, Jonas
Slow neural oscillations (~1-15 Hz) are thought to orchestrate the neural processes of spoken language comprehension. However, functional subdivisions within this broad range of frequencies are disputed, with most studies hypothesizing only about single frequency bands. The present study utilizes an established paradigm of spoken word recognition (lexical decision) to test the hypothesis that within the slow neural oscillatory frequency range, distinct functional signatures and cortical networks can be identified at least for theta- (~3-7 Hz) and alpha-frequencies (~8-12 Hz). Listeners performed an auditory lexical decision task on a set of items that formed a word-pseudoword continuum: ranging from (1) real words over (2) ambiguous pseudowords (deviating from real words only in one vowel; comparable to natural mispronunciations in speech) to (3) pseudowords (clearly deviating from real words by randomized syllables). By means of time-frequency analysis and spatial filtering, we observed a dissociation into distinct but simultaneous patterns of alpha power suppression and theta power enhancement. Alpha exhibited a parametric suppression as items increasingly matched real words, in line with lowered functional inhibition in a left-dominant lexical processing network for more word-like input. Simultaneously, theta power in a bilateral fronto-temporal network was selectively enhanced for ambiguous pseudowords only. Thus, enhanced alpha power can neurally 'gate' lexical integration, while enhanced theta power might index functionally more specific ambiguity-resolution processes. To this end, a joint analysis of both frequency bands provides neural evidence for parallel processes in achieving spoken word recognition. Copyright © 2014 Elsevier Inc. All rights reserved.
Burgoyne, K; Kelly, J M; Whiteley, H E; Spooner, A
Data from national test results suggest that children who are learning English as an additional language (EAL) experience relatively lower levels of educational attainment in comparison to their monolingual, English-speaking peers. The relative underachievement of children who are learning EAL demands that the literacy needs of this group are identified. To this end, this study aimed to explore the reading- and comprehension-related skills of a group of EAL learners. Data are reported from 92 Year 3 pupils, of whom 46 children are learning EAL. Children completed standardized measures of reading accuracy and comprehension, listening comprehension, and receptive and expressive vocabulary. Results indicate that many EAL learners experience difficulties in understanding written and spoken text. These comprehension difficulties are not related to decoding problems but are related to significantly lower levels of vocabulary knowledge experienced by this group. Many EAL learners experience significantly lower levels of English vocabulary knowledge, which has a significant impact on their ability to understand written and spoken text. Greater emphasis on language development is therefore needed in the school curriculum to attempt to address the limited language skills of children learning EAL.
Posel, Dorrit; Zeller, Jochen
In the post-apartheid era, South Africa has adopted a language policy that gives official status to 11 languages (English, Afrikaans, and nine Bantu languages). However, English has remained the dominant language of business, public office, and education, and some research suggests that English is increasingly being spoken in domestic settings.…
Baïdak, Nathalie; Balcon, Marie-Pascale; Motiejunaite, Akvile
Linguistic diversity is part of Europe's DNA. It embraces not only the official languages of Member States, but also the regional and/or minority languages spoken for centuries on European territory, as well as the languages brought by the various waves of migrants. The coexistence of this variety of languages constitutes an asset, but it is also…
Lagos, Cristián; Espinoza, Marco; Rojas, Darío
In this paper, we analyse the cultural models (or folk theory of language) that the Mapuche intellectual elite have about Mapudungun, the native language of the Mapuche people still spoken today in Chile as the major minority language. Our theoretical frame is folk linguistics and studies of language ideology, but we have also taken an applied…
Gelman, Rochel; Gallistel, C R
Reports of research with the Pirahã and Mundurukú Amazonian Indians of Brazil lend themselves to discussions of the role of language in the origin of numerical concepts. The research findings indicate that, whether or not humans have an extensive counting list, they share with nonverbal animals a language-independent representation of number, with limited, scale-invariant precision. What causal role, then, does knowledge of the language of counting serve? We consider the strong Whorfian proposal, that of linguistic determinism; the weak Whorfian hypothesis, that language influences how we think; and that the "language of thought" maps to spoken language or symbol systems.
Corina, David P; Knapp, Heather Patterson
In the quest to further understand the neural underpinning of human communication, researchers have turned to studies of naturally occurring signed languages used in Deaf communities. The comparison of the commonalities and differences between spoken and signed languages provides an opportunity to determine core neural systems responsible for linguistic communication independent of the modality in which a language is expressed. The present article examines such studies, and in addition asks what we can learn about human languages by contrasting formal visual-gestural linguistic systems (signed languages) with more general human action perception. To understand visual language perception, it is important to distinguish the demands of general human motion processing from the highly task-dependent demands associated with extracting linguistic meaning from arbitrary, conventionalized gestures. This endeavor is particularly important because theorists have suggested close homologies between perception and production of actions and functions of human language and social communication. We review recent behavioral, functional imaging, and neuropsychological studies that explore dissociations between the processing of human actions and signed languages. These data suggest incomplete overlap between the mirror-neuron systems proposed to mediate human action and language.
Cao, Fan; Khalid, Kainat; Lee, Rebecca; Brennan, Christine; Yang, Yanhui; Li, Kuncheng; Bolger, Donald J; Booth, James R
Developmental differences in phonological and orthographic processing of Chinese spoken words were examined in 9-year-olds, 11-year-olds and adults using functional magnetic resonance imaging (fMRI). Rhyming and spelling judgments were made to two-character words presented sequentially in the auditory modality. Developmental comparisons between adults and both groups of children combined showed that age-related changes in activation in visuo-orthographic regions depended on the task. There were developmental increases in the left inferior temporal gyrus and the right inferior occipital gyrus in the spelling task, suggesting more extensive visuo-orthographic processing in a task that required access to these representations. Conversely, there were developmental decreases in activation in the left fusiform gyrus and left middle occipital gyrus in the rhyming task, suggesting that the development of reading is marked by reduced involvement of orthography in a spoken language task that does not require access to these orthographic representations. Developmental decreases may arise from the existence of extensive homophony (auditory words that have multiple spellings) in Chinese. In addition, we found that 11-year-olds and adults showed similar activation in the left superior temporal gyrus across tasks, with both groups showing greater activation than 9-year-olds. This pattern suggests early development of perceptual representations of phonology. In contrast, 11-year-olds and 9-year-olds showed similar activation in the left inferior frontal gyrus across tasks, with both groups showing weaker activation than adults. This pattern suggests late development of controlled retrieval and selection of lexical representations. Altogether, this study suggests differential effects of character acquisition on development of components of the language network in Chinese as compared to previous reports on alphabetic languages. Published by Elsevier Inc.
Geers, Ann E; Mitchell, Christine M; Warner-Czyz, Andrea; Wang, Nae-Yuh; Eisenberg, Laurie S
Most children with hearing loss who receive cochlear implants (CI) learn spoken language, and parents must choose early on whether to use sign language to accompany speech at home. We address whether parents' use of sign language before and after CI positively influences auditory-only speech recognition, speech intelligibility, spoken language, and reading outcomes. Three groups of children with CIs from a nationwide database who differed in the duration of early sign language exposure provided in their homes were compared in their progress through elementary grades. The groups did not differ in demographic, auditory, or linguistic characteristics before implantation. Children without early sign language exposure achieved better speech recognition skills over the first 3 years postimplant and exhibited a statistically significant advantage in spoken language and reading near the end of elementary grades over children exposed to sign language. Over 70% of children without sign language exposure achieved age-appropriate spoken language compared with only 39% of those exposed for 3 or more years. Early speech perception predicted speech intelligibility in middle elementary grades. Children without sign language exposure produced speech that was more intelligible (mean = 70%) than those exposed to sign language (mean = 51%). This study provides the most compelling support yet available in CI literature for the benefits of spoken language input for promoting verbal development in children implanted by 3 years of age. Contrary to earlier published assertions, there was no advantage to parents' use of sign language either before or after CI. Copyright © 2017 by the American Academy of Pediatrics.
This study examines the motivational development of Japanese language learners in Australia and South Korea and their future self-images as bilingual or multilingual individuals. Initial motivation to study Japanese was generally linked to an interest in Japanese language and culture. However, visions of possible future careers became a more significant motivational factor as the students progressed in their studies. The study explores the impact of the students’ multilingual competencies, ...
Das, Bishnu Pada; Young, R. I. M.; Case, K.; Rahimifard, S.; Anumba, C.; Bouchlaghem, N.; Cutting Decelle, Anne-Francoise
Many manufacturing organisations, while doing business either directly or indirectly with other industrial sectors, often encounter interoperability problems amongst software systems. This increases business costs and reduces efficiency. Research communities are exploring ways to reduce this cost. Incompatibility amongst the syntaxes and the semantics of the languages of application systems is the most common cause of this problem. The Process Specification Language (...
Capitelli, Sarah; Hooper, Paula; Rankin, Lynn; Austin, Marilyn; Caven, Gennifer
This qualitative case study looks closely at an elementary teacher who participated in professional development experiences that helped her develop a hybrid practice of using inquiry-based science to teach both science content and English language development (ELD) to her students, many of whom are English language learners (ELLs). This case study examines the teacher's reflections on her teaching and her students' learning as she engaged her students in science learning and supported their developing language skills. It explicates the professional learning experiences that supported the development of this hybrid practice. Closely examining the pedagogical practice and reflections of a teacher who is developing an inquiry-based approach to both science learning and language development can provide insights into how teachers come to integrate their professional development experiences with their classroom expertise in order to create a hybrid inquiry-based science ELD practice. This qualitative case study contributes to the emerging scholarship on the development of teacher practice of inquiry-based science instruction as a vehicle for both science instruction and ELD for ELLs. This study demonstrates how an effective teaching practice that supports both the science and language learning of students can develop from ongoing professional learning experiences that are grounded in current perspectives about language development and that immerse teachers in an inquiry-based approach to learning and instruction. Additionally, this case study also underscores the important role that professional learning opportunities can play in supporting teachers in developing a deeper understanding of the affordances that inquiry-based science can provide for language development.
Fernando Centenera Sánchez-Seco
The subject of this article is the language of human law in the thought of Francisco Suárez. Its chief focus is on the Treatise on Laws and on God the Lawgiver and its views on the prescriptive nature of legislative language, written and spoken language, the lexical-semantic level, and linguistic clarity from the viewpoints of convenience, the essence of the law, and justice. The issues Suárez deals with in relation to these points have continued to attract attention up to the present day, and a reading of the Treatise confirms the impression that some of them are still valid. Accordingly, as well as setting out, describing and offering a guide to understanding Suárez's ideas, the article offers a comparative and contemplative analysis of them, without forgetting that their author belonged to the early modern period.
The author examines the theory and research relevant to educating d/Deaf and Hard of Hearing Multilingual Learners (DMLs). There is minimal research on this population, yet a synthesis of related theory, research, and practice on spoken-language bilinguals can be used to add to the body of knowledge on these learners. Specifically, the author reports on three major areas: (a) population characteristics of DMLs, (b) theories relevant to understanding the language development of DMLs, and (c) considerations for programs in designing and implementing educational services for DMLs. In the interest of ensuring that children receive the foundation for linguistic success, aspects of linguistically responsive teaching (Lucas & Villegas, 2013) are addressed, with a focus on adopting an asset-based perspective on educating DMLs that honors all of a child's language, identity, and cultural memberships.
Payne, Brennan R.; Gross, Alden L.; Parisi, Jeanine M.; Sisco, Shannon M.; Stine-Morrow, Elizabeth A. L.; Marsiske, Michael; Rebok, George W.
Episodic memory shows substantial declines with advancing age, but research on longitudinal trajectories of spoken discourse memory (SDM) in older adulthood is limited. Using parallel process latent growth curve models, we examined 10 years of longitudinal data from the no-contact control group (N = 698) of the Advanced Cognitive Training for Independent and Vital Elderly (ACTIVE) randomized controlled trial in order to test (a) the degree to which SDM declines with advancing age, (b) predictors of these age-related declines, and (c) the within-person relationship between longitudinal changes in SDM and longitudinal changes in fluid reasoning and verbal ability over 10 years, independent of age. Individuals who were younger, White, had more years of formal education, were male, and had better global cognitive function and episodic memory performance at baseline demonstrated greater levels of SDM on average. However, only age at baseline uniquely predicted longitudinal changes in SDM, such that declines accelerated with greater age. Independent of age, within-person decline in reasoning ability over the 10-year study period was substantially correlated with decline in SDM (r = .87). An analogous association with SDM did not hold for verbal ability. The findings suggest that longitudinal declines in fluid cognition are associated with reduced spoken language comprehension. Unlike findings from memory for written prose, preserved verbal ability may not protect against developmental declines in memory for speech. PMID:24304364
Bent, Tessa; Holt, Rachael Frush
In spoken word identification and memory tasks, stimulus variability from numerous sources impairs performance. In the current study, the influence of foreign-accent variability on spoken word identification was evaluated in two experiments. Experiment 1 used a between-subjects design to test word identification in noise in single-talker and two multiple-talker conditions: multiple talkers with the same accent and multiple talkers with different accents. Identification performance was highest in the single-talker condition, but there was no difference between the single-accent and multiple-accent conditions. Experiment 2 further explored word recognition for multiple talkers in single-accent versus multiple-accent conditions using a mixed design. A detriment to word recognition was observed in the multiple-accent condition compared to the single-accent condition, but the effect differed across the language backgrounds tested. These results demonstrate that the processing of foreign-accent variation may influence word recognition in ways similar to other sources of variability (e.g., speaking rate or style) in that the inclusion of multiple foreign accents can result in a small but significant performance decrement beyond the multiple-talker effect.
Tiewtrakul, T; Fletcher, S R
Although English has been the international aviation language since 1951, formal language proficiency testing for key aviation personnel has only recently been implemented by the International Civil Aviation Organization (ICAO). It aims to ensure minimum acceptable levels of English pronunciation and comprehension universally, but does not attend to particular regional dialect difficulties. However, evidence suggests that voice transmissions between air traffic controllers and pilots are a particular problem in international airspace and that pilots may not understand messages due to the influence of different accents when using English. This study explores the potential impact of 'non-native English' in pilot-air traffic control transmissions using a 'conversation analysis' technique to examine approach phase recordings from Bangkok International Airport. Results showed that communication errors, defined as incidents of pilots not understanding, occurred significantly more often when both speakers were non-native speakers of English, when messages were more complex, and when numerical information was involved. These results and their possible implications are discussed with reference to the development of ICAO's new language proficiency standards. Statement of Relevance: This study builds on previous work and literature, providing further evidence to show that the risks caused by language and linguistics in aviation must be explored more deeply. Findings are particularly contemporary and relevant today, indicating that recently implemented international standards would benefit from further exploratory research and development.
Van Dijk, Rick; Boers, Eveline; Christoffels, Ingrid; Hermans, Daan
The quality of interpretations produced by sign language interpreters was investigated. Twenty-five experienced interpreters were instructed to interpret narratives from (a) spoken Dutch to Sign Language of The Netherlands (SLN), (b) spoken Dutch to Sign Supported Dutch (SSD), and (c) SLN to spoken Dutch. The quality of the interpreted narratives was assessed by 5 certified sign language interpreters who did not participate in the study. Two measures were used to assess interpreting quality: the propositional accuracy of the interpreters' interpretations and a subjective quality measure. The results showed that the interpreted narratives in the SLN-to-Dutch interpreting direction were of lower quality (on both measures) than the interpreted narratives in the Dutch-to-SLN and Dutch-to-SSD directions. Furthermore, interpreters who had begun acquiring SLN when they entered the interpreter training program performed as well in all 3 interpreting directions as interpreters who had acquired SLN from birth.
Bonin, Patrick; Chalard, Marylène; Méot, Alain; Fayol, Michel
The influence of nine variables on the latencies to write down or to speak aloud the names of pictures taken from Snodgrass and Vanderwart (1980) was investigated in French adults. The major determinants of both written and spoken picture naming latencies were image variability, image agreement and age of acquisition. To a lesser extent, name agreement was also found to have an impact in both production modes. The implications of the findings for theoretical views of both spoken and written picture naming are discussed.
Langus, Alan; Mehler, Jacques; Nespor, Marina
Spoken language is governed by rhythm. Linguistic rhythm is hierarchical and the rhythmic hierarchy partially mimics the prosodic as well as the morpho-syntactic hierarchy of spoken language. It can thus provide learners with cues about the structure of the language they are acquiring. We identify three universal levels of linguistic rhythm - the segmental level, the level of the metrical feet and the phonological phrase level - and discuss why primary lexical stress is not rhythmic. We survey experimental evidence on rhythm perception in young infants and native speakers of various languages to determine the properties of linguistic rhythm that are present at birth, those that mature during the first year of life and those that are shaped by the linguistic environment of language learners. We conclude with a discussion of the major gaps in current knowledge on linguistic rhythm and highlight areas of interest for future research that are most likely to yield significant insights into the nature, the perception, and the usefulness of linguistic rhythm. Copyright © 2016 Elsevier Ltd. All rights reserved.
Boyle, Michael P
The purpose of this study was to investigate whether attribution theory could explain speech-language pathologists' (SLPs') perceptions of children with communication disorders such as stuttering. Specifically, it was determined whether perceptions of onset and offset controllability, as well as biological and non-biological attributions for communication disorders, were related to willingness to help, sympathy, and anger toward children with these disorders. It was also of interest to determine whether blame for stuttering was related to perceived controllability of stuttering and negative attitudes toward people who stutter (PWS). A survey was developed to measure perceived onset and offset controllability, biological and non-biological attributions, willingness to help, sympathy, and anger toward middle school children with developmental stuttering, functional articulation disorders, and cerebral palsy. In addition, a scale was developed to measure blame and negative attitudes toward PWS in general. Surveys were mailed to 1000 school-based SLPs. Data from 330 participants were analyzed. Supporting the hypotheses of attribution theory, higher perceived onset and offset controllability of the disorder was linked to less willingness to help, lower sympathy, and more anger across conditions. Increased biological attributions were associated with more reported sympathy. Increased blame for stuttering was linked to higher perceived controllability of stuttering, more dislike of PWS, and more agreement with negative stereotypes about PWS. Educating SLPs about the variable loss of control inherent in stuttering could improve attitudes and increase understanding of PWS. Reductions in blame may facilitate feelings of sympathy and empathy for PWS and reduce environmental barriers for clients. Learning outcomes Readers should be able to: (1) identify the main principles of Weiner's attribution theory (2) identify common negative perceptions of people who stutter (3) describe how
MacSweeney, Mairéad; Woll, Bencie; Campbell, Ruth; Calvert, Gemma A; McGuire, Philip K; David, Anthony S; Simmons, Andrew; Brammer, Michael J
In all signed languages used by deaf people, signs are executed in "sign space" in front of the body. Some signed sentences use this space to map detailed "real-world" spatial relationships directly. Such sentences can be considered to exploit sign space "topographically." Using functional magnetic resonance imaging, we explored the extent to which increasing the topographic processing demands of signed sentences was reflected in the differential recruitment of brain regions in deaf and hearing native signers of British Sign Language (BSL). When BSL signers performed a sentence anomaly judgement task, the occipito-temporal junction was activated bilaterally to a greater extent for topographic than nontopographic processing. The differential role of movement in the processing of the two sentence types may account for this finding. In addition, enhanced activation was observed in the left inferior and superior parietal lobules during processing of topographic BSL sentences. We argue that the left parietal lobe is specifically involved in processing the precise configuration and location of hands in space to represent objects, agents, and actions. Importantly, no differences in these regions were observed when hearing people heard and saw English translations of these sentences. Despite the high degree of similarity in the neural systems underlying signed and spoken languages, exploring the linguistic features which are unique to each of these broadens our understanding of the systems involved in language comprehension.
Ganjavi, Shadi; Georgiou, Panayiotis G; Narayanan, Shrikanth
... (The DARPA Babylon Program; Narayanan, 2003). In this paper, we discuss transcription systems needed for automated spoken language processing applications in Persian that uses the Arabic script for writing...
Dediu, Dan; Levinson, Stephen C.
It is usually assumed that modern language is a recent phenomenon, coinciding with the emergence of modern humans themselves. Many assume as well that this is the result of a single, sudden mutation giving rise to the full “modern package.” However, we argue here that recognizably modern language is likely an ancient feature of our genus pre-dating at least the common ancestor of modern humans and Neandertals about half a million years ago. To this end, we adduce a broad range of evidence from linguistics, genetics, paleontology, and archaeology clearly suggesting that Neandertals shared with us something like modern speech and language. This reassessment of the antiquity of modern language, from the usually quoted 50,000–100,000 years to half a million years, has profound consequences for our understanding of our own evolution in general and especially for the sciences of speech and language. As such, it argues against a saltationist scenario for the evolution of language, and toward a gradual process of culture-gene co-evolution extending to the present day. Another consequence is that the present-day linguistic diversity might better reflect the properties of the design space for language and not just the vagaries of history, and could also contain traces of the languages spoken by other human forms such as the Neandertals. PMID:23847571
The assessment of pragmatics expressed in spoken language is a central issue in the evaluation of children with communication impairments and related disorders. A developmental approach to assessment has remained problematic due to the complex interaction of social, linguistic, cognitive and cultural influences on pragmatics. This paper presents a selective review and critique of current formal and informal testing methods and pragmatic analytic procedures. Formal testing of pragmatics has limited potential to reveal the typical pragmatic abnormalities in interaction but has a significant role to play in the assessment of comprehension of pragmatic intent. Clinical assessment of pragmatics with the pre-school child should focus on elicitation of communicative intent via naturalistic methods as part of an overall assessment of social communication skills. Assessments for older children should include a comprehensive investigation of speech acts, conversational and narrative abilities, the understanding of implicature and intent, as well as the child's ability to employ contextual cues to understanding. Practical recommendations are made regarding the choice of a core set of pragmatic assessments and elicitation techniques. The practitioner's attention is drawn to the lack of the usual safeguards of reliability and validity that has persisted in some language pragmatics assessments. A core set of pragmatic assessment tools can be identified from the proliferation of instruments in current use. Further research is required to establish clearer norms and ranges in the development of pragmatic ability, particularly with respect to the understanding of inference, topic management and coherence.
In multilingual workplace settings, there are many ways of addressing (or not addressing) the issue of understanding, and different ways of handling the issue when it is explicitly raised in the form of a question. Building on a previous study by Tranekjær (Tranekjær, 2015; Tranekjær and Kappa, 2016; 2014), the paper will explore the possibility of outlining differences in the efficiency of SL learner strategies for addressing inquiries about understanding. The paper in this way provides valuable input to language teachers and trainers within the field of diversity management and intercultural...