WorldWideScience

Sample records for spoken language ability

  1. The relation of the number of languages spoken to performance in different cognitive abilities in old age.

    Science.gov (United States)

    Ihle, Andreas; Oris, Michel; Fagot, Delphine; Kliegel, Matthias

    2016-12-01

Findings on the association between speaking multiple languages and cognitive functioning in old age have so far been inconsistent and inconclusive. The present study therefore investigated the relation of the number of languages spoken to cognitive performance, and its interplay with several other markers of cognitive reserve, in a large sample of older adults. Two thousand eight hundred and twelve older adults served as the sample. Psychometric tests of verbal abilities, basic processing speed, and cognitive flexibility were administered. In addition, individuals were interviewed about the different languages they spoke on a regular basis, their educational attainment, their occupation, and their engagement in different activities throughout adulthood. A higher number of regularly spoken languages was significantly associated with better performance in verbal abilities and processing speed, but was unrelated to cognitive flexibility. Regression analyses showed that the number of languages spoken predicted cognitive performance over and above leisure activities and the physical demands of the job/gainful activity as additional predictors, but not over and above educational attainment or the cognitive level of the job. There was no significant moderation of the association between the number of languages spoken and cognitive performance in any model. The present data suggest that speaking different languages on a regular basis may additionally contribute to the build-up of cognitive reserve in old age. Yet this contribution may not be universal, but linked to verbal abilities and basic cognitive processing speed, and it may depend on the other types of cognitive stimulation that individuals engaged in during their life course.
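The key analysis in this record is a nested-regression comparison: does the number of languages still explain variance once another reserve marker is controlled? A minimal sketch of that logic, with entirely synthetic data and an ad-hoc `r_squared` helper (not the authors' pipeline):

```python
import numpy as np

def r_squared(X, y):
    """R^2 of an ordinary-least-squares fit of y on X (intercept added)."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

rng = np.random.default_rng(0)
n = 500
education = rng.normal(size=n)                    # hypothetical reserve marker
languages = 0.8 * education + rng.normal(size=n)  # correlated with education
verbal = education + 0.1 * languages + rng.normal(size=n)

# Does 'languages' add predictive power over 'education' alone?
r2_base = r_squared(education.reshape(-1, 1), verbal)
r2_full = r_squared(np.column_stack([education, languages]), verbal)
print(f"education only: {r2_base:.3f}, + languages: {r2_full:.3f}")
```

The increment in R² from the base to the full model is the quantity behind phrases such as "predicted performance over and above educational attainment".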

  2. Cochlear implants and spoken language processing abilities: Review and assessment of the literature

    OpenAIRE

    Peterson, Nathaniel R.; Pisoni, David B.; Miyamoto, Richard T.

    2010-01-01

    Cochlear implants (CIs) process sounds electronically and then transmit electric stimulation to the cochlea of individuals with sensorineural deafness, restoring some sensation of auditory perception. Many congenitally deaf CI recipients achieve a high degree of accuracy in speech perception and develop near-normal language skills. Post-lingually deafened implant recipients often regain the ability to understand and use spoken language with or without the aid of visual input (i.e. lip reading...

  3. Professionals' Guidance about Spoken Language Multilingualism and Spoken Language Choice for Children with Hearing Loss

    Science.gov (United States)

    Crowe, Kathryn; McLeod, Sharynne

    2016-01-01

    The purpose of this research was to investigate factors that influence professionals' guidance of parents of children with hearing loss regarding spoken language multilingualism and spoken language choice. Sixteen professionals who provide services to children and young people with hearing loss completed an online survey, rating the importance of…

  4. Cochlear implants and spoken language processing abilities: review and assessment of the literature.

    Science.gov (United States)

    Peterson, Nathaniel R; Pisoni, David B; Miyamoto, Richard T

    2010-01-01

    Cochlear implants (CIs) process sounds electronically and then transmit electric stimulation to the cochlea of individuals with sensorineural deafness, restoring some sensation of auditory perception. Many congenitally deaf CI recipients achieve a high degree of accuracy in speech perception and develop near-normal language skills. Post-lingually deafened implant recipients often regain the ability to understand and use spoken language with or without the aid of visual input (i.e. lip reading). However, there is wide variation in individual outcomes following cochlear implantation, and some CI recipients never develop useable speech and oral language skills. The causes of this enormous variation in outcomes are only partly understood at the present time. The variables most strongly associated with language outcomes are age at implantation and mode of communication in rehabilitation. Thus, some of the more important factors determining success of cochlear implantation are broadly related to neural plasticity that appears to be transiently present in deaf individuals. In this article we review the expected outcomes of cochlear implantation, potential predictors of those outcomes, the basic science regarding critical and sensitive periods, and several new research directions in the field of cochlear implantation.

  5. Delayed Anticipatory Spoken Language Processing in Adults with Dyslexia—Evidence from Eye-tracking.

    Science.gov (United States)

    Huettig, Falk; Brouwer, Susanne

    2015-05-01

    It is now well established that anticipation of upcoming input is a key characteristic of spoken language comprehension. It has also frequently been observed that literacy influences spoken language processing. Here, we investigated whether anticipatory spoken language processing is related to individuals' word reading abilities. Dutch adults with dyslexia and a control group participated in two eye-tracking experiments. Experiment 1 was conducted to assess whether adults with dyslexia show the typical language-mediated eye gaze patterns. Eye movements of both adults with and without dyslexia closely replicated earlier research: spoken language is used to direct attention to relevant objects in the environment in a closely time-locked manner. In Experiment 2, participants received instructions (e.g., 'Kijk naar de(COM) afgebeelde piano(COM)', look at the displayed piano) while viewing four objects. Articles (Dutch 'het' or 'de') were gender marked such that the article agreed in gender only with the target, and thus, participants could use gender information from the article to predict the target object. The adults with dyslexia anticipated the target objects but much later than the controls. Moreover, participants' word reading scores correlated positively with their anticipatory eye movements. We conclude by discussing the mechanisms by which reading abilities may influence predictive language processing. Copyright © 2015 John Wiley & Sons, Ltd.

  6. Factors Influencing Verbal Intelligence and Spoken Language in Children with Phenylketonuria.

    Science.gov (United States)

    Soleymani, Zahra; Keramati, Nasrin; Rohani, Farzaneh; Jalaei, Shohre

    2015-05-01

Objective: To determine the verbal intelligence and spoken language of children with phenylketonuria, and to study the effect of age at diagnosis and phenylalanine plasma level on these abilities. Design: Cross-sectional. Setting: Children with phenylketonuria were recruited from pediatric hospitals in 2012; normal control subjects were recruited from kindergartens in Tehran. Participants: 30 phenylketonuria and 42 control subjects aged 4-6.5 years. Skills were compared among three phenylketonuria groups categorized by age at diagnosis/treatment, and between the phenylketonuria and control groups. Outcome measures: Scores on the Wechsler Preschool and Primary Scale of Intelligence for verbal and total intelligence, and on the Test of Language Development-Primary, third edition, for spoken language, listening, speaking, semantics, syntax, and organization. Results: The performance of control subjects was significantly better than that of early-treated subjects for all composite quotients from the Test of Language Development and for verbal intelligence (P…) …phenylketonuria subjects.

  7. CROATIAN ADULT SPOKEN LANGUAGE CORPUS (HrAL)

    Directory of Open Access Journals (Sweden)

    Jelena Kuvač Kraljević

    2016-01-01

Interest in spoken-language corpora has increased over the past two decades, leading to the development of new corpora and the discovery of new facets of spoken language. These types of corpora represent the most comprehensive data source about the language of ordinary speakers. Such corpora are based on spontaneous, unscripted speech spanning a variety of styles, registers and dialects. The aim of this paper is to present the Croatian Adult Spoken Language Corpus (HrAL), its structure and its possible applications in different linguistic subfields. HrAL was built by sampling spontaneous conversations among 617 speakers from all Croatian counties, and it comprises more than 250,000 tokens and more than 100,000 types. Data were collected during three time slots: from 2010 to 2012, from 2014 to 2015 and during 2016. HrAL is today available within TalkBank, a large database of spoken-language corpora covering different languages (https://talkbank.org), in the Conversational Analyses corpora within the subsection titled Conversational Banks. Data were transcribed, coded and segmented using the Codes for Human Analysis of Transcripts (CHAT) transcription format and the Computerised Language Analysis (CLAN) suite of programmes within the TalkBank toolkit. Speech streams were segmented into communication units (C-units) based on syntactic criteria. Most transcripts were linked to their source audios. TalkBank is publicly free, i.e. all data stored in it can be shared by the wider community in accordance with the basic rules of TalkBank. HrAL provides information about spoken grammar and lexicon, discourse skills, error production and productivity in general. It may be useful for sociolinguistic research and for studies of synchronic language change in Croatian.
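The token and type figures quoted for HrAL are standard corpus frequency statistics: tokens are running words, types are distinct word forms. A minimal sketch with toy utterances standing in for real CHAT transcripts:

```python
from collections import Counter

def token_type_counts(utterances):
    """Count running words (tokens) and distinct word forms (types)."""
    counts = Counter()
    for utt in utterances:
        counts.update(utt.lower().split())
    tokens = sum(counts.values())   # total running words
    types = len(counts)             # distinct word forms
    return tokens, types

# Toy sample standing in for transcribed C-units
sample = [
    "dobro jutro kako si",
    "dobro sam hvala a ti",
]
tokens, types = token_type_counts(sample)
print(tokens, types)  # 9 tokens, 8 types ("dobro" repeats)
```

Real corpus work would use CLAN's own frequency tools on CHAT files; this sketch only illustrates what the two numbers mean.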

  8. Auditory and verbal memory predictors of spoken language skills in children with cochlear implants.

    Science.gov (United States)

    de Hoog, Brigitte E; Langereis, Margreet C; van Weerdenburg, Marjolijn; Keuning, Jos; Knoors, Harry; Verhoeven, Ludo

    2016-10-01

Large variability in individual spoken language outcomes remains a persistent finding in the group of children with cochlear implants (CIs), particularly in their grammatical development. In the present study, we examined the extent of the delay in lexical and morphosyntactic spoken language levels of children with CIs compared with a normative sample of age-matched children with normal hearing. Furthermore, the predictive value of auditory and verbal memory factors for the spoken language performance of implanted children was analyzed. Thirty-nine profoundly deaf children with CIs were assessed using a test battery including lexical, grammatical, auditory, and verbal memory tests. Furthermore, child-related demographic characteristics were taken into account. The majority of the children with CIs did not reach age-equivalent lexical and morphosyntactic language skills. Multiple linear regression analyses revealed that lexical spoken language performance in children with CIs was best predicted by age at testing, phoneme perception, and auditory word closure. The morphosyntactic language outcomes of the CI group were best predicted by lexicon, auditory word closure, and auditory memory for words. Qualitatively good speech perception skills appear to be crucial for lexical and grammatical development in children with CIs. Furthermore, strongly developed vocabulary skills and verbal memory abilities predict morphosyntactic language skills. Copyright © 2016 Elsevier Ltd. All rights reserved.

  9. Students who are deaf and hard of hearing and use sign language: considerations and strategies for developing spoken language and literacy skills.

    Science.gov (United States)

    Nussbaum, Debra; Waddy-Smith, Bettie; Doyle, Jane

    2012-11-01

There is a core body of knowledge, experience, and skills integral to facilitating auditory, speech, and spoken language development when working with the general population of students who are deaf and hard of hearing. There are additional issues, strategies, and challenges inherent in speech habilitation/rehabilitation practices essential to the population of deaf and hard of hearing students who also use sign language. This article will highlight philosophical and practical considerations related to practices used to facilitate spoken language development and associated literacy skills for children and adolescents who sign. It will discuss considerations for planning and implementing practices that acknowledge and utilize a student's abilities in sign language, and address how to link these skills to developing and using spoken language. Included will be considerations for children from early childhood through high school with a broad range of auditory access, language, and communication characteristics.

  10. Spoken language outcomes after hemispherectomy: factoring in etiology.

    Science.gov (United States)

    Curtiss, S; de Bode, S; Mathern, G W

    2001-12-01

We analyzed postsurgery linguistic outcomes of 43 hemispherectomy patients operated on at UCLA. We rated spoken language (Spoken Language Rank, SLR) on a scale from 0 (no language) to 6 (mature grammar) and examined the effects of side of resection/damage, age at surgery/seizure onset, postsurgery seizure control, and etiology on language development. Etiology was classified as developmental (cortical dysplasia and prenatal stroke) or acquired (Rasmussen's encephalitis and postnatal stroke). We found that clinical variables were predictive of language outcomes only when they were considered within distinct etiology groups. Specifically, children with developmental etiologies had lower SLRs than those with acquired pathologies (p = .0006); age factors correlated positively with higher SLRs only for children with acquired etiologies (p = .0006); right-sided resections led to higher SLRs only for the acquired group (p = .0008); and postsurgery seizure control correlated positively with SLR only for those with developmental etiologies (p = .0047). We argue that the variables considered are not independent predictors of spoken language outcome post-hemispherectomy but should instead be viewed as characteristics of etiology. Copyright 2001 Elsevier Science.

  11. Spoken language development in oral preschool children with permanent childhood deafness.

    Science.gov (United States)

    Sarant, Julia Z; Holt, Colleen M; Dowell, Richard C; Rickards, Field W; Blamey, Peter J

    2009-01-01

    This article documented spoken language outcomes for preschool children with hearing loss and examined the relationships between language abilities and characteristics of children such as degree of hearing loss, cognitive abilities, age at entry to early intervention, and parent involvement in children's intervention programs. Participants were evaluated using a combination of the Child Development Inventory, the Peabody Picture Vocabulary Test, and the Preschool Clinical Evaluation of Language Fundamentals depending on their age at the time of assessment. Maternal education, cognitive ability, and family involvement were also measured. Over half of the children who participated in this study had poor language outcomes overall. No significant differences were found in language outcomes on any of the measures for children who were diagnosed early and those diagnosed later. Multiple regression analyses showed that family participation, degree of hearing loss, and cognitive ability significantly predicted language outcomes and together accounted for almost 60% of the variance in scores. This article highlights the importance of family participation in intervention programs to enable children to achieve optimal language outcomes. Further work may clarify the effects of early diagnosis on language outcomes for preschool children.

  12. The Relationship between Intrinsic Couplings of the Visual Word Form Area with Spoken Language Network and Reading Ability in Children and Adults

    Directory of Open Access Journals (Sweden)

    Yu Li

    2017-06-01

Reading plays a key role in education and communication in modern society. Learning to read establishes connections between the visual word form area (VWFA) and language areas responsible for speech processing. Using resting-state functional connectivity (RSFC) and Granger Causality Analysis (GCA) methods, the current developmental study aimed to identify differences in the relationship between VWFA-language-area connections and reading performance in adults and children. The results showed that: (1) the spontaneous connectivity between the VWFA and the spoken language areas, i.e., the left inferior frontal gyrus/supramarginal gyrus (LIFG/LSMG), was stronger in adults than in children; (2) the spontaneous functional patterns of connectivity between the VWFA and the language network were negatively correlated with reading ability in adults but not in children; (3) the causal influence from the LIFG to the VWFA was negatively correlated with reading ability in adults but not in children; (4) the RSFCs between the left posterior middle frontal gyrus (LpMFG) and the VWFA/LIFG were positively correlated with reading ability in both adults and children; and (5) the causal influence from the LIFG to the LSMG was positively correlated with reading ability in both groups. These findings provide insights into the relationship between the VWFA and the language network for reading, and into the role of the unique features of Chinese in the neural circuits of reading.
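Granger Causality Analysis, as used in this study, asks whether the past of one region's signal improves prediction of another region's signal beyond that signal's own past. A toy lag-1 sketch on synthetic time series (numpy only; the study itself used fMRI-specific tooling, so this is only the bare idea):

```python
import numpy as np

def lag1_granger_gain(x, y):
    """Reduction in residual variance when predicting y[t] from
    (y[t-1], x[t-1]) instead of y[t-1] alone (toy lag-1 Granger test)."""
    Y, Y1, X1 = y[1:], y[:-1], x[:-1]
    def resid_var(predictors):
        A = np.column_stack([np.ones(len(Y))] + predictors)
        beta, *_ = np.linalg.lstsq(A, Y, rcond=None)
        return np.var(Y - A @ beta)
    return resid_var([Y1]) - resid_var([Y1, X1])

rng = np.random.default_rng(1)
n = 2000
x = rng.normal(size=n)          # "source" signal
y = np.zeros(n)
for t in range(1, n):           # y is driven by x's past
    y[t] = 0.5 * y[t - 1] + 0.4 * x[t - 1] + 0.1 * rng.normal()

gain_xy = lag1_granger_gain(x, y)  # x's past helps predict y
gain_yx = lag1_granger_gain(y, x)  # y's past should not help predict x
print(gain_xy > gain_yx)
```

The asymmetry between the two gains is what licenses a directed claim such as "causal influence from LIFG to VWFA".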

  13. Spoken grammar awareness raising: Does it affect the listening ability of Iranian EFL learners?

    Directory of Open Access Journals (Sweden)

    Mojgan Rashtchi

    2011-12-01

Advances in spoken corpora analysis have brought about new insights into language pedagogy and have led to an awareness of the characteristics of spoken language. Current findings have shown that the grammar of spoken language differs from that of written language. However, most listening and speaking materials are constructed on the basis of written grammar and lack core spoken language features. The aim of the present study was to explore whether awareness of spoken grammar features could affect learners' comprehension of real-life conversations. To this end, 45 university students in two intact classes participated in a listening course employing corpus-based materials. The spoken grammar features were taught to the experimental group overtly, through awareness-raising tasks, whereas the control group, though exposed to the same materials, was not provided with such tasks. The results of independent-samples t tests revealed that the learners in the experimental group comprehended everyday conversations much better than those in the control group. Additionally, the learners' highly positive views of spoken grammar, elicited by means of a retrospective questionnaire, were generally comparable to those reported in the literature.
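The group comparison in this record rests on an independent-samples t test. A minimal pooled-variance sketch with hypothetical listening-comprehension scores (not the study's data):

```python
import numpy as np

def independent_t(a, b):
    """Pooled-variance independent-samples t statistic (toy version)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    na, nb = len(a), len(b)
    sp2 = ((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2)
    return (a.mean() - b.mean()) / np.sqrt(sp2 * (1 / na + 1 / nb))

# Hypothetical scores: awareness-raising group vs control group
experimental = [78, 82, 75, 88, 90, 73, 85, 80]
control      = [70, 68, 74, 65, 72, 69, 71, 66]
t = independent_t(experimental, control)
print(round(t, 2))
```

A positive t here means the experimental group's mean exceeds the control's; in practice one would compare it against a t distribution with na + nb - 2 degrees of freedom for a p-value.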

  14. Semantic Fluency in Deaf Children Who Use Spoken and Signed Language in Comparison with Hearing Peers

    Science.gov (United States)

    Marshall, C. R.; Jones, A.; Fastelli, A.; Atkinson, J.; Botting, N.; Morgan, G.

    2018-01-01

    Background: Deafness has an adverse impact on children's ability to acquire spoken languages. Signed languages offer a more accessible input for deaf children, but because the vast majority are born to hearing parents who do not sign, their early exposure to sign language is limited. Deaf children as a whole are therefore at high risk of language…

  15. Asian/Pacific Islander Languages Spoken by English Learners (ELs). Fast Facts

    Science.gov (United States)

    Office of English Language Acquisition, US Department of Education, 2015

    2015-01-01

    The Office of English Language Acquisition (OELA) has synthesized key data on English learners (ELs) into two-page PDF sheets, by topic, with graphics, plus key contacts. The topics for this report on Asian/Pacific Islander languages spoken by English Learners (ELs) include: (1) Top 10 Most Common Asian/Pacific Islander Languages Spoken Among ELs:…

  16. Spoken Grammar and Its Role in the English Language Classroom

    Science.gov (United States)

    Hilliard, Amanda

    2014-01-01

    This article addresses key issues and considerations for teachers wanting to incorporate spoken grammar activities into their own teaching and also focuses on six common features of spoken grammar, with practical activities and suggestions for teaching them in the language classroom. The hope is that this discussion of spoken grammar and its place…

  17. Spoken Spanish Language Development at the High School Level: A Mixed-Methods Study

    Science.gov (United States)

    Moeller, Aleidine J.; Theiler, Janine

    2014-01-01

    Communicative approaches to teaching language have emphasized the centrality of oral proficiency in the language acquisition process, but research investigating oral proficiency has been surprisingly limited, yielding an incomplete understanding of spoken language development. This study investigated the development of spoken language at the high…

  18. Analytic study of the Tadoma method: language abilities of three deaf-blind subjects.

    Science.gov (United States)

    Chomsky, C

    1986-09-01

    This study reports on the linguistic abilities of 3 adult deaf-blind subjects. The subjects perceive spoken language through touch, placing a hand on the face of the speaker and monitoring the speaker's articulatory motions, a method of speechreading known as Tadoma. Two of the subjects, deaf-blind since infancy, acquired language and learned to speak through this tactile system; the third subject has used Tadoma since becoming deaf-blind at age 7. Linguistic knowledge and productive language are analyzed, using standardized tests and several tests constructed for this study. The subjects' language abilities prove to be extensive, comparing favorably in many areas with hearing individuals. The results illustrate a relatively minor effect of limited language exposure on eventual language achievement. The results also demonstrate the adequacy of the tactile sense, in these highly trained Tadoma users, for transmitting information about spoken language sufficient to support the development of language and learning to produce speech.

  19. Prosodic Parallelism – comparing spoken and written language

    Directory of Open Access Journals (Sweden)

    Richard Wiese

    2016-10-01

The Prosodic Parallelism hypothesis claims that adjacent prosodic categories prefer identical branching of internal adjacent constituents. According to Wiese and Speyer (2015), this preference implies that feet contained in the same phonological phrase display either binary or unary branching, but not different types of branching. The seemingly free schwa-zero alternations at the end of some words in German make it possible to test this hypothesis. The hypothesis was successfully tested in a corpus study using large-scale bodies of written German. As some open questions remain, and as it is unclear whether Prosodic Parallelism holds for the spoken modality as well, the present study extends this inquiry to spoken German. As in the previous study, the results of a corpus analysis drawing on a variety of linguistic constructions are presented. The Prosodic Parallelism hypothesis is shown to be valid for spoken German as well as for written German. The paper thus contributes to the question of whether prosodic preferences are similar between the spoken and written modes of a language. Some consequences of the results for the production of language are discussed.

  20. The Listening and Spoken Language Data Repository: Design and Project Overview

    Science.gov (United States)

    Bradham, Tamala S.; Fonnesbeck, Christopher; Toll, Alice; Hecht, Barbara F.

    2018-01-01

    Purpose: The purpose of the Listening and Spoken Language Data Repository (LSL-DR) was to address a critical need for a systemwide outcome data-monitoring program for the development of listening and spoken language skills in highly specialized educational programs for children with hearing loss highlighted in Goal 3b of the 2007 Joint Committee…

  1. Spoken Indian language identification: a review of features and ...

    Indian Academy of Sciences (India)

    BAKSHI AARTI

    2018-04-12

… languages and can be used for the purposes of spoken language identification. Keywords: SLID … branch of linguistics to study the sound structure of human language … work in the area of Indian language identification has not … English and speech database has been collected over tele…

  2. Improving Spoken Language Outcomes for Children With Hearing Loss: Data-driven Instruction.

    Science.gov (United States)

    Douglas, Michael

    2016-02-01

Objective: To assess the effects of data-driven instruction (DDI) on spoken language outcomes of children with cochlear implants and hearing aids. Study design: Retrospective, matched-pairs comparison of post-treatment speech/language data of children who did and did not receive DDI. Setting: Private, spoken-language preschool for children with hearing loss. Patients: Eleven matched pairs of children with cochlear implants who attended the same spoken language preschool. Groups were matched for age of hearing device fitting, time in the program, degree of pre-device-fitting hearing loss, sex, and age at testing. Methods: Daily informal language samples were collected and analyzed over a 2-year period, per preschool protocol. Annual informal and formal spoken language assessments in articulation, vocabulary, and omnibus language were administered at the end of three time intervals: baseline, end of year one, and end of year two. Main outcome measures: The primary outcome measures were total raw score performance on spontaneous utterance sentence types and syntax element use as measured by the Teacher Assessment of Spoken Language (TASL). In addition, standardized assessments (the Clinical Evaluation of Language Fundamentals-Preschool, Version 2 (CELF-P2), the Expressive One-Word Picture Vocabulary Test (EOWPVT), the Receptive One-Word Picture Vocabulary Test (ROWPVT), and the Goldman-Fristoe Test of Articulation 2 (GFTA2)) were administered and compared with the control group. Results: The DDI group demonstrated significantly higher raw scores on the TASL in each year of the study. The DDI group also achieved statistically significantly higher scores for total language on the CELF-P2 and for expressive vocabulary on the EOWPVT, but not for articulation or receptive vocabulary. Post-hoc assessment revealed that 78% of the students in the DDI group achieved scores in the average range, compared with 59% in the control group. The preliminary results of this study support further investigation of whether DDI can consistently…

  3. Speech-Language Pathologists: Vital Listening and Spoken Language Professionals

    Science.gov (United States)

    Houston, K. Todd; Perigoe, Christina B.

    2010-01-01

    Determining the most effective methods and techniques to facilitate the spoken language development of individuals with hearing loss has been a focus of practitioners for centuries. Due to modern advances in hearing technology, earlier identification of hearing loss, and immediate enrollment in early intervention, children with hearing loss are…

  4. Predicting reading ability in teenagers who are deaf or hard of hearing: A longitudinal analysis of language and reading.

    Science.gov (United States)

    Worsfold, Sarah; Mahon, Merle; Pimperton, Hannah; Stevenson, Jim; Kennedy, Colin

    2018-04-13

Aims: Deaf and hard of hearing (D/HH) children and young people are known to show group-level deficits in spoken language and reading abilities relative to their hearing peers. However, there is little evidence on the longitudinal predictive relationships between language and reading in this population. This study aimed to determine the extent to which differences in spoken language ability in childhood predict reading ability in D/HH adolescents. Methods and procedures: Participants were drawn from a population-based cohort study and comprised 53 D/HH teenagers, who used spoken language, and a comparison group of 38 normally hearing teenagers. All had completed standardised measures of spoken language (expression and comprehension) and reading (accuracy and comprehension) at 6-10 and 13-19 years of age. Outcomes and results: Forced-entry stepwise regression showed that, after taking reading ability at age 8 years into account, language scores at age 8 years did not add significantly to the prediction of Reading Accuracy z-scores at age 17 years (change in R² = 0.01, p = .459) but did make a significant contribution to the prediction of Reading Comprehension z-scores at age 17 years (change in R² = 0.17, p …). Conclusions and implications: Language skills in middle childhood predict reading comprehension ability in adolescence. Continued intervention to support language development beyond primary school has the potential to benefit reading comprehension and hence educational access for D/HH adolescents. Copyright © 2018 The Authors. Published by Elsevier Ltd. All rights reserved.

  5. What Comes First, What Comes Next: Information Packaging in Written and Spoken Language

    Directory of Open Access Journals (Sweden)

    Vladislav Smolka

    2017-07-01

The paper explores similarities and differences in the strategies of structuring information at sentence level in spoken and written language, respectively. In particular, it is concerned with the position of the rheme in the sentence in the two modalities of language, and with the application and correlation of the end-focus and end-weight principles. The assumption is that while there is a general tendency in both written and spoken language to place the focus in or close to the final position, owing to the limitations imposed by short-term memory capacity (and possibly by other factors) and for the sake of easy processability, it may occasionally be more felicitous in spoken language to place the rhematic element in the initial position, or at least close to the beginning of the sentence. The paper aims to identify differences in the function of selected grammatical structures in written and spoken language, and to point out circumstances under which initial focus is a convenient alternative to the usual end-focus principle.

  6. Pointing and Reference in Sign Language and Spoken Language: Anchoring vs. Identifying

    Science.gov (United States)

    Barberà, Gemma; Zwets, Martine

    2013-01-01

    In both signed and spoken languages, pointing serves to direct an addressee's attention to a particular entity. This entity may be either present or absent in the physical context of the conversation. In this article we focus on pointing directed to nonspeaker/nonaddressee referents in Sign Language of the Netherlands (Nederlandse Gebarentaal,…

  7. ELSIE: The Quick Reaction Spoken Language Translation (QRSLT)

    National Research Council Canada - National Science Library

    Montgomery, Christine

    2000-01-01

    The objective of this effort was to develop a prototype, hand-held or body-mounted spoken language translator to assist military and law enforcement personnel in interacting with non-English-speaking people...

  8. Using Spoken Language to Facilitate Military Transportation Planning

    National Research Council Canada - National Science Library

    Bates, Madeleine; Ellard, Dan; Peterson, Pat; Shaked, Varda

    1991-01-01

… In an effort to demonstrate the relevance of SIS technology to real-world military applications, BBN has undertaken the task of providing a spoken language interface to DART, a system for military…

  9. Word reading skill predicts anticipation of upcoming spoken language input: a study of children developing proficiency in reading.

    Science.gov (United States)

    Mani, Nivedita; Huettig, Falk

    2014-10-01

    Despite the efficiency with which language users typically process spoken language, a growing body of research finds substantial individual differences in both the speed and accuracy of spoken language processing, potentially attributable to participants' literacy skills. Against this background, the current study examined the role of word reading skill in listeners' anticipation of upcoming spoken language input in children at the cusp of learning to read; if reading skills affect predictive language processing, then children at this stage of literacy acquisition should be most susceptible to the effects of reading skills on spoken language processing. We tested 8-year-olds on their prediction of upcoming spoken language input in an eye-tracking task. Although the children, as in previous studies, successfully anticipated upcoming spoken language input, there was a strong positive correlation between children's word reading skills (but not their pseudo-word reading, meta-phonological awareness, or spoken word recognition skills) and their prediction skills. We suggest that these findings are most compatible with the notion that the process of learning orthographic representations during reading acquisition sharpens pre-existing lexical representations, which in turn also supports anticipation of upcoming spoken words. Copyright © 2014 Elsevier Inc. All rights reserved.

  10. The effects of sign language on spoken language acquisition in children with hearing loss: a systematic review protocol.

    Science.gov (United States)

    Fitzpatrick, Elizabeth M; Stevens, Adrienne; Garritty, Chantelle; Moher, David

    2013-12-06

    Permanent childhood hearing loss affects 1 to 3 per 1000 children and frequently disrupts typical spoken language acquisition. Early identification of hearing loss through universal newborn hearing screening and the use of new hearing technologies including cochlear implants make spoken language an option for most children. However, there is no consensus on what constitutes optimal interventions for children when spoken language is the desired outcome. Intervention and educational approaches ranging from oral language only to oral language combined with various forms of sign language have evolved. Parents are therefore faced with important decisions in the first months of their child's life. This article presents the protocol for a systematic review of the effects of using sign language in combination with oral language intervention on spoken language acquisition. Studies addressing early intervention will be selected in which therapy involving oral language intervention and any form of sign language or sign support is used. Comparison groups will include children in early oral language intervention programs without sign support. The primary outcomes of interest to be examined include all measures of auditory, vocabulary, language, speech production, and speech intelligibility skills. We will include randomized controlled trials, controlled clinical trials, and other quasi-experimental designs that include comparator groups as well as prospective and retrospective cohort studies. Case-control, cross-sectional, case series, and case studies will be excluded. Several electronic databases will be searched (for example, MEDLINE, EMBASE, CINAHL, PsycINFO) as well as grey literature and key websites. We anticipate that a narrative synthesis of the evidence will be required. We will carry out meta-analysis for outcomes if clinical similarity, quantity and quality permit quantitative pooling of data. We will conduct subgroup analyses if possible according to severity

  11. How sensory-motor systems impact the neural organization for language: direct contrasts between spoken and signed language

    Science.gov (United States)

    Emmorey, Karen; McCullough, Stephen; Mehta, Sonya; Grabowski, Thomas J.

    2014-01-01

    To investigate the impact of sensory-motor systems on the neural organization for language, we conducted an H2(15)O-PET study of sign and spoken word production (picture-naming) and an fMRI study of sign and audio-visual spoken language comprehension (detection of a semantically anomalous sentence) with hearing bilinguals who are native users of American Sign Language (ASL) and English. Directly contrasting speech and sign production revealed greater activation in bilateral parietal cortex for signing, while speaking resulted in greater activation in bilateral superior temporal cortex (STC) and right frontal cortex, likely reflecting auditory feedback control. Surprisingly, the language production contrast revealed a relative increase in activation in bilateral occipital cortex for speaking. We speculate that greater activation in visual cortex for speaking may actually reflect cortical attenuation when signing, which functions to distinguish self-produced from externally generated visual input. Directly contrasting speech and sign comprehension revealed greater activation in bilateral STC for speech and greater activation in bilateral occipital-temporal cortex for sign. Sign comprehension, like sign production, engaged bilateral parietal cortex to a greater extent than spoken language. We hypothesize that posterior parietal activation in part reflects processing related to spatial classifier constructions in ASL and that anterior parietal activation may reflect covert imitation that functions as a predictive model during sign comprehension. The conjunction analysis for comprehension revealed that both speech and sign bilaterally engaged the inferior frontal gyrus (with more extensive activation on the left) and the superior temporal sulcus, suggesting an invariant bilateral perisylvian language system. We conclude that surface-level differences between sign and spoken languages should not be dismissed and are critical for understanding the neurobiology of language.

  12. Cognitive Predictors of Spoken Word Recognition in Children With and Without Developmental Language Disorders.

    Science.gov (United States)

    Evans, Julia L; Gillam, Ronald B; Montgomery, James W

    2018-05-10

    This study examined the influence of cognitive factors on spoken word recognition in children with developmental language disorder (DLD) and typically developing (TD) children. Participants included 234 children (aged 7;0-11;11 years;months), 117 with DLD and 117 TD children, propensity matched for age, gender, socioeconomic status, and maternal education. Children completed a series of standardized assessment measures, a forward gating task, a rapid automatic naming task, and a series of tasks designed to examine cognitive factors hypothesized to influence spoken word recognition, including phonological working memory, updating, attention shifting, and interference inhibition. Spoken word recognition for both initial and final accept gate points did not differ for children with DLD and TD controls after controlling for target word knowledge in both groups. The 2 groups also did not differ on measures of updating, attention switching, and interference inhibition. Despite the lack of difference on these measures, for children with DLD, attention shifting and interference inhibition were significant predictors of spoken word recognition, whereas updating and receptive vocabulary were significant predictors of speed of spoken word recognition for the children in the TD group. Contrary to expectations, after controlling for target word knowledge, spoken word recognition did not differ for children with DLD and TD controls; however, the cognitive processing factors that influenced children's ability to recognize the target word in a stream of speech differed qualitatively for children with and without DLDs.
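The propensity matching mentioned above (pairing each child with DLD to a TD child with a similar estimated likelihood of group membership given age, gender, socioeconomic status, and maternal education) is not spelled out in the abstract. A minimal sketch of one common variant, greedy 1:1 nearest-neighbor matching with a caliper over precomputed propensity scores, might look like this; all identifiers and score values here are illustrative assumptions, not the authors' procedure:

```python
# Sketch of greedy 1:1 propensity-score matching with a caliper.
# Propensity scores are assumed to be precomputed (e.g., by a logistic
# regression of group membership on the matching covariates).

def greedy_caliper_match(treated, controls, caliper=0.05):
    """Match each treated unit to its nearest unused control unit.

    treated, controls: lists of (unit_id, propensity_score) tuples.
    caliper: maximum allowed |score difference| for a valid match.
    Returns a list of (treated_id, control_id) pairs.
    """
    available = dict(controls)  # control_id -> propensity score
    pairs = []
    # Match extreme-score (hardest-to-match) treated units first.
    for t_id, t_score in sorted(treated, key=lambda x: x[1], reverse=True):
        if not available:
            break
        # Nearest available control by propensity distance.
        c_id = min(available, key=lambda c: abs(available[c] - t_score))
        if abs(available[c_id] - t_score) <= caliper:
            pairs.append((t_id, c_id))
            del available[c_id]  # each control is used at most once
    return pairs

if __name__ == "__main__":
    dld = [("d1", 0.62), ("d2", 0.48), ("d3", 0.91)]
    td = [("t1", 0.60), ("t2", 0.50), ("t3", 0.10), ("t4", 0.89)]
    print(greedy_caliper_match(dld, td))
```

Greedy matching with a caliper trades optimality for simplicity; studies of this kind may instead use optimal matching, which minimizes the total within-pair distance.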

  13. Retinoic acid signaling: a new piece in the spoken language puzzle

    Directory of Open Access Journals (Sweden)

    Jon-Ruben van Rhijn

    2015-11-01

    Full Text Available Speech requires precise motor control and rapid sequencing of highly complex vocal musculature. Despite its complexity, most people produce spoken language effortlessly. This is due to activity in distributed neuronal circuitry, including cortico-striato-thalamic loops that control speech-motor output. Understanding the neurogenetic mechanisms that encode these pathways will shed light on how humans can effortlessly and innately use spoken language and could elucidate what goes wrong in speech-language disorders. FOXP2 was the first single gene identified to cause a speech and language disorder. Individuals with FOXP2 mutations display a severe speech deficit that also includes receptive and expressive language impairments. The underlying neuromolecular mechanisms controlled by FOXP2, which will give insight into our capacity for speech-motor control, are only beginning to be unraveled. Recently FOXP2 was found to regulate genes involved in retinoic acid signaling and to modify the cellular response to retinoic acid, a key regulator of brain development. Herein we explore the evidence that FOXP2 and retinoic acid signaling function in the same pathways. We present evidence at molecular, cellular, and behavioral levels that suggests an interplay between FOXP2 and retinoic acid that may be important for fine motor control and speech-motor output. We propose that retinoic acid signaling is an exciting new angle from which to investigate how neurogenetic mechanisms can contribute to the (spoken) language-ready brain.

  14. Acquisition of graphic communication by a young girl without comprehension of spoken language.

    Science.gov (United States)

    von Tetzchner, S; Øvreeide, K D; Jørgensen, K K; Ormhaug, B M; Oxholm, B; Warme, R

    To describe a graphic-mode communication intervention involving a girl with intellectual impairment and autism who did not develop comprehension of spoken language. The aim was to teach graphic-mode vocabulary that reflected her interests, preferences, and the activities and routines of her daily life, by providing sufficient cues to the meanings of the graphic representations so that she would not need to comprehend spoken instructions. An individual case study design was selected, including the use of written records, participant observation, and registration of the girl's graphic vocabulary and use of graphic signs and other communicative expressions. While the girl's comprehension (and hence use) of spoken language remained lacking over a 3-year period, she acquired an active use of over 80 photographs and pictograms. The girl was able to cope better with the cognitive and attentional requirements of graphic communication than those of spoken language and manual signs, which had been the focus of earlier interventions. Her achievements demonstrate that it is possible for communication-impaired children to learn to use an augmentative and alternative communication system without speech comprehension, provided the intervention utilizes functional strategies and non-language cues to the meaning of the graphic representations that are taught.

  15. Deep bottleneck features for spoken language identification.

    Directory of Open Access Journals (Sweden)

    Bing Jiang

    Full Text Available A key problem in spoken language identification (LID) is to design effective representations which are specific to language information. For example, in recent years, representations based on both phonotactic and acoustic features have proven their effectiveness for LID. Although advances in machine learning have led to significant improvements, LID performance is still lacking, especially for short-duration speech utterances. With the hypothesis that language information is weak and represented only latently in speech, and is largely dependent on the statistical properties of the speech content, existing representations may be insufficient. Furthermore, they may be susceptible to the variations caused by different speakers, specific content of the speech segments, and background noise. To address this, we propose using Deep Bottleneck Features (DBF) for spoken LID, motivated by the success of Deep Neural Networks (DNN) in speech recognition. We show that DBFs can form a low-dimensional compact representation of the original inputs with a powerful descriptive and discriminative capability. To evaluate the effectiveness of this, we design two acoustic models, termed DBF-TV and parallel DBF-TV (PDBF-TV), using a DBF-based i-vector representation for each speech utterance. Results on the NIST language recognition evaluation 2009 (LRE09) show significant improvements over state-of-the-art systems. By fusing the output of phonotactic and acoustic approaches, we achieve an EER of 1.08%, 1.89% and 7.01% for 30 s, 10 s and 3 s test utterances respectively. Furthermore, various DBF configurations have been extensively evaluated, and an optimal system proposed.
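The equal error rate (EER) reported above is the operating point at which the false acceptance rate equals the false rejection rate. As an illustration only (the score values and the threshold sweep below are assumptions, not the NIST LRE09 scoring protocol), EER can be estimated from target and non-target trial scores like this:

```python
# Sketch of an equal error rate (EER) computation by threshold sweep.
# A trial is accepted when its score >= threshold.

def equal_error_rate(target_scores, nontarget_scores):
    """Return (eer, threshold) where FAR and FRR are closest.

    FRR = fraction of target trials rejected (score below threshold).
    FAR = fraction of non-target trials accepted (score at/above threshold).
    The EER is reported as the mean of FAR and FRR at the best threshold.
    """
    best = None
    for thr in sorted(target_scores + nontarget_scores):
        frr = sum(s < thr for s in target_scores) / len(target_scores)
        far = sum(s >= thr for s in nontarget_scores) / len(nontarget_scores)
        gap = abs(far - frr)
        if best is None or gap < best[0]:
            best = (gap, (far + frr) / 2, thr)
    _, eer, thr = best
    return eer, thr

if __name__ == "__main__":
    targets = [2.1, 1.7, 0.9, 1.4, 0.2]      # scores for correct-language trials
    nontargets = [-1.3, -0.4, 0.5, -0.9, 1.0]  # scores for wrong-language trials
    print(equal_error_rate(targets, nontargets))  # -> (0.2, 0.9)
```

Production evaluations typically interpolate on the DET curve rather than sweeping raw scores, but the fixed point being sought is the same.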

  16. Sign Language and Spoken Language for Children With Hearing Loss: A Systematic Review.

    Science.gov (United States)

    Fitzpatrick, Elizabeth M; Hamel, Candyce; Stevens, Adrienne; Pratt, Misty; Moher, David; Doucet, Suzanne P; Neuss, Deirdre; Bernstein, Anita; Na, Eunjung

    2016-01-01

    Permanent hearing loss affects 1 to 3 per 1000 children and interferes with typical communication development. Early detection through newborn hearing screening and hearing technology provide most children with the option of spoken language acquisition. However, no consensus exists on optimal interventions for spoken language development. To conduct a systematic review of the effectiveness of early sign and oral language intervention compared with oral language intervention only for children with permanent hearing loss. An a priori protocol was developed. Electronic databases (eg, Medline, Embase, CINAHL) from 1995 to June 2013 and gray literature sources were searched. Studies in English and French were included. Two reviewers screened potentially relevant articles. Outcomes of interest were measures of auditory, vocabulary, language, and speech production skills. All data collection and risk of bias assessments were completed and then verified by a second person. Grades of Recommendation, Assessment, Development, and Evaluation (GRADE) was used to judge the strength of evidence. Eleven cohort studies met inclusion criteria, of which 8 included only children with severe to profound hearing loss with cochlear implants. Language development was the most frequently reported outcome. Other reported outcomes included speech and speech perception. Several measures and metrics were reported across studies, and descriptions of interventions were sometimes unclear. Very limited, and hence insufficient, high-quality evidence exists to determine whether sign language in combination with oral language is more effective than oral language therapy alone. More research is needed to supplement the evidence base. Copyright © 2016 by the American Academy of Pediatrics.

  17. A real-time spoken-language system for interactive problem-solving, combining linguistic and statistical technology for improved spoken language understanding

    Science.gov (United States)

    Moore, Robert C.; Cohen, Michael H.

    1993-09-01

    Under this effort, SRI has developed spoken-language technology for interactive problem solving, featuring real-time performance for up to several thousand word vocabularies, high semantic accuracy, habitability within the domain, and robustness to many sources of variability. Although the technology is suitable for many applications, efforts to date have focused on developing an Air Travel Information System (ATIS) prototype application. SRI's ATIS system has been evaluated in four ARPA benchmark evaluations, and has consistently been at or near the top in performance. These achievements are the result of SRI's technical progress in speech recognition, natural-language processing, and speech and natural-language integration.

  18. SPOKEN-LANGUAGE FEATURES IN CASUAL CONVERSATION A Case of EFL Learners‘ Casual Conversation

    Directory of Open Access Journals (Sweden)

    Aris Novi

    2017-12-01

    Full Text Available Spoken text differs from written text in its context dependency, turn-taking organization, and dynamic structure. EFL learners, however, sometimes find it difficult to produce the typical characteristics of spoken language, particularly in casual talk. When they are asked to conduct a conversation, some of them tend to be script-based, which is considered unnatural. Using the theory of Thornbury (2005), this paper aims to analyze characteristics of spoken language in casual conversation, which cover spontaneity, interactivity, interpersonality, and coherence. This study used discourse analysis to reveal the four features in the turns and moves of three casual conversations. The findings indicate that not all sub-features were used in the conversations. In this case, the spontaneity features were used 132 times; the interactivity features were used 1081 times; the interpersonality features were used 257 times; and the coherence (negotiation) features were used 526 times. The results also show that some participants dominantly produced certain sub-features naturally, while others did not. These findings are expected to provide a model of how spoken interaction should be carried out and, more importantly, to raise English teachers' and lecturers' awareness of teaching the features of spoken language, so that students can develop their communicative competence as native speakers of English do.

  19. Intervention Effects on Spoken-Language Outcomes for Children with Autism: A Systematic Review and Meta-Analysis

    Science.gov (United States)

    Hampton, L. H.; Kaiser, A. P.

    2016-01-01

    Background: Although spoken-language deficits are not core to an autism spectrum disorder (ASD) diagnosis, many children with ASD do present with delays in this area. Previous meta-analyses have assessed the effects of intervention on reducing autism symptomatology, but have not determined if intervention improves spoken language. This analysis…

  20. The employment of a spoken language computer applied to an air traffic control task.

    Science.gov (United States)

    Laveson, J. I.; Silver, C. A.

    1972-01-01

    Assessment of the merits of a limited spoken language (56 words) computer in a simulated air traffic control (ATC) task. An airport zone approximately 60 miles in diameter with a traffic flow simulation ranging from single-engine to commercial jet aircraft provided the workload for the controllers. This research determined that, under the circumstances of the experiments carried out, the use of a spoken-language computer would not improve the controller performance.

  1. Auditory and verbal memory predictors of spoken language skills in children with cochlear implants

    NARCIS (Netherlands)

    Hoog, B.E. de; Langereis, M.C.; Weerdenburg, M. van; Keuning, J.; Knoors, H.; Verhoeven, L.

    2016-01-01

    BACKGROUND: Large variability in individual spoken language outcomes remains a persistent finding in the group of children with cochlear implants (CIs), particularly in their grammatical development. AIMS: In the present study, we examined the extent of delay in lexical and morphosyntactic spoken

  3. Iconicity as a general property of language: evidence from spoken and signed languages

    Directory of Open Access Journals (Sweden)

    Pamela Perniss

    2010-12-01

    Full Text Available Current views about language are dominated by the idea of arbitrary connections between linguistic form and meaning. However, if we look beyond the more familiar Indo-European languages and also include both spoken and signed language modalities, we find that motivated, iconic form-meaning mappings are, in fact, pervasive in language. In this paper, we review the different types of iconic mappings that characterize languages in both modalities, including the predominantly visually iconic mappings in signed languages. Having shown that iconic mappings are present across languages, we then proceed to review evidence showing that language users (signers and speakers) exploit iconicity in language processing and language acquisition. While not discounting the presence and importance of arbitrariness in language, we put forward the idea that iconicity also needs to be recognized as a general property of language, which may serve the function of reducing the gap between linguistic form and conceptual representation to allow the language system to hook up to motor and perceptual experience.

  4. Spoken Word Recognition in Adolescents with Autism Spectrum Disorders and Specific Language Impairment

    Science.gov (United States)

    Loucas, Tom; Riches, Nick; Baird, Gillian; Pickles, Andrew; Simonoff, Emily; Chandler, Susie; Charman, Tony

    2013-01-01

    Spoken word recognition, during gating, appears intact in specific language impairment (SLI). This study used gating to investigate the process in adolescents with autism spectrum disorders plus language impairment (ALI). Adolescents with ALI, SLI, and typical language development (TLD), matched on nonverbal IQ listened to gated words that varied…

  5. Tonal Language Background and Detecting Pitch Contour in Spoken and Musical Items

    Science.gov (United States)

    Stevens, Catherine J.; Keller, Peter E.; Tyler, Michael D.

    2013-01-01

    An experiment investigated the effect of tonal language background on discrimination of pitch contour in short spoken and musical items. It was hypothesized that extensive exposure to a tonal language attunes perception of pitch contour. Accuracy and reaction times of adult participants from tonal (Thai) and non-tonal (Australian English) language…

  6. Grammatical Deviations in the Spoken and Written Language of Hebrew-Speaking Children With Hearing Impairments.

    Science.gov (United States)

    Tur-Kaspa, Hana; Dromi, Esther

    2001-04-01

    The present study reports a detailed analysis of written and spoken language samples of Hebrew-speaking children aged 11-13 years who are deaf. It focuses on the description of various grammatical deviations in the two modalities. Participants were 13 students with hearing impairments (HI) attending special classrooms integrated into two elementary schools in Tel Aviv, Israel, and 9 students with normal hearing (NH) in regular classes in these same schools. Spoken and written language samples were collected from all participants using the same five preplanned elicitation probes. Students with HI were found to display significantly more grammatical deviations than their NH peers in both their spoken and written language samples. Most importantly, between-modality differences were noted. The participants with HI exhibited significantly more grammatical deviations in their written language samples than in their spoken samples. However, the distribution of grammatical deviations across categories was similar in the two modalities. The most common grammatical deviations in order of their frequency were failure to supply obligatory morphological markers, failure to mark grammatical agreement, and the omission of a major syntactic constituent in a sentence. Word order violations were rarely recorded in the Hebrew samples. Performance differences in the two modalities encourage clinicians and teachers to facilitate target linguistic forms in diverse communication contexts. Furthermore, the identification of linguistic targets for intervention must be based on the unique grammatical structure of the target language.

  7. Three-dimensional grammar in the brain: Dissociating the neural correlates of natural sign language and manually coded spoken language.

    Science.gov (United States)

    Jednoróg, Katarzyna; Bola, Łukasz; Mostowski, Piotr; Szwed, Marcin; Boguszewski, Paweł M; Marchewka, Artur; Rutkowski, Paweł

    2015-05-01

    In several countries, natural sign languages were considered inadequate for education. Instead, new sign-supported systems were created, based on the belief that spoken/written language is grammatically superior. One such system, called SJM (system językowo-migowy), preserves the grammatical and lexical structure of spoken Polish and since the 1960s has been extensively employed in schools and on TV. Nevertheless, the Deaf community avoids using SJM for everyday communication, its preferred language being PJM (polski język migowy), a natural sign language, structurally and grammatically independent of spoken Polish and featuring classifier constructions (CCs). Here, for the first time, we use fMRI to compare the neural bases of natural vs. devised communication systems. Deaf signers were presented with three types of signed sentences (SJM and PJM with/without CCs). Consistent with previous findings, PJM with CCs, compared to either SJM or PJM without CCs, recruited the parietal lobes. The reverse comparison revealed activation in the anterior temporal lobes, suggesting increased semantic combinatory processes in lexical sign comprehension. Finally, PJM compared with SJM engaged left posterior superior temporal gyrus and anterior temporal lobe, areas crucial for sentence-level speech comprehension. We suggest that activity in these two areas reflects greater processing efficiency for naturally evolved sign language. Copyright © 2015 Elsevier Ltd. All rights reserved.

  8. Spoken Language Understanding Systems for Extracting Semantic Information from Speech

    CERN Document Server

    Tur, Gokhan

    2011-01-01

    Spoken language understanding (SLU) is an emerging field in between speech and language processing, investigating human/ machine and human/ human communication by leveraging technologies from signal processing, pattern recognition, machine learning and artificial intelligence. SLU systems are designed to extract the meaning from speech utterances and its applications are vast, from voice search in mobile devices to meeting summarization, attracting interest from both commercial and academic sectors. Both human/machine and human/human communications can benefit from the application of SLU, usin

  9. On-Line Syntax: Thoughts on the Temporality of Spoken Language

    Science.gov (United States)

    Auer, Peter

    2009-01-01

    One fundamental difference between spoken and written language has to do with the "linearity" of speaking in time, in that the temporal structure of speaking is inherently the outcome of an interactive process between speaker and listener. But despite the status of "linearity" as one of Saussure's fundamental principles, in practice little more…

  10. Inferring Speaker Affect in Spoken Natural Language Communication

    OpenAIRE

    Pon-Barry, Heather Roberta

    2012-01-01

    The field of spoken language processing is concerned with creating computer programs that can understand human speech and produce human-like speech. Regarding the problem of understanding human speech, there is currently growing interest in moving beyond speech recognition (the task of transcribing the words in an audio stream) and towards machine listening—interpreting the full spectrum of information in an audio stream. One part of machine listening, the problem that this thesis focuses on, ...

  11. Development of lexical-semantic language system: N400 priming effect for spoken words in 18- and 24-month old children.

    Science.gov (United States)

    Rämä, Pia; Sirri, Louah; Serres, Josette

    2013-04-01

    Our aim was to investigate whether the developing language system, as measured by a priming task for spoken words, is organized by semantic categories. Event-related potentials (ERPs) were recorded during a priming task for spoken words in 18- and 24-month-old monolingual French-learning children. Spoken word pairs were either semantically related (e.g., train-bike) or unrelated (e.g., chicken-bike). The results showed that the N400-like priming effect occurred in 24-month-olds over the right parietal-occipital recording sites. In 18-month-olds, the effect was observed similarly to that in 24-month-olds, but only in those children with higher word production ability. The results suggest that words are categorically organized in the mental lexicon of children at the age of 2 years, and even earlier in children with a high vocabulary. Copyright © 2013 Elsevier Inc. All rights reserved.

  12. Rethinking spoken fluency

    OpenAIRE

    McCarthy, Michael

    2009-01-01

    This article re-examines the notion of spoken fluency. Fluent and fluency are terms commonly used in everyday, lay language, and fluency, or lack of it, has social consequences. The article reviews the main approaches to understanding and measuring spoken fluency, suggests that spoken fluency is best understood as an interactive achievement, and offers the metaphor of 'confluence' to replace the term fluency. Many measures of spoken fluency are internal and monologue-based, whereas evidence...

  13. THE IMPLEMENTATION OF COMMUNICATIVE LANGUAGE TEACHING (CLT TO TEACH SPOKEN RECOUNTS IN SENIOR HIGH SCHOOL

    Directory of Open Access Journals (Sweden)

    Eri Rusnawati

    2016-10-01

    Full Text Available The aim of this study was to describe the implementation of the Communicative Language Teaching (CLT) method for teaching spoken recounts. The study examined qualitative data, describing phenomena that occurred in the classroom. The data comprised the behavior and responses of students learning spoken recounts through the CLT method. The subjects were the 34 students of class X of SMA Negeri 1 Kuaro. Observation and interviews were conducted to collect data on teaching spoken recounts through three activities (presentation, role-play, and carrying out procedures). Among the findings was that CLT improved the students' speaking ability in learning recounts. Based on the improvement charts, it is concluded that the students' grammar, vocabulary, pronunciation, fluency, and performance improved, meaning that their spoken recount performance increased. Had the presentation been placed at the end of the sequence of activities, the students' spoken recount performance would have been even better. In conclusion, the implementation of the CLT method and its three practices contributed to the improvement of the students' speaking ability in learning recounts, and the CLT method moreover led them to have the courage to construct meaningful communication with confidence. Keywords: Communicative Language Teaching (CLT), recount, speaking, student response

  14. Effects of early auditory experience on the spoken language of deaf children at 3 years of age.

    Science.gov (United States)

    Nicholas, Johanna Grant; Geers, Ann E

    2006-06-01

    By age 3, typically developing children have achieved extensive vocabulary and syntax skills that facilitate both cognitive and social development. Substantial delays in spoken language acquisition have been documented for children with severe to profound deafness, even those with auditory oral training and early hearing aid use. This study documents the spoken language skills achieved by orally educated 3-yr-olds whose profound hearing loss was identified and hearing aids fitted between 1 and 30 mo of age and who received a cochlear implant between 12 and 38 mo of age. The purpose of the analysis was to examine the effects of age, duration, and type of early auditory experience on spoken language competence at age 3.5 yr. The spoken language skills of 76 children who had used a cochlear implant for at least 7 mo were evaluated via standardized 30-minute language sample analysis, a parent-completed vocabulary checklist, and a teacher language-rating scale. The children were recruited from and enrolled in oral education programs or therapy practices across the United States. Inclusion criteria included presumed deaf since birth, English the primary language of the home, no other known conditions that interfere with speech/language development, enrolled in programs using oral education methods, and no known problems with the cochlear implant lasting more than 30 days. Strong correlations were obtained among all language measures. Therefore, principal components analysis was used to derive a single Language Factor score for each child. 
A number of possible predictors of language outcome were examined, including age at identification and intervention with a hearing aid, duration of use of a hearing aid, pre-implant pure-tone average (PTA) threshold with a hearing aid, PTA threshold with a cochlear implant, and duration of use of a cochlear implant/age at implantation (the last two variables were practically identical because all children were tested between 40 and 44
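The single Language Factor derived above from several strongly correlated measures can be illustrated with a first-principal-component score. This is a minimal sketch with invented standardized scores, not the authors' data or exact procedure:

```python
import numpy as np

# Hypothetical standardized scores for six children on three language
# measures (language sample analysis, vocabulary checklist, teacher rating).
scores = np.array([
    [0.5, 0.7, 0.4],
    [-1.2, -0.9, -1.1],
    [0.1, 0.3, -0.2],
    [1.4, 1.1, 1.3],
    [-0.6, -0.4, -0.8],
    [0.9, 0.6, 1.0],
])

# Center each measure, then project onto the first principal component
# (the direction of greatest shared variance), yielding one composite
# "Language Factor" score per child.
centered = scores - scores.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
factor = centered @ vt[0]
print(factor)
```

Because the projection is applied to centered data, the resulting factor scores sum to zero; each child's score summarizes their standing across all three correlated measures at once.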

  15. Foreign Language Tutoring in Oral Conversations Using Spoken Dialog Systems

    Science.gov (United States)

    Lee, Sungjin; Noh, Hyungjong; Lee, Jonghoon; Lee, Kyusong; Lee, Gary Geunbae

    Although there have been enormous investments in English education all around the world, the style of English instruction has changed little. Considering the shortcomings of the current teaching-learning methodology, we have been investigating advanced computer-assisted language learning (CALL) systems. This paper aims at summarizing a set of POSTECH approaches, including theories, technologies, systems, and field studies, and providing relevant pointers. On top of state-of-the-art spoken dialog system technologies, a variety of adaptations have been applied to overcome problems caused by the numerous errors and variations naturally produced by non-native speakers. Furthermore, a number of methods have been developed for generating educational feedback that helps learners develop proficiency. Integrating these efforts resulted in intelligent educational robots, Mero and Engkey, and a virtual 3D language learning game, Pomy. To verify the effects of our approaches on students' communicative abilities, we conducted a field study at an elementary school in Korea. The results showed that our CALL approaches can be enjoyable and fruitful activities for students. Although the results of this study bring us a step closer to understanding computer-based education, more studies are needed to consolidate the findings.

  16. Development of Mandarin spoken language after pediatric cochlear implantation.

    Science.gov (United States)

    Li, Bei; Soli, Sigfrid D; Zheng, Yun; Li, Gang; Meng, Zhaoli

    2014-07-01

    The purpose of this study was to evaluate early spoken language development in young Mandarin-speaking children during the first 24 months after cochlear implantation, as measured by receptive and expressive vocabulary growth rates. Growth rates were compared with those of normally hearing children and with growth rates for English-speaking children with cochlear implants. Receptive and expressive vocabularies were measured with the simplified short form (SSF) version of the Mandarin Communicative Development Inventory (MCDI) in a sample of 112 pediatric implant recipients at baseline, 3, 6, 12, and 24 months after implantation. Implant ages ranged from 1 to 5 years. Scores were expressed in terms of normal equivalent ages, allowing normalized vocabulary growth rates to be determined. Scores for English-speaking children were re-expressed in these terms, allowing direct comparisons of Mandarin and English early spoken language development. Vocabulary growth rates during the first 12 months after implantation were similar to those for normally hearing children less than 16 months of age. Comparisons with growth rates for normally hearing children 16-30 months of age showed that the youngest implant age group (1-2 years) had an average growth rate of 0.68 that of normally hearing children; while the middle implant age group (2-3 years) had an average growth rate of 0.65; and the oldest implant age group (>3 years) had an average growth rate of 0.56, significantly less than the other two rates. Growth rates for English-speaking children with cochlear implants were 0.68 in the youngest group, 0.54 in the middle group, and 0.57 in the oldest group. Growth rates in the middle implant age groups for the two languages differed significantly. The SSF version of the MCDI is suitable for assessment of Mandarin language development during the first 24 months after cochlear implantation. Effects of implant age and duration of implantation can be compared directly across
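The normalized growth rates reported above can be read as the slope of normal-equivalent vocabulary age against chronological age. A minimal illustration with invented test values, not the study's data:

```python
def normalized_growth_rate(equiv_ages_months, chron_ages_months):
    """Slope of normal-equivalent vocabulary age vs. chronological age.

    A value of 1.0 means vocabulary grows at the pace of normally
    hearing children; 0.65 means about two-thirds of that pace.
    """
    gain = equiv_ages_months[-1] - equiv_ages_months[0]
    elapsed = chron_ages_months[-1] - chron_ages_months[0]
    return gain / elapsed

# Hypothetical child: normal-equivalent age rises from 14 to 28 months
# while chronological age rises from 30 to 51 months.
rate = normalized_growth_rate([14, 28], [30, 51])
print(round(rate, 2))  # 0.67
```

Expressing both implanted and normally hearing children's vocabularies on the same normal-equivalent-age scale is what allows rates such as 0.68 and 0.56, and rates from different languages, to be compared directly.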

  17. Primary phonological planning units in spoken word production are language-specific: Evidence from an ERP study.

    Science.gov (United States)

    Wang, Jie; Wong, Andus Wing-Kuen; Wang, Suiping; Chen, Hsuan-Chih

    2017-07-19

    It is widely acknowledged in Germanic languages that segments are the primary planning units at the phonological encoding stage of spoken word production. Mixed results, however, have been found in Chinese, and it is still unclear what roles syllables and segments play in planning Chinese spoken word production. In the current study, participants were asked to first prepare and later produce disyllabic Mandarin words upon picture prompts and a response cue while electroencephalogram (EEG) signals were recorded. Each two consecutive pictures implicitly formed a pair of prime and target, whose names shared the same word-initial atonal syllable or the same word-initial segments, or were unrelated in the control conditions. Only syllable repetition induced significant effects on event-related brain potentials (ERPs) after target onset: a widely distributed positivity in the 200- to 400-ms interval and an anterior positivity in the 400- to 600-ms interval. We interpret these to reflect syllable-size representations at the phonological encoding and phonetic encoding stages. Our results provide the first electrophysiological evidence for the distinct role of syllables in producing Mandarin spoken words, supporting a language specificity hypothesis about the primary phonological units in spoken word production.

  18. A Spoken-Language Intervention for School-Aged Boys with Fragile X Syndrome

    Science.gov (United States)

    McDuffie, Andrea; Machalicek, Wendy; Bullard, Lauren; Nelson, Sarah; Mello, Melissa; Tempero-Feigles, Robyn; Castignetti, Nancy; Abbeduto, Leonard

    2016-01-01

    Using a single case design, a parent-mediated spoken-language intervention was delivered to three mothers and their school-aged sons with fragile X syndrome, the leading inherited cause of intellectual disability. The intervention was embedded in the context of shared storytelling using wordless picture books and targeted three empirically derived…

  19. Is spoken Danish less intelligible than Swedish?

    NARCIS (Netherlands)

    Gooskens, Charlotte; van Heuven, Vincent J.; van Bezooijen, Renee; Pacilly, Jos J. A.

    2010-01-01

    The most straightforward way to explain why Danes understand spoken Swedish relatively better than Swedes understand spoken Danish would be that spoken Danish is intrinsically a more difficult language to understand than spoken Swedish. We discuss circumstantial evidence suggesting that Danish is

  20. Spoken Dialogue Systems

    CERN Document Server

    Jokinen, Kristiina

    2009-01-01

    Considerable progress has been made in recent years in the development of dialogue systems that support robust and efficient human-machine interaction using spoken language. Spoken dialogue technology allows various interactive applications to be built and used for practical purposes, and research focuses on issues that aim to increase the system's communicative competence by including aspects of error correction, cooperation, multimodality, and adaptation in context. This book gives a comprehensive view of state-of-the-art techniques that are used to build spoken dialogue systems. It provides

  1. Native Language Spoken as a Risk Marker for Tooth Decay.

    Science.gov (United States)

    Carson, J; Walker, L A; Sanders, B J; Jones, J E; Weddell, J A; Tomlin, A M

    2015-01-01

    The purpose of this study was to assess dmft, the number of decayed, missing (due to caries), and/or filled primary teeth, of English-speaking and non-English-speaking patients of a hospital-based pediatric dental clinic under the age of 72 months to determine if native language is a risk marker for tooth decay. Records from an outpatient dental clinic which met the inclusion criteria were reviewed. Patient demographics and dmft score were recorded, and the patients were separated into three groups by the native language spoken by their parents: English, Spanish, and all other languages. A total of 419 charts were assessed: 253 English-speaking, 126 Spanish-speaking, and 40 other native languages. After accounting for patient characteristics, dmft was significantly higher for the other-language group than for the English-speaking group (p < 0.05). Those patients under 72 months of age whose parents' native language is not English or Spanish have the highest risk for increased dmft when compared to English- and Spanish-speaking patients. Providers should consider taking additional time to educate patients and their parents, in their native language, on the importance of routine dental care and oral hygiene.

  2. The missing foundation in teacher education: Knowledge of the structure of spoken and written language.

    Science.gov (United States)

    Moats, L C

    1994-01-01

    Reading research supports the necessity for directly teaching concepts about linguistic structure to beginning readers and to students with reading and spelling difficulties. In this study, experienced teachers of reading, language arts, and special education were tested to determine if they have the requisite awareness of language elements (e.g., phonemes, morphemes) and of how these elements are represented in writing (e.g., knowledge of sound-symbol correspondences). The results were surprisingly poor, indicating that even motivated and experienced teachers typically understand too little about spoken and written language structure to be able to provide sufficient instruction in these areas. The utility of language structure knowledge for instructional planning, for assessment of student progress, and for remediation of literacy problems is discussed. The teachers participating in the study subsequently took a course focusing on phonemic awareness training, spoken-written language relationships, and careful analysis of spelling and reading behavior in children. At the end of the course, the teachers judged this information to be essential for teaching and advised that it become a prerequisite for certification. Recommendations for requirements and content of teacher education programs are presented.

  3. Neural organization of linguistic short-term memory is sensory modality-dependent: evidence from signed and spoken language.

    Science.gov (United States)

    Pa, Judy; Wilson, Stephen M; Pickell, Herbert; Bellugi, Ursula; Hickok, Gregory

    2008-12-01

    Despite decades of research, there is still disagreement regarding the nature of the information that is maintained in linguistic short-term memory (STM). Some authors argue for abstract phonological codes, whereas others argue for more general sensory traces. We assess these possibilities by investigating linguistic STM in two distinct sensory-motor modalities, spoken and signed language. Hearing bilingual participants (native in English and American Sign Language) performed equivalent STM tasks in both languages during functional magnetic resonance imaging. Distinct, sensory-specific activations were seen during the maintenance phase of the task for spoken versus signed language. These regions have been previously shown to respond to nonlinguistic sensory stimulation, suggesting that linguistic STM tasks recruit sensory-specific networks. However, maintenance-phase activations common to the two languages were also observed, implying some form of common process. We conclude that linguistic STM involves sensory-dependent neural networks, but suggest that sensory-independent neural networks may also exist.

  4. Brain basis of phonological awareness for spoken language in children and its disruption in dyslexia.

    Science.gov (United States)

    Kovelman, Ioulia; Norton, Elizabeth S; Christodoulou, Joanna A; Gaab, Nadine; Lieberman, Daniel A; Triantafyllou, Christina; Wolf, Maryanne; Whitfield-Gabrieli, Susan; Gabrieli, John D E

    2012-04-01

    Phonological awareness, knowledge that speech is composed of syllables and phonemes, is critical for learning to read. Phonological awareness precedes and predicts successful transition from language to literacy, and weakness in phonological awareness is a leading cause of dyslexia, but the brain basis of phonological awareness for spoken language in children is unknown. We used functional magnetic resonance imaging to identify the neural correlates of phonological awareness using an auditory word-rhyming task in children who were typical readers or who had dyslexia (ages 7-13) and a younger group of kindergarteners (ages 5-6). Typically developing children, but not children with dyslexia, recruited left dorsolateral prefrontal cortex (DLPFC) when making explicit phonological judgments. Kindergarteners, who were matched to the older children with dyslexia on standardized tests of phonological awareness, also recruited left DLPFC. Left DLPFC may play a critical role in the development of phonological awareness for spoken language critical for reading and in the etiology of dyslexia.

  5. Phonologic-graphemic transcodifier for Portuguese Language spoken in Brazil (PLB)

    Science.gov (United States)

    Fragadasilva, Francisco Jose; Saotome, Osamu; Deoliveira, Carlos Alberto

    An automatic speech-to-text transformer system, suited to unlimited vocabulary, is presented. The basic acoustic units considered are the allophones of the phonemes of the Portuguese language spoken in Brazil (PLB). The input to the system is a phonetic sequence produced by a prior stage of isolated-word recognition of slowly spoken speech. In a first stage, the system eliminates phonetic elements that do not belong to PLB. Using knowledge sources such as phonetics, phonology, orthography, and a PLB-specific lexicon, the output is a sequence of written words, ordered by a probabilistic criterion, that constitutes the set of graphemic possibilities for that input sequence. Pronunciation differences among some regions of Brazil are considered, but only those that cause differences in the phonological transcription, because those at the phonetic level are absorbed during the transformation to the phonological level. In the final stage, all possible written words are analyzed from an orthographic and grammatical point of view to eliminate the incorrect ones.

  6. Positive Emotional Language in the Final Words Spoken Directly Before Execution.

    Science.gov (United States)

    Hirschmüller, Sarah; Egloff, Boris

    2015-01-01

    How do individuals emotionally cope with the imminent real-world salience of mortality? DeWall and Baumeister as well as Kashdan and colleagues previously provided support that an increased use of positive emotion words serves as a way to protect and defend against mortality salience of one's own contemplated death. Although these studies provide important insights into the psychological dynamics of mortality salience, it remains an open question how individuals cope with the immense threat of mortality prior to their imminent actual death. In the present research, we therefore analyzed positivity in the final words spoken immediately before execution by 407 death row inmates in Texas. By using computerized quantitative text analysis as an objective measure of emotional language use, our results showed that the final words contained a significantly higher proportion of positive than negative emotion words. This emotional positivity was significantly higher than (a) positive emotion word usage base rates in spoken and written materials and (b) positive emotional language use with regard to contemplated death and attempted or actual suicide. Additional analyses showed that emotional positivity in final statements was associated with a greater frequency of language use that was indicative of self-references, social orientation, and present-oriented time focus as well as with fewer instances of cognitive-processing, past-oriented, and death-related word use. Taken together, our findings offer new insights into how individuals cope with the imminent real-world salience of mortality.
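The computerized text-analysis step described above, measuring the share of positive versus negative emotion words, can be sketched as follows. The tiny word lists here are illustrative stand-ins, not the validated dictionaries (e.g., LIWC) such studies rely on:

```python
import re

# Toy emotion lexicons; real analyses use validated dictionaries.
POSITIVE = {"love", "peace", "thank", "grateful", "hope", "free"}
NEGATIVE = {"hate", "fear", "pain", "sorry", "guilt", "afraid"}

def emotion_proportions(text):
    """Return (positive, negative) emotion-word proportions of a text."""
    words = re.findall(r"[a-z']+", text.lower())
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    total = len(words)
    return pos / total, neg / total

pos, neg = emotion_proportions(
    "I love you all and I thank everyone; I am grateful and at peace."
)
print(pos > neg)  # True
```

Comparing such proportions against base rates from reference corpora is what lets a study conclude that a set of statements is unusually positive.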

  7. Birds, primates, and spoken language origins: behavioral phenotypes and neurobiological substrates.

    Science.gov (United States)

    Petkov, Christopher I; Jarvis, Erich D

    2012-01-01

    Vocal learners such as humans and songbirds can learn to produce elaborate patterns of structurally organized vocalizations, whereas many other vertebrates such as non-human primates and most other bird groups either cannot or do so to a very limited degree. To explain the similarities among humans and vocal-learning birds and the differences with other species, various theories have been proposed. One set of theories comprises motor theories, which underscore the role of the motor system as an evolutionary substrate for vocal production learning. For instance, the motor theory of speech and song perception proposes enhanced auditory perceptual learning of speech in humans and song in birds, which suggests a considerable level of neurobiological specialization. Another, a motor theory of vocal learning origin, proposes that the brain pathways that control the learning and production of song and speech were derived from adjacent motor brain pathways. Another set of theories comprises cognitive theories, which address the interface between cognition and the auditory-vocal domains to support language learning in humans. Here we critically review the behavioral and neurobiological evidence for parallels and differences between the so-called vocal learners and vocal non-learners in the context of motor and cognitive theories. In doing so, we note that, behaviorally, vocal-production learning abilities are more distributed than categorical, as are the auditory-learning abilities of animals. We propose testable hypotheses on the extent of the specializations and cross-species correspondences suggested by motor and cognitive theories. We believe that determining how spoken language evolved is likely to become clearer with concerted efforts in testing comparative data from many non-human animal species.

  8. The Beneficial Role of L1 Spoken Language Skills on Initial L2 Sign Language Learning: Cognitive and Linguistic Predictors of M2L2 Acquisition

    Science.gov (United States)

    Williams, Joshua T.; Darcy, Isabelle; Newman, Sharlene D.

    2017-01-01

    Understanding how language modality (i.e., signed vs. spoken) affects second language outcomes in hearing adults is important both theoretically and pedagogically, as it can determine the specificity of second language (L2) theory and inform how best to teach a language that uses a new modality. The present study investigated which…

  9. Expected Test Scores for Preschoolers with a Cochlear Implant Who Use Spoken Language

    Science.gov (United States)

    Nicholas, Johanna G.; Geers, Ann E.

    2008-01-01

    Purpose: The major purpose of this study was to provide information about expected spoken language skills of preschool-age children who are deaf and who use a cochlear implant. A goal was to provide "benchmarks" against which those skills could be compared, for a given age at implantation. We also examined whether parent-completed…

  10. The Attitudes and Motivation of Children towards Learning Rarely Spoken Foreign Languages: A Case Study from Saudi Arabia

    Science.gov (United States)

    Al-Nofaie, Haifa

    2018-01-01

    This article discusses the attitudes and motivations of two Saudi children learning Japanese as a foreign language (hence JFL), a language which is rarely spoken in the country. Studies regarding children's motivation for learning foreign languages that are not widely spread in their contexts in informal settings are scarce. The aim of the study…

  11. Personality Structure in the Trait Lexicon of Hindi, a Major Language Spoken in India

    NARCIS (Netherlands)

    Singh, Jitendra K.; Misra, Girishwar; De Raad, Boele

    2013-01-01

    The psycho-lexical approach is extended to Hindi, a major language spoken in India. From both the dictionary and from Hindi novels, a huge set of personality descriptors was put together, ultimately reduced to a manageable set of 295 trait terms. Both self and peer ratings were collected on those

  12. Resting-state low-frequency fluctuations reflect individual differences in spoken language learning.

    Science.gov (United States)

    Deng, Zhizhou; Chandrasekaran, Bharath; Wang, Suiping; Wong, Patrick C M

    2016-03-01

    A major challenge in language learning studies is to identify objective, pre-training predictors of success. Variation in the low-frequency fluctuations (LFFs) of spontaneous brain activity measured by resting-state functional magnetic resonance imaging (RS-fMRI) has been found to reflect individual differences in cognitive measures. In the present study, we aimed to investigate the extent to which initial spontaneous brain activity is related to individual differences in spoken language learning. We acquired RS-fMRI data and subsequently trained participants on a sound-to-word learning paradigm in which they learned to use foreign pitch patterns (from Mandarin Chinese) to signal word meaning. We performed amplitude of spontaneous low-frequency fluctuation (ALFF) analysis, graph theory-based analysis, and independent component analysis (ICA) to identify functional components of the LFFs in the resting-state. First, we examined the ALFF as a regional measure and showed that regional ALFFs in the left superior temporal gyrus were positively correlated with learning performance, whereas ALFFs in the default mode network (DMN) regions were negatively correlated with learning performance. Furthermore, the graph theory-based analysis indicated that the degree and local efficiency of the left superior temporal gyrus were positively correlated with learning performance. Finally, the default mode network and several task-positive resting-state networks (RSNs) were identified via the ICA. The "competition" (i.e., negative correlation) between the DMN and the dorsal attention network was negatively correlated with learning performance. Our results demonstrate that a) spontaneous brain activity can predict future language learning outcome without prior hypotheses (e.g., selection of regions of interest--ROIs) and b) both regional dynamics and network-level interactions in the resting brain can account for individual differences in future spoken language learning success
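The regional ALFF measure used above is conventionally the mean amplitude of a voxel's BOLD signal within the low-frequency band (about 0.01-0.08 Hz). A minimal numpy sketch on a synthetic time series, not the study's full preprocessing and standardization pipeline:

```python
import numpy as np

def alff(timeseries, tr=2.0, band=(0.01, 0.08)):
    """Amplitude of low-frequency fluctuation: mean FFT amplitude
    within the given frequency band for one voxel's time series."""
    ts = np.asarray(timeseries, dtype=float)
    ts = ts - ts.mean()
    freqs = np.fft.rfftfreq(len(ts), d=tr)
    amp = np.abs(np.fft.rfft(ts)) / len(ts)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return amp[mask].mean()

# Synthetic voxel: a 0.05 Hz oscillation plus noise should yield a
# larger ALFF than noise alone at the same level.
rng = np.random.default_rng(0)
t = np.arange(240) * 2.0          # 240 volumes, TR = 2 s
slow = np.sin(2 * np.pi * 0.05 * t) + 0.1 * rng.standard_normal(240)
noise = 0.1 * rng.standard_normal(240)
print(alff(slow) > alff(noise))  # True
```

Correlating such per-region values with later learning scores across participants is what yields the positive and negative brain-behavior associations the abstract reports.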


  14. Spoken language achieves robustness and evolvability by exploiting degeneracy and neutrality.

    Science.gov (United States)

    Winter, Bodo

    2014-10-01

    As with biological systems, spoken languages are strikingly robust against perturbations. This paper shows that languages achieve robustness in a way that is highly similar to many biological systems. For example, speech sounds are encoded via multiple acoustically diverse, temporally distributed and functionally redundant cues, characteristics that bear similarities to what biologists call "degeneracy". Speech is furthermore adequately characterized by neutrality, with many different tongue configurations leading to similar acoustic outputs, and different acoustic variants understood as the same by recipients. This highlights the presence of a large neutral network of acoustic neighbors for every speech sound. Such neutrality ensures that a steady backdrop of variation can be maintained without impeding communication, assuring that there is "fodder" for subsequent evolution. Thus, studying linguistic robustness is not only important for understanding how linguistic systems maintain their functioning upon the background of noise, but also for understanding the preconditions for language evolution. © 2014 WILEY Periodicals, Inc.

  15. Code-switched English pronunciation modeling for Swahili spoken term detection

    CSIR Research Space (South Africa)

    Kleynhans, N

    2016-05-01

    Full Text Available Computer Science 81 (2016) 128-135; 5th Workshop on Spoken Language Technology for Under-resourced Languages, SLTU 2016, 9-12 May 2016, Yogyakarta, Indonesia.

  16. EVALUATIVE LANGUAGE IN SPOKEN AND SIGNED STORIES TOLD BY A DEAF CHILD WITH A COCHLEAR IMPLANT: WORDS, SIGNS OR PARALINGUISTIC EXPRESSIONS?

    Directory of Open Access Journals (Sweden)

    Ritva Takkinen

    2011-01-01

    Full Text Available In this paper the use and quality of the evaluative language produced by a bilingual child in a story-telling situation is analysed. The subject, an 11-year-old Finnish boy, Jimmy, is bilingual in Finnish Sign Language (FinSL) and spoken Finnish. He was born deaf but got a cochlear implant at the age of five. The data consist of a spoken and a signed version of "The Frog Story". The analysis shows that evaluative devices and expressions differ in the spoken and signed stories told by the child. In his Finnish story he uses mostly lexical devices: comments on a character and the character's actions, as well as quoted speech occasionally combined with prosodic features. In his FinSL story he uses both lexical and paralinguistic devices in a balanced way.

  17. Language Outcomes in Deaf or Hard of Hearing Teenagers Who Are Spoken Language Users: Effects of Universal Newborn Hearing Screening and Early Confirmation.

    Science.gov (United States)

    Pimperton, Hannah; Kreppner, Jana; Mahon, Merle; Stevenson, Jim; Terlektsi, Emmanouela; Worsfold, Sarah; Yuen, Ho Ming; Kennedy, Colin R

    This study aimed to examine whether (a) exposure to universal newborn hearing screening (UNHS) and (b) early confirmation of hearing loss were associated with benefits to expressive and receptive language outcomes in the teenage years for a cohort of spoken language users. It also aimed to determine whether either of these two variables was associated with benefits to relative language gain from middle childhood to adolescence within this cohort. The participants were drawn from a prospective cohort study of a population sample of children with bilateral permanent childhood hearing loss, who varied in their exposure to UNHS and who had previously had their language skills assessed at 6-10 years. Sixty deaf or hard of hearing teenagers who were spoken language users and a comparison group of 38 teenagers with normal hearing completed standardized measures of their receptive and expressive language ability at 13-19 years. Teenagers exposed to UNHS did not show significantly better expressive (adjusted mean difference, 0.40; 95% confidence interval [CI], -0.26 to 1.05; d = 0.32) or receptive (adjusted mean difference, 0.68; 95% CI, -0.56 to 1.93; d = 0.28) language skills than those who were not. Those who had their hearing loss confirmed by 9 months of age did not show significantly better expressive (adjusted mean difference, 0.43; 95% CI, -0.20 to 1.05; d = 0.35) or receptive (adjusted mean difference, 0.95; 95% CI, -0.22 to 2.11; d = 0.42) language skills than those who had it confirmed later. In all cases, effect sizes were small and in favor of those exposed to UNHS or confirmed by 9 months. Subgroup analysis indicated larger beneficial effects of early confirmation for those deaf or hard of hearing teenagers without cochlear implants (N = 48; 80% of the sample), and these benefits were significant in the case of receptive language outcomes (adjusted mean difference, 1.55; 95% CI, 0.38 to 2.71; d = 0.78). 
Exposure to UNHS did not account for significant

  18. "We communicated that way for a reason": language practices and language ideologies among hearing adults whose parents are deaf.

    Science.gov (United States)

    Pizer, Ginger; Walters, Keith; Meier, Richard P

    2013-01-01

    Families with deaf parents and hearing children are often bilingual and bimodal, with both a spoken language and a signed one in regular use among family members. When interviewed, 13 American hearing adults with deaf parents reported widely varying language practices, sign language abilities, and social affiliations with Deaf and Hearing communities. Despite this variation, the interviewees' moral judgments of their own and others' communicative behavior suggest that these adults share a language ideology concerning the obligation of all family members to expend effort to overcome potential communication barriers. To our knowledge, such a language ideology is not similarly pervasive among spoken-language bilingual families, raising the question of whether there is something unique about family bimodal bilingualism that imposes different rights and responsibilities on family members than spoken-language family bilingualism does. This ideology unites an otherwise diverse group of interviewees, where each one preemptively denied being a "typical CODA [child of deaf adults]."

  19. Orthographic effects in spoken word recognition: Evidence from Chinese.

    Science.gov (United States)

    Qu, Qingqing; Damian, Markus F

    2017-06-01

    Extensive evidence from alphabetic languages demonstrates a role of orthography in the processing of spoken words. Because alphabetic systems explicitly code speech sounds, such effects are perhaps not surprising. However, it is less clear whether orthographic codes are involuntarily accessed from spoken words in languages with non-alphabetic systems, in which the sound-spelling correspondence is largely arbitrary. We investigated the role of orthography via a semantic relatedness judgment task: native Mandarin speakers judged whether or not spoken word pairs were related in meaning. Word pairs were either semantically related, orthographically related, or unrelated. Results showed that relatedness judgments were made faster for word pairs that were semantically related than for unrelated word pairs. Critically, orthographic overlap on semantically unrelated word pairs induced a significant increase in response latencies. These findings indicate that orthographic information is involuntarily accessed in spoken-word recognition, even in a non-alphabetic language such as Chinese.

  20. Deficits in narrative abilities in child British Sign Language users with specific language impairment.

    Science.gov (United States)

    Herman, Ros; Rowley, Katherine; Mason, Kathryn; Morgan, Gary

    2014-01-01

    This study details the first ever investigation of narrative skills in a group of 17 deaf signing children who have been diagnosed with disorders in their British Sign Language development, compared with a control group of 17 deaf child signers matched for age, gender, education, quantity and quality of language exposure, and non-verbal intelligence. Children were asked to generate a narrative based on events in a language-free video. Narratives were analysed for global structure, information content, and local-level grammatical devices, especially verb morphology. The language-impaired group produced shorter, less structured and grammatically simpler narratives than controls, with verb morphology particularly impaired. Despite major differences in how sign and spoken languages are articulated, narrative is shown to be a reliable marker of language impairment across modality boundaries. © 2014 Royal College of Speech and Language Therapists.

  1. "Now We Have Spoken."

    Science.gov (United States)

    Zimmer, Patricia Moore

    2001-01-01

    Describes the author's experiences directing a play translated into and performed in Korean. Notes that she had to become familiar with the sound of the language as spoken fluently, to see how an actor's thought is discerned when the verbal language is not understood. Concludes that so much of understanding and communication unfolds in ways other than with…

  2. Prosodic Awareness and Punctuation Ability in Adult Readers

    Science.gov (United States)

    Heggie, Lindsay; Wade-Woolley, Lesly

    2018-01-01

    We examined the relationship between two metalinguistic tasks: prosodic awareness and punctuation ability. Specifically, we investigated whether adults' ability to punctuate was related to the degree to which they are aware of and able to manipulate prosody in spoken language. English-speaking adult readers (n = 115) were administered a receptive…

  3. Developing and Testing EVALOE: A Tool for Assessing Spoken Language Teaching and Learning in the Classroom

    Science.gov (United States)

    Gràcia, Marta; Vega, Fàtima; Galván-Bovaira, Maria José

    2015-01-01

    Broadly speaking, the teaching of spoken language in Spanish schools has not been approached in a systematic way. Changes in school practices are needed in order to allow all children to become competent speakers and to understand and construct oral texts that are appropriate in different contexts and for different audiences both inside and…

  4. How Does the Linguistic Distance between Spoken and Standard Language in Arabic Affect Recall and Recognition Performances during Verbal Memory Examination

    Science.gov (United States)

    Taha, Haitham

    2017-01-01

    The current research examined how Arabic diglossia affects verbal learning memory. Thirty native Arab college students were tested using auditory verbal memory test that was adapted according to the Rey Auditory Verbal Learning Test and developed in three versions: Pure spoken language version (SL), pure standard language version (SA), and…

  5. Narrative skills in deaf children who use spoken English: Dissociations between macro and microstructural devices.

    Science.gov (United States)

    Jones, A C; Toscano, E; Botting, N; Marshall, C R; Atkinson, J R; Denmark, T; Herman, R; Morgan, G

    2016-12-01

    Previous research has highlighted that deaf children acquiring spoken English have difficulties in narrative development relative to their hearing peers, both in terms of macro-structure and with micro-structural devices. The majority of previous research focused on narrative tasks designed for hearing children that depend on good receptive language skills. The current study compared narratives of 6- to 11-year-old deaf children who use spoken English (N=59) with those of hearing peers matched for age and non-verbal intelligence. To examine the role of general language abilities, single word vocabulary was also assessed. Narratives were elicited by the retelling of a story presented non-verbally in video format. Results showed that deaf and hearing children had equivalent macro-structure skills, but the deaf group showed poorer performance on micro-structural components. Furthermore, the deaf group gave less detailed responses to inferencing probe questions, indicating poorer understanding of the story's underlying message. For deaf children, micro-level devices most strongly correlated with the vocabulary measure. These findings suggest that deaf children, despite spoken language delays, are able to convey the main elements of content and structure in narrative but have greater difficulty in using grammatical devices more dependent on finer linguistic and pragmatic skills. Crown Copyright © 2016. Published by Elsevier Ltd. All rights reserved.

  6. The effect of written text on comprehension of spoken English as a foreign language.

    Science.gov (United States)

    Diao, Yali; Chandler, Paul; Sweller, John

    2007-01-01

    Based on cognitive load theory, this study investigated the effect of simultaneous written presentations on comprehension of spoken English as a foreign language. Learners' language comprehension was compared while they used 3 instructional formats: listening with auditory materials only, listening with a full, written script, and listening with simultaneous subtitled text. Listening with the presence of a script and subtitles led to better understanding of the scripted and subtitled passage but poorer performance on a subsequent auditory passage than listening with the auditory materials only. These findings indicated that where the intention was learning to listen, the use of a full script or subtitles had detrimental effects on the construction and automation of listening comprehension schemas.

  7. Word frequencies in written and spoken English based on the British National Corpus

    CERN Document Server

    Leech, Geoffrey; Wilson, Andrew (all of Lancaster University)

    2014-01-01

    Word Frequencies in Written and Spoken English is a landmark volume in the development of vocabulary frequency studies. Whereas previous books have in general given frequency information about the written language only, this book provides information on both speech and writing. It not only gives information about the language as a whole, but also about the differences between spoken and written English, and between different spoken and written varieties of the language. The frequencies are derived from a wide-ranging and up-to-date corpus of English: the British National Corpus.

  8. 45 CFR 1616.7 - Language ability.

    Science.gov (United States)

    2010-10-01

    In areas where a significant number of clients speak a language other than English as their principal language, a recipient shall adopt employment policies that insure that legal...

  9. Emergent Literacy Skills in Preschool Children with Hearing Loss Who Use Spoken Language: Initial Findings from the Early Language and Literacy Acquisition (ELLA) Study

    Science.gov (United States)

    Werfel, Krystal L.

    2017-01-01

    Purpose: The purpose of this study was to compare change in emergent literacy skills of preschool children with and without hearing loss over a 6-month period. Method: Participants included 19 children with hearing loss and 14 children with normal hearing. Children with hearing loss used amplification and spoken language. Participants completed…

  10. Attentional Capture of Objects Referred to by Spoken Language

    Science.gov (United States)

    Salverda, Anne Pier; Altmann, Gerry T. M.

    2011-01-01

    Participants saw a small number of objects in a visual display and performed a visual detection or visual-discrimination task in the context of task-irrelevant spoken distractors. In each experiment, a visual cue was presented 400 ms after the onset of a spoken word. In experiments 1 and 2, the cue was an isoluminant color change and participants…

  11. Language mastery, narrative abilities and oral expression abilities in ...

    African Journals Online (AJOL)

    The importance of language and language mastery for science learning has been the object of extensive investigation in recent decades, leading to ample recognition. However, specific focus on the role of narrative abilities is still scarce. This work focuses on the relevance of narrative abilities for chemistry learning.

  12. Deaf children attending different school environments: sign language abilities and theory of mind.

    Science.gov (United States)

    Tomasuolo, Elena; Valeri, Giovanni; Di Renzo, Alessio; Pasqualetti, Patrizio; Volterra, Virginia

    2013-01-01

    The present study examined whether full access to sign language as a medium for instruction could influence performance in Theory of Mind (ToM) tasks. Three groups of Italian participants (age range: 6-14 years) participated in the study: Two groups of deaf signing children and one group of hearing-speaking children. The two groups of deaf children differed only in their school environment: One group attended a school with a teaching assistant (TA; Sign Language is offered only by the TA to a single deaf child), and the other group attended a bilingual program (Italian Sign Language and Italian). Linguistic abilities and understanding of false belief were assessed using similar materials and procedures in spoken Italian with hearing children and in Italian Sign Language with deaf children. Deaf children attending the bilingual school performed significantly better than deaf children attending school with the TA in tasks assessing lexical comprehension and ToM, whereas the performance of hearing children was in between that of the two deaf groups. As for lexical production, deaf children attending the bilingual school performed significantly better than the two other groups. No significant differences were found between early and late signers or between children with deaf and hearing parents.

  13. Czech spoken in Bohemia and Moravia

    NARCIS (Netherlands)

    Šimáčková, Š.; Podlipský, V.J.; Chládková, K.

    2012-01-01

    As a western Slavic language of the Indo-European family, Czech is closest to Slovak and Polish. It is spoken as a native language by nearly 10 million people in the Czech Republic (Czech Statistical Office n.d.). About two million people living abroad, mostly in the USA, Canada, Austria, Germany,

  14. SPOKEN BAHASA INDONESIA BY GERMAN STUDENTS

    Directory of Open Access Journals (Sweden)

    I Nengah Sudipa

    2014-11-01

    This article investigates the spoken ability of German students using Bahasa Indonesia (BI). They had studied it for six weeks in the IBSN Program at Udayana University, Bali, Indonesia. The data were collected when the students sat for the mid-term oral test and were further analyzed with reference to the standard usage of BI. The results suggest that most students managed to express several concepts related to (1) LOCATION; (2) TIME; (3) TRANSPORT; (4) PURPOSE; (5) TRANSACTION; (6) IMPRESSION; (7) REASON; (8) FOOD AND BEVERAGE; and (9) NUMBER AND PERSON. The only problem a few students encountered was interference from their own language system, especially in word order.

  15. Impact of cognitive function and dysarthria on spoken language and perceived speech severity in multiple sclerosis

    Science.gov (United States)

    Feenaughty, Lynda

    Correlation analysis revealed moderate relationships between neuropsychological test scores and speech hesitation measures within the MSCI group. Slower information processing and poorer memory were significantly correlated with more silent pauses, and poorer executive function was associated with fewer filled pauses in the Unfamiliar discourse task. Results have both clinical and theoretical implications. Overall, clinicians should exercise caution when interpreting global measures of speech timing and perceptual measures in the absence of information about cognitive ability. Results also have implications for a comprehensive model of spoken language incorporating cognitive, linguistic, and motor variables.

  16. A Mother Tongue Spoken Mainly by Fathers.

    Science.gov (United States)

    Corsetti, Renato

    1996-01-01

    Reviews what is known about Esperanto as a home language and first language. Recorded cases of Esperanto-speaking families are known since 1919, and in nearly all of the approximately 350 families documented, the language is spoken to the children by the father. The data suggests that this "artificial bilingualism" can be as successful…

  17. Spoken Sentence Production in College Students with Dyslexia: Working Memory and Vocabulary Effects

    Science.gov (United States)

    Wiseheart, Rebecca; Altmann, Lori J. P.

    2018-01-01

    Background: Individuals with dyslexia demonstrate syntactic difficulties on tasks of language comprehension, yet little is known about spoken language production in this population. Aims: To investigate whether spoken sentence production in college students with dyslexia is less proficient than in typical readers, and to determine whether group…

  18. A step beyond local observations with a dialog aware bidirectional GRU network for Spoken Language Understanding

    OpenAIRE

    Vukotic , Vedran; Raymond , Christian; Gravier , Guillaume

    2016-01-01

    Architectures of Recurrent Neural Networks (RNNs) have recently become a very popular choice for Spoken Language Understanding (SLU) problems; however, they represent a big family of different architectures that can furthermore be combined to form more complex neural networks. In this work, we compare different recurrent networks, such as simple Recurrent Neural Networks (RNNs), Long Short-Term Memory (LSTM) networks, Gated Recurrent Units (GRUs) and their bidirectional versions,...

  19. A randomized trial comparison of the effects of verbal and pictorial naturalistic communication strategies on spoken language for young children with autism.

    Science.gov (United States)

    Schreibman, Laura; Stahmer, Aubyn C

    2014-05-01

    Presently there is no consensus on the specific behavioral treatment of choice for targeting language in young nonverbal children with autism. This randomized clinical trial compared the effectiveness of a verbally-based intervention, Pivotal Response Training (PRT) to a pictorially-based behavioral intervention, the Picture Exchange Communication System (PECS) on the acquisition of spoken language by young (2-4 years), nonverbal or minimally verbal (≤9 words) children with autism. Thirty-nine children were randomly assigned to either the PRT or PECS condition. Participants received on average 247 h of intervention across 23 weeks. Dependent measures included overall communication, expressive vocabulary, pictorial communication and parent satisfaction. Children in both intervention groups demonstrated increases in spoken language skills, with no significant difference between the two conditions. Seventy-eight percent of all children exited the program with more than 10 functional words. Parents were very satisfied with both programs but indicated PECS was more difficult to implement.

  20. Social inclusion for children with hearing loss in listening and spoken Language early intervention: an exploratory study.

    Science.gov (United States)

    Constantinescu-Sharpe, Gabriella; Phillips, Rebecca L; Davis, Aleisha; Dornan, Dimity; Hogan, Anthony

    2017-03-14

    Social inclusion is a common focus of listening and spoken language (LSL) early intervention for children with hearing loss. This exploratory study compared the social inclusion of young children with hearing loss educated using a listening and spoken language approach with population data. A framework for understanding the scope of social inclusion is presented in the Background. This framework guided the use of a shortened, modified version of the Longitudinal Study of Australian Children (LSAC) to measure two of the five facets of social inclusion ('education' and 'interacting with society and fulfilling social roles'). The survey was completed by parents of children with hearing loss aged 4-5 years who were educated using a LSL approach (n = 78; 37% response rate). These responses were compared to those obtained for typical hearing children in the LSAC dataset (n = 3265). Analyses revealed that most children with hearing loss had comparable outcomes to those with typical hearing on the 'education' and 'interacting with society and fulfilling social roles' facets of social inclusion. These exploratory findings are positive and warrant further investigation across all five facets of the framework to identify which factors influence social inclusion.

  1. Orthographic Facilitation in Chinese Spoken Word Recognition: An ERP Study

    Science.gov (United States)

    Zou, Lijuan; Desroches, Amy S.; Liu, Youyi; Xia, Zhichao; Shu, Hua

    2012-01-01

    Orthographic influences in spoken word recognition have been previously examined in alphabetic languages. However, it is unknown whether orthographic information affects spoken word recognition in Chinese, which has a clean dissociation between orthography (O) and phonology (P). The present study investigated orthographic effects using event…

  2. Novel Spoken Word Learning in Adults with Developmental Dyslexia

    Science.gov (United States)

    Conner, Peggy S.

    2013-01-01

    A high percentage of individuals with dyslexia struggle to learn unfamiliar spoken words, creating a significant obstacle to foreign language learning after early childhood. The origin of spoken-word learning difficulties in this population, generally thought to be related to the underlying literacy deficit, is not well defined (e.g., Di Betta…

  3. Evaluating the spoken English proficiency of graduates of foreign medical schools.

    Science.gov (United States)

    Boulet, J R; van Zanten, M; McKinley, D W; Gary, N E

    2001-08-01

    The purpose of this study was to gather additional evidence for the validity and reliability of spoken English proficiency ratings provided by trained standardized patients (SPs) in a high-stakes clinical skills examination. Over 2500 candidates who took the Educational Commission for Foreign Medical Graduates' (ECFMG) Clinical Skills Assessment (CSA) were studied. The CSA consists of 10 or 11 timed clinical encounters. Standardized patients evaluate spoken English proficiency and interpersonal skills in every encounter. Generalizability theory was used to estimate the consistency of spoken English ratings. Validity coefficients were calculated by correlating summary English ratings with CSA scores and other external criterion measures. Mean spoken English ratings were also compared by various candidate background variables. The reliability of the spoken English ratings, based on 10 independent evaluations, was high. The magnitudes of the associated variance components indicated that the evaluation of a candidate's spoken English proficiency is unlikely to be affected by the choice of cases or SPs used in a given assessment. Proficiency in spoken English was related to native language (English versus other) and scores from the Test of English as a Foreign Language (TOEFL). The pattern of the relationships, both within assessment components and with external criterion measures, suggests that valid measures of spoken English proficiency are obtained. This result, combined with the high reproducibility of the ratings over encounters and SPs, supports the use of trained SPs to measure spoken English skills in a simulated medical environment.

  4. Music and Early Language Acquisition

    Science.gov (United States)

    Brandt, Anthony; Gebrian, Molly; Slevc, L. Robert

    2012-01-01

    Language is typically viewed as fundamental to human intelligence. Music, while recognized as a human universal, is often treated as an ancillary ability – one dependent on or derivative of language. In contrast, we argue that it is more productive from a developmental perspective to describe spoken language as a special type of music. A review of existing studies presents a compelling case that musical hearing and ability is essential to language acquisition. In addition, we challenge the prevailing view that music cognition matures more slowly than language and is more difficult; instead, we argue that music learning matches the speed and effort of language acquisition. We conclude that music merits a central place in our understanding of human development. PMID:22973254

  5. Phonological Analysis of University Students’ Spoken Discourse

    Directory of Open Access Journals (Sweden)

    Clara Herlina

    2011-04-01

    The study of discourse is the study of language in actual use. In this article, the writer investigates the phonological features, both segmental and supra-segmental, in the spoken discourse of Indonesian university students. The data were taken from recordings of 15 conversations by 30 students of Bina Nusantara University taking the English Entrant subject (TOEFL iBT). The writer concludes that the students are still influenced by their first language in their spoken discourse, resulting in English with an Indonesian accent. Even though this does not cause misunderstanding at the moment, it may become problematic if they have to communicate in the real world.

  6. Bilinguals Show Weaker Lexical Access during Spoken Sentence Comprehension

    Science.gov (United States)

    Shook, Anthony; Goldrick, Matthew; Engstler, Caroline; Marian, Viorica

    2015-01-01

    When bilinguals process written language, they show delays in accessing lexical items relative to monolinguals. The present study investigated whether this effect extended to spoken language comprehension, examining the processing of sentences with either low or high semantic constraint in both first and second languages. English-German…

  7. Use of spoken and written Japanese did not protect Japanese-American men from cognitive decline in late life.

    Science.gov (United States)

    Crane, Paul K; Gruhl, Jonathan C; Erosheva, Elena A; Gibbons, Laura E; McCurry, Susan M; Rhoads, Kristoffer; Nguyen, Viet; Arani, Keerthi; Masaki, Kamal; White, Lon

    2010-11-01

    Spoken bilingualism may be associated with cognitive reserve. Mastering a complicated written language may be associated with additional reserve. We sought to determine if midlife use of spoken and written Japanese was associated with lower rates of late life cognitive decline. Participants were second-generation Japanese-American men from the Hawaiian island of Oahu, born 1900-1919, free of dementia in 1991, and categorized based on midlife self-reported use of spoken and written Japanese (total n included in primary analysis = 2,520). Cognitive functioning was measured with the Cognitive Abilities Screening Instrument scored using item response theory. We used mixed effects models, controlling for age, income, education, smoking status, apolipoprotein E e4 alleles, and number of study visits. Rates of cognitive decline were not related to use of spoken or written Japanese. This finding was consistent across numerous sensitivity analyses. We did not find evidence to support the hypothesis that multilingualism is associated with cognitive reserve.

  9. Self-Ratings of Spoken Language Dominance: A Multilingual Naming Test (MINT) and Preliminary Norms for Young and Aging Spanish-English Bilinguals

    Science.gov (United States)

    Gollan, Tamar H.; Weissberger, Gali H.; Runnqvist, Elin; Montoya, Rosa I.; Cera, Cynthia M.

    2012-01-01

    This study investigated correspondence between different measures of bilingual language proficiency contrasting self-report, proficiency interview, and picture naming skills. Fifty-two young (Experiment 1) and 20 aging (Experiment 2) Spanish-English bilinguals provided self-ratings of proficiency level, were interviewed for spoken proficiency, and…

  10. Semantic Richness and Word Learning in Children with Hearing Loss Who Are Developing Spoken Language: A Single Case Design Study

    Science.gov (United States)

    Lund, Emily; Douglas, W. Michael; Schuele, C. Melanie

    2015-01-01

    Children with hearing loss who are developing spoken language tend to lag behind children with normal hearing in vocabulary knowledge. Thus, researchers must validate instructional practices that lead to improved vocabulary outcomes for children with hearing loss. The purpose of this study was to investigate how semantic richness of instruction…

  11. How and When Accentuation Influences Temporally Selective Attention and Subsequent Semantic Processing during On-Line Spoken Language Comprehension: An ERP Study

    Science.gov (United States)

    Li, Xiao-qing; Ren, Gui-qin

    2012-01-01

    An event-related brain potentials (ERP) experiment was carried out to investigate how and when accentuation influences temporally selective attention and subsequent semantic processing during on-line spoken language comprehension, and how the effect of accentuation on attention allocation and semantic processing changed with the degree of…

  12. Audiovisual spoken word recognition as a clinical criterion for sensory aids efficiency in Persian-language children with hearing loss.

    Science.gov (United States)

    Oryadi-Zanjani, Mohammad Majid; Vahab, Maryam; Bazrafkan, Mozhdeh; Haghjoo, Asghar

    2015-12-01

    The aim of this study was to examine the role of audiovisual speech recognition as a clinical criterion of cochlear implant or hearing aid efficiency in Persian-language children with severe-to-profound hearing loss. This research was administered as a cross-sectional study. The sample comprised 60 Persian-speaking 5- to 7-year-old children. The assessment tool was one of the subtests of the Persian version of the Test of Language Development-Primary 3. The study included two experiments: auditory-only and audiovisual presentation conditions. The test was a closed set of 30 words presented orally by a speech-language pathologist. The scores for audiovisual word perception were significantly higher than those for the auditory-only condition in the children with normal hearing (P < 0.05), whereas no such significant difference was found between the auditory-only and audiovisual presentation conditions (P > 0.05). Audiovisual spoken word recognition can be applied as a clinical criterion to assess children with severe-to-profound hearing loss in order to determine whether a cochlear implant or hearing aid has been effective for them; i.e., if a child with hearing impairment who uses a CI or HA can obtain higher scores in audiovisual spoken word recognition than in the auditory-only condition, his/her auditory skills have developed appropriately thanks to an effective CI or HA, one of the main factors of auditory habilitation. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  13. Sign language: an international handbook

    NARCIS (Netherlands)

    Pfau, R.; Steinbach, M.; Woll, B.

    2012-01-01

    Sign language linguists show here that all the questions relevant to the linguistic investigation of spoken languages can be asked about sign languages. Conversely, questions that sign language linguists consider - even if spoken language researchers have not asked them yet - should also be asked of spoken languages.

  14. Activating gender stereotypes during online spoken language processing: evidence from Visual World Eye Tracking.

    Science.gov (United States)

    Pyykkönen, Pirita; Hyönä, Jukka; van Gompel, Roger P G

    2010-01-01

    This study used the visual world eye-tracking method to investigate activation of general world knowledge related to gender-stereotypical role names in online spoken language comprehension in Finnish. The results showed that listeners activated gender stereotypes elaboratively in story contexts where this information was not needed to build coherence. Furthermore, listeners made additional inferences based on gender stereotypes to revise an already established coherence relation. Both results are consistent with mental models theory (e.g., Garnham, 2001). They are harder to explain by the minimalist account (McKoon & Ratcliff, 1992) which suggests that people limit inferences to those needed to establish coherence in discourse.

  15. User-Centred Design for Chinese-Oriented Spoken English Learning System

    Science.gov (United States)

    Yu, Ping; Pan, Yingxin; Li, Chen; Zhang, Zengxiu; Shi, Qin; Chu, Wenpei; Liu, Mingzhuo; Zhu, Zhiting

    2016-01-01

    Oral production is an important part in English learning. Lack of a language environment with efficient instruction and feedback is a big issue for non-native speakers' English spoken skill improvement. A computer-assisted language learning system can provide many potential benefits to language learners. It allows adequate instructions and instant…

  16. Biomechanically Preferred Consonant-Vowel Combinations Fail to Appear in Adult Spoken Corpora

    Science.gov (United States)

    Whalen, D. H.; Giulivi, Sara; Nam, Hosung; Levitt, Andrea G.; Hallé, Pierre; Goldstein, Louis M.

    2012-01-01

    Certain consonant/vowel (CV) combinations are more frequent than would be expected from the individual C and V frequencies alone, both in babbling and, to a lesser extent, in adult language, based on dictionary counts: Labial consonants co-occur with central vowels more often than chance would dictate; coronals co-occur with front vowels, and velars with back vowels (Davis & MacNeilage, 1994). Plausible biomechanical explanations have been proposed, but it is also possible that infants are mirroring the frequency of the CVs that they hear. As noted, previous assessments of adult language were based on dictionaries; these “type” counts are incommensurate with the babbling measures, which are necessarily “token” counts. We analyzed the tokens in two spoken corpora for English, two for French and one for Mandarin. We found that the adult spoken CV preferences correlated with the type counts for Mandarin and French, not for English. Correlations between the adult spoken corpora and the babbling results had all three possible outcomes: significantly positive (French), uncorrelated (Mandarin), and significantly negative (English). There were no correlations of the dictionary data with the babbling results when we consider all nine combinations of consonants and vowels. The results indicate that spoken frequencies of CV combinations can differ from dictionary (type) counts and that the CV preferences apparent in babbling are biomechanically driven and can ignore the frequencies of CVs in the ambient spoken language. PMID:23420980

  17. Foreign body aspiration and language spoken at home: 10-year review.

    Science.gov (United States)

    Choroomi, S; Curotta, J

    2011-07-01

    To review foreign body aspiration cases encountered over a 10-year period in a tertiary paediatric hospital, and to assess correlation between foreign body type and language spoken at home. Retrospective chart review of all children undergoing direct laryngobronchoscopy for foreign body aspiration over a 10-year period. Age, sex, foreign body type, complications, hospital stay and home language were analysed. At direct laryngobronchoscopy, 132 children had foreign body aspiration (male:female ratio 1.31:1; mean age 32 months (2.67 years)). Mean hospital stay was 2.0 days. Foreign bodies most commonly comprised food matter (53/132; 40.1 per cent), followed by non-food matter (44/132; 33.3 per cent), a negative endoscopy (11/132; 8.3 per cent) and unknown composition (24/132; 18.2 per cent). Most parents spoke English (92/132, 69.7 per cent; vs non-English-speaking 40/132, 30.3 per cent), but non-English-speaking patients had disproportionately more food foreign bodies, and significantly more nut aspirations (p = 0.0065). Results constitute level 2b evidence. Patients from non-English speaking backgrounds had a significantly higher incidence of food (particularly nut) aspiration. Awareness-raising and public education is needed in relevant communities to prevent certain foods, particularly nuts, being given to children too young to chew and swallow them adequately.

  18. Host country language ability and expatriate adjustment

    DEFF Research Database (Denmark)

    Selmer, Jan; Lauring, Jakob

    2015-01-01

    countries, one with an easy, relatively simple language and the other with a difficult, highly complex language. Consistent with Goal-Setting Theory, results indicated a relative advantage of expatriates’ language ability in terms of their adjustment in the host country with the difficult language...

  19. How appropriate are the English language test requirements for non-UK-trained nurses? A qualitative study of spoken communication in UK hospitals.

    Science.gov (United States)

    Sedgwick, Carole; Garner, Mark

    2017-06-01

    Non-native speakers of English who hold nursing qualifications from outside the UK are required to provide evidence of English language competence by achieving a minimum overall score of Band 7 on the International English Language Testing System (IELTS) academic test. To describe the English language required to deal with the daily demands of nursing in the UK. To compare these abilities with the stipulated levels on the language test. A tracking study was conducted with 4 nurses, and focus groups with 11 further nurses. The transcripts of the interviews and focus groups were analysed thematically for recurrent themes. These findings were then compared with the requirements of the IELTS spoken test. The study was conducted outside the participants' working shifts in busy London hospitals. The participants in the tracking study were selected opportunistically; all were trained in non-English speaking countries. Snowball sampling was used for the focus groups, of whom 4 were non-native and 7 native speakers of English. In the tracking study, each of the 4 nurses was interviewed on four occasions, outside the workplace, and as close to the end of a shift as possible. They were asked to recount their spoken interactions during the course of their shift. The participants in the focus groups were asked to describe their typical interactions with patients, family members, doctors, and nursing colleagues. They were prompted to recall specific instances of frequently-occurring communication problems. All interactions were audio-recorded, with the participants' permission, and transcribed. Nurses are at the centre of communication for patient care. They have to use appropriate registers to communicate with a range of health professionals, patients and their families. They must elicit information, calm and reassure, instruct, check procedures, ask for and give opinions, agree and disagree. Politeness strategies are needed to avoid threats to face. They participate in medical

  20. LANGUAGE POLICIES PURSUED IN THE AXIS OF OTHERING AND IN THE PROCESS OF CONVERTING SPOKEN LANGUAGE OF TURKS LIVING IN RUSSIA INTO THEIR WRITTEN LANGUAGE / RUSYA'DA YASAYAN TÜRKLERİN KONUSMA DİLLERİNİN YAZI DİLİNE DÖNÜSTÜRÜLME SÜRECİ VE ÖTEKİLESTİRME EKSENİNDE İZLENEN DİL POLİTİKALARI

    Directory of Open Access Journals (Sweden)

    Süleyman Kaan YALÇIN

    2008-12-01

    Full Text Available Language is an object realized in two ways: spoken language and written language. Every language has the characteristics of a spoken language; however, not every language can have the characteristics of a written language, since a language must meet certain requirements to be deemed a written language. These requirements are selection, coding, standardization and becoming widespread. A language must meet them, in either a natural or an artificial way, to be deemed a written (standard) language. Turkish, which developed as a single written language until the 13th century, divided into West Turkish and North-East Turkish by meeting the requirements of a written language in a natural way. Following this separation, and through a natural process, it showed some internal differences; however, the policy of converting the spoken language of each Turkish clan into its own written language, a policy pursued by Russia in a planned way, turned Turkish, which entered the 20th century as a few written languages, into 20 different written languages. The implementation of discriminatory language policies suggested to the Russian government by missionaries such as Slinky and Ostramov, the forcible imposition on each Turkish clan of a Cyrillic alphabet full of different and unnecessary signs, and the othering activities of the Soviet boarding schools all had considerable effects on this process. This study aims to explain that the conversion of the spoken languages of the Turkish societies in Russia into written languages did not result from a natural process; to trace the historical development of the Turkish language, which was shaped into 20 separate written languages only because of the pressure exerted by political will; and to show how Russia subjected the language concept, the memory of a nation, to an artificial process.

  1. Ragnar Rommetveit's Approach to Everyday Spoken Dialogue from Within.

    Science.gov (United States)

    Kowal, Sabine; O'Connell, Daniel C

    2016-04-01

    The following article presents basic concepts and methods of Ragnar Rommetveit's (born 1924) hermeneutic-dialogical approach to everyday spoken dialogue with a focus on both shared consciousness and linguistically mediated meaning. He developed this approach originally in his engagement of mainstream linguistic and psycholinguistic research of the 1960s and 1970s. He criticized this research tradition for its individualistic orientation and its adherence to experimental methodology which did not allow the engagement of interactively established meaning and understanding in everyday spoken dialogue. As a social psychologist influenced by phenomenological philosophy, Rommetveit opted for an alternative conceptualization of such dialogue as a contextualized, partially private world, temporarily co-established by interlocutors on the basis of shared consciousness. He argued that everyday spoken dialogue should be investigated from within, i.e., from the perspectives of the interlocutors and from a psychology of the second person. Hence, he developed his approach with an emphasis on intersubjectivity, perspectivity and perspectival relativity, meaning potential of utterances, and epistemic responsibility of interlocutors. In his methods, he limited himself for the most part to casuistic analyses, i.e., logical analyses of fictitious examples to argue for the plausibility of his approach. After many years of experimental research on language, he pursued his phenomenologically oriented research on dialogue in English-language publications from the late 1980s up to 2003. During that period, he engaged psycholinguistic research on spoken dialogue carried out by Anglo-American colleagues only occasionally. Although his work remained unfinished and open to development, it provides both a challenging alternative and supplement to current Anglo-American research on spoken dialogue and some overlap therewith.

  2. Error detection in spoken human-machine interaction

    NARCIS (Netherlands)

    Krahmer, E.J.; Swerts, M.G.J.; Theune, M.; Weegels, M.F.

    2001-01-01

    Given the state of the art of current language and speech technology, errors are unavoidable in present-day spoken dialogue systems. Therefore, one of the main concerns in dialogue design is how to decide whether or not the system has understood the user correctly. In human-human communication,

  4. Language Development in Children with Language Disorders: An Introduction to Skinner's Verbal Behavior and the Techniques for Initial Language Acquisition

    Science.gov (United States)

    Casey, Laura Baylot; Bicard, David F.

    2009-01-01

    Language development in typically developing children has a very predictable pattern beginning with crying, cooing, babbling, and gestures along with the recognition of spoken words, comprehension of spoken words, and then one word utterances. This predictable pattern breaks down for children with language disorders. This article will discuss…

  5. Modeling Longitudinal Changes in Older Adults’ Memory for Spoken Discourse: Findings from the ACTIVE Cohort

    Science.gov (United States)

    Payne, Brennan R.; Gross, Alden L.; Parisi, Jeanine M.; Sisco, Shannon M.; Stine-Morrow, Elizabeth A. L.; Marsiske, Michael; Rebok, George W.

    2014-01-01

    Episodic memory shows substantial declines with advancing age, but research on longitudinal trajectories of spoken discourse memory (SDM) in older adulthood is limited. Using parallel process latent growth curve models, we examined 10 years of longitudinal data from the no-contact control group (N = 698) of the Advanced Cognitive Training for Independent and Vital Elderly (ACTIVE) randomized controlled trial in order to test (a) the degree to which SDM declines with advancing age, (b) predictors of these age-related declines, and (c) the within-person relationship between longitudinal changes in SDM and longitudinal changes in fluid reasoning and verbal ability over 10 years, independent of age. Individuals who were younger, White, had more years of formal education, were male, and had better global cognitive function and episodic memory performance at baseline demonstrated greater levels of SDM on average. However, only age at baseline uniquely predicted longitudinal changes in SDM, such that declines accelerated with greater age. Independent of age, within-person decline in reasoning ability over the 10-year study period was substantially correlated with decline in SDM (r = .87). An analogous association with SDM did not hold for verbal ability. The findings suggest that longitudinal declines in fluid cognition are associated with reduced spoken language comprehension. Unlike findings from memory for written prose, preserved verbal ability may not protect against developmental declines in memory for speech. PMID:24304364
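A much-simplified, two-stage version of the correlated-change idea above (estimate each person's rate of change on each measure, then correlate those rates) can be sketched as follows; the cited study instead fits parallel process latent growth curve models, and all data below are simulated for illustration.

```python
# Two-stage sketch of correlated change: per-person OLS slopes on two
# longitudinal measures, then a correlation between the slopes.
# Simulated data only; the ACTIVE analysis uses latent growth curve models.
import random
from math import sqrt

def slope(ts, ys):
    """OLS slope of ys on ts."""
    mt, my = sum(ts) / len(ts), sum(ys) / len(ys)
    num = sum((t - mt) * (y - my) for t, y in zip(ts, ys))
    den = sum((t - mt) ** 2 for t in ts)
    return num / den

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

random.seed(42)
waves = [0, 2, 4, 6, 8, 10]                 # years since baseline
memory_slopes, reasoning_slopes = [], []
for _ in range(300):
    shared = random.gauss(0, 1)             # common rate-of-change factor
    s_mem = -0.3 + 0.5 * shared + random.gauss(0, 0.1)
    s_rea = -0.2 + 0.4 * shared + random.gauss(0, 0.1)
    mem = [50 + s_mem * t + random.gauss(0, 0.5) for t in waves]
    rea = [30 + s_rea * t + random.gauss(0, 0.5) for t in waves]
    memory_slopes.append(slope(waves, mem))
    reasoning_slopes.append(slope(waves, rea))

r = pearson(memory_slopes, reasoning_slopes)
print(f"correlation of per-person slopes: r = {r:.2f}")
```

Because the two simulated slopes share a common factor, the correlation of estimated slopes comes out strongly positive, mirroring the reported coupling between decline in reasoning and decline in discourse memory.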

  6. Bilateral versus unilateral cochlear implants in children: a study of spoken language outcomes.

    Science.gov (United States)

    Sarant, Julia; Harris, David; Bennet, Lisa; Bant, Sharyn

    2014-01-01

    Although it has been established that bilateral cochlear implants (CIs) offer additional speech perception and localization benefits to many children with severe to profound hearing loss, whether these improved perceptual abilities facilitate significantly better language development has not yet been clearly established. The aims of this study were to compare language abilities of children having unilateral and bilateral CIs to quantify the rate of any improvement in language attributable to bilateral CIs and to document other predictors of language development in children with CIs. The receptive vocabulary and language development of 91 children was assessed when they were aged either 5 or 8 years old by using the Peabody Picture Vocabulary Test (fourth edition), and either the Preschool Language Scales (fourth edition) or the Clinical Evaluation of Language Fundamentals (fourth edition), respectively. Cognitive ability, parent involvement in children's intervention or education programs, and family reading habits were also evaluated. Language outcomes were examined by using linear regression analyses. The influence of elements of parenting style, child characteristics, and family background as predictors of outcomes were examined. Children using bilateral CIs achieved significantly better vocabulary outcomes and significantly higher scores on the Core and Expressive Language subscales of the Clinical Evaluation of Language Fundamentals (fourth edition) than did comparable children with unilateral CIs. Scores on the Preschool Language Scales (fourth edition) did not differ significantly between children with unilateral and bilateral CIs. Bilateral CI use was found to predict significantly faster rates of vocabulary and language development than unilateral CI use; the magnitude of this effect was moderated by child age at activation of the bilateral CI. In terms of parenting style, high levels of parental involvement, low amounts of screen time, and more time spent
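The regression-with-moderator pattern reported above (bilateral CI use predicting outcomes, with the effect moderated by age at bilateral activation) can be sketched as an ordinary least-squares model with an interaction term. The variable names, effect sizes, and data below are entirely hypothetical, not the study's.

```python
# Sketch: OLS with an interaction (moderation) term on simulated data.
# Effect sizes are illustrative only, not taken from the cited study.
import random

def solve(A, b):
    """Solve A x = b by Gauss-Jordan elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * c for a, c in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

def ols(X, y):
    """Ordinary least squares via the normal equations X'X beta = X'y."""
    k, n = len(X[0]), len(X)
    XtX = [[sum(X[r][i] * X[r][j] for r in range(n)) for j in range(k)] for i in range(k)]
    Xty = [sum(X[r][i] * y[r] for r in range(n)) for i in range(k)]
    return solve(XtX, Xty)

random.seed(7)
data = []
for _ in range(200):
    bil = 1 if random.random() < 0.5 else 0       # bilateral CI user?
    age = random.uniform(6, 36)                   # age at activation (months)
    score = 80 + 12 * bil - 0.2 * age - 0.15 * bil * age + random.gauss(0, 3)
    data.append((bil, age, score))

mean_age = sum(a for _, a, _ in data) / len(data)
# Center age so the bilateral coefficient is the advantage at the mean age.
X = [[1.0, b, a - mean_age, b * (a - mean_age)] for b, a, _ in data]
y = [s for _, _, s in data]
beta = ols(X, y)
print(f"bilateral advantage at mean activation age: {beta[1]:.2f} points")
print(f"change in advantage per month of later activation: {beta[3]:.2f}")
```

A negative interaction coefficient is the moderation effect: the bilateral advantage shrinks as activation happens later.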

  7. Bilateral Versus Unilateral Cochlear Implants in Children: A Study of Spoken Language Outcomes

    Science.gov (United States)

    Harris, David; Bennet, Lisa; Bant, Sharyn

    2014-01-01

    Objectives: Although it has been established that bilateral cochlear implants (CIs) offer additional speech perception and localization benefits to many children with severe to profound hearing loss, whether these improved perceptual abilities facilitate significantly better language development has not yet been clearly established. The aims of this study were to compare language abilities of children having unilateral and bilateral CIs to quantify the rate of any improvement in language attributable to bilateral CIs and to document other predictors of language development in children with CIs. Design: The receptive vocabulary and language development of 91 children was assessed when they were aged either 5 or 8 years old by using the Peabody Picture Vocabulary Test (fourth edition), and either the Preschool Language Scales (fourth edition) or the Clinical Evaluation of Language Fundamentals (fourth edition), respectively. Cognitive ability, parent involvement in children’s intervention or education programs, and family reading habits were also evaluated. Language outcomes were examined by using linear regression analyses. The influence of elements of parenting style, child characteristics, and family background as predictors of outcomes were examined. Results: Children using bilateral CIs achieved significantly better vocabulary outcomes and significantly higher scores on the Core and Expressive Language subscales of the Clinical Evaluation of Language Fundamentals (fourth edition) than did comparable children with unilateral CIs. Scores on the Preschool Language Scales (fourth edition) did not differ significantly between children with unilateral and bilateral CIs. Bilateral CI use was found to predict significantly faster rates of vocabulary and language development than unilateral CI use; the magnitude of this effect was moderated by child age at activation of the bilateral CI. In terms of parenting style, high levels of parental involvement, low amounts of

  8. Semantic and phonological schema influence spoken word learning and overnight consolidation.

    Science.gov (United States)

    Havas, Viktória; Taylor, J. S. H.; Vaquero, Lucía; de Diego-Balaguer, Ruth; Rodríguez-Fornells, Antoni; Davis, Matthew H

    2018-06-01

    We studied the initial acquisition and overnight consolidation of new spoken words that resemble words in the native language (L1) or in an unfamiliar, non-native language (L2). Spanish-speaking participants learned the spoken forms of novel words in their native language (Spanish) or in a different language (Hungarian), which were paired with pictures of familiar or unfamiliar objects, or no picture. We thereby assessed, in a factorial way, the impact of existing knowledge (schema) on word learning by manipulating both semantic (familiar vs unfamiliar objects) and phonological (L1- vs L2-like novel words) familiarity. Participants were trained and tested with a 12-hr intervening period that included overnight sleep or daytime wakefulness. Our results showed (1) benefits of sleep to recognition memory that were greater for words with L2-like phonology and (2) that learned associations with familiar but not unfamiliar pictures enhanced recognition memory for novel words. Implications for complementary systems accounts of word learning are discussed.

  9. Time course of Chinese monosyllabic spoken word recognition: evidence from ERP analyses.

    Science.gov (United States)

    Zhao, Jingjing; Guo, Jingjing; Zhou, Fengying; Shu, Hua

    2011-06-01

    Evidence from event-related potential (ERP) analyses of English spoken words suggests that the time course of English word recognition in monosyllables is cumulative. Different types of phonological competitors (i.e., rhymes and cohorts) modulate the temporal grain of ERP components differentially (Desroches, Newman, & Joanisse, 2009). The time course of Chinese monosyllabic spoken word recognition could be different from that of English due to the differences in syllable structure between the two languages (e.g., lexical tones). The present study investigated the time course of Chinese monosyllabic spoken word recognition using ERPs to record brain responses online while subjects listened to spoken words. During the experiment, participants were asked to compare a target picture with a subsequent picture by judging whether or not these two pictures belonged to the same semantic category. The spoken word was presented between the two pictures, and participants were not required to respond during its presentation. We manipulated phonological competition by presenting spoken words that either matched or mismatched the target picture in one of the following four ways: onset mismatch, rime mismatch, tone mismatch, or syllable mismatch. In contrast to the English findings, our findings showed that the three partial mismatches (onset, rime, and tone mismatches) equally modulated the amplitudes and time courses of the N400 (a negative component that peaks about 400 ms after the spoken word), whereas the syllable-mismatched words elicited an earlier and stronger N400 than the three partially mismatched words. The results shed light on the important role of syllable-level awareness in Chinese spoken word recognition and also imply that the recognition of Chinese monosyllabic words might rely more on global similarity of the whole syllable structure or syllable-based holistic processing rather than phonemic segment-based processing. We interpret the differences in spoken word

  10. A Multilingual Approach to Analysing Standardized Test Results: Immigrant Primary School Children and the Role of Languages Spoken in a Bi-/Multilingual Community

    Science.gov (United States)

    De Angelis, Gessica

    2014-01-01

    The present study adopts a multilingual approach to analysing the standardized test results of primary school immigrant children living in the bi-/multilingual context of South Tyrol, Italy. The standardized test results are from the Invalsi test administered across Italy in 2009/2010. In South Tyrol, several languages are spoken on a daily basis…

  11. Who can communicate with whom? Language experience affects infants' evaluation of others as monolingual or multilingual.

    Science.gov (United States)

    Pitts, Casey E; Onishi, Kristine H; Vouloumanos, Athena

    2015-01-01

    Adults recognize that people can understand more than one language. However, it is unclear whether infants assume other people understand one or multiple languages. We examined whether monolingual and bilingual 20-month-olds expect an unfamiliar person to understand one or more than one language. Two speakers told a listener the location of a hidden object using either the same or two different languages. When different languages were spoken, monolinguals looked longer when the listener searched correctly, bilinguals did not; when the same language was spoken, both groups looked longer for incorrect searches. Infants rely on their prior language experience when evaluating the language abilities of a novel individual. Monolingual infants assume others can understand only one language, although not necessarily the infants' own; bilinguals do not. Infants' assumptions about which community of conventions people belong to may allow them to recognize effective communicative partners and thus opportunities to acquire language, knowledge, and culture. Copyright © 2014 Elsevier B.V. All rights reserved.

  12. Does it really matter whether students' contributions are spoken versus typed in an intelligent tutoring system with natural language?

    Science.gov (United States)

    D'Mello, Sidney K; Dowell, Nia; Graesser, Arthur

    2011-03-01

    An open question is whether learning differs when students speak versus type their responses when interacting with intelligent tutoring systems with natural language dialogues. Theoretical bases exist for three contrasting hypotheses. The speech facilitation hypothesis predicts that spoken input will increase learning, whereas the text facilitation hypothesis predicts typed input will be superior. The modality equivalence hypothesis claims that learning gains will be equivalent. Previous experiments that tested these hypotheses were confounded by automated speech recognition systems with substantial error rates that were detected by learners. We addressed this concern in two experiments via a Wizard of Oz procedure, where a human intercepted the learner's speech and transcribed the utterances before submitting them to the tutor. The overall pattern of the results supported the following conclusions: (1) learning gains associated with spoken and typed input were on par and quantitatively higher than a no-intervention control, (2) participants' evaluations of the session were not influenced by modality, and (3) there were no modality effects associated with differences in prior knowledge and typing proficiency. Although the results generally support the modality equivalence hypothesis, highly motivated learners reported lower cognitive load and demonstrated increased learning when typing compared with speaking. We discuss the implications of our findings for intelligent tutoring systems that can support typed and spoken input.

  13. Ancestry and Language in the United States: November 1979. Current Population Reports, Special Studies. Series P-23. No. 116.

    Science.gov (United States)

    Levin, Michael J.; Sweet, Nancy S.

    Information on the ancestry, languages, and literacy of the U.S. population based on data collected by the Bureau of the Census in 1979 is reported. Items surveyed include ancestry, country of birth of the individual and parents, citizenship, year of immigration, native language, language spoken in the home, ability to speak English, and ability…

  14. Development and Relationships Between Phonological Awareness, Morphological Awareness and Word Reading in Spoken and Standard Arabic

    Directory of Open Access Journals (Sweden)

    Rachel Schiff

    2018-04-01

    Full Text Available This study addressed the development of and the relationship between foundational metalinguistic skills and word reading skills in Arabic. It compared Arabic-speaking children’s phonological awareness (PA), morphological awareness, and voweled and unvoweled word reading skills in spoken and standard language varieties separately in children across five grade levels from childhood to adolescence. Second, it investigated whether skills developed in the spoken variety of Arabic predict reading in the standard variety. Results indicate that although individual differences between students in PA are eliminated toward the end of elementary school in both spoken and standard language varieties, gaps in morphological awareness and in reading skills persisted through junior and high school years. The results also show that the gap in reading accuracy and fluency between Spoken Arabic (SpA) and Standard Arabic (StA) was evident in both voweled and unvoweled words. Finally, regression analyses showed that morphological awareness in SpA contributed to reading fluency in StA, i.e., children’s early morphological awareness in SpA explained variance in children’s gains in reading fluency in StA. These findings have important theoretical and practical contributions for Arabic reading theory in general and they extend the previous work regarding the cross-linguistic relevance of foundational metalinguistic skills in the first acquired language to reading in a second language, as in societal bilingualism contexts, or a second language variety, as in diglossic contexts.

  15. Development and Relationships Between Phonological Awareness, Morphological Awareness and Word Reading in Spoken and Standard Arabic

    Science.gov (United States)

    Schiff, Rachel; Saiegh-Haddad, Elinor

    2018-01-01

    This study addressed the development of and the relationship between foundational metalinguistic skills and word reading skills in Arabic. It compared Arabic-speaking children’s phonological awareness (PA), morphological awareness, and voweled and unvoweled word reading skills in spoken and standard language varieties separately in children across five grade levels from childhood to adolescence. Second, it investigated whether skills developed in the spoken variety of Arabic predict reading in the standard variety. Results indicate that although individual differences between students in PA are eliminated toward the end of elementary school in both spoken and standard language varieties, gaps in morphological awareness and in reading skills persisted through junior and high school years. The results also show that the gap in reading accuracy and fluency between Spoken Arabic (SpA) and Standard Arabic (StA) was evident in both voweled and unvoweled words. Finally, regression analyses showed that morphological awareness in SpA contributed to reading fluency in StA, i.e., children’s early morphological awareness in SpA explained variance in children’s gains in reading fluency in StA. These findings have important theoretical and practical contributions for Arabic reading theory in general and they extend the previous work regarding the cross-linguistic relevance of foundational metalinguistic skills in the first acquired language to reading in a second language, as in societal bilingualism contexts, or a second language variety, as in diglossic contexts. PMID:29686633

  16. Comprehending Implied Meaning in English as a Foreign Language

    Science.gov (United States)

    Taguchi, Naoko

    2005-01-01

    This study investigated whether second language (L2) proficiency affects pragmatic comprehension, namely the ability to comprehend implied meaning in spoken dialogues, in terms of accuracy and speed of comprehension. Participants included 46 native English speakers at a U.S. university and 160 Japanese students of English in a college in Japan who…

  17. The role of early language abilities on math skills among Chinese children.

    Directory of Open Access Journals (Sweden)

    Juan Zhang

    Full Text Available The present study investigated the role of early language abilities in the development of math skills among Chinese K-3 students. About 2000 children in China, who were on average aged 6 years, were assessed for both informal math (e.g., basic number concepts such as counting objects) and formal math (calculations including addition and subtraction) skills, language abilities and nonverbal intelligence. Correlation analysis showed that language abilities were more strongly associated with informal than formal math skills, and regression analyses revealed that children's language abilities could uniquely predict both informal and formal math skills with age, gender, and nonverbal intelligence controlled. Mediation analyses demonstrated that the relationship between children's language abilities and formal math skills was partially mediated by informal math skills. The current findings indicate that (1) children's language abilities are of strong predictive value for both informal and formal math skills; and (2) language abilities impact formal math skills partially through the mediation of informal math skills.
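The mediation logic described above (language ability influencing formal math both directly and through informal math) can be sketched with three regressions: the total effect c, the path a from predictor to mediator, and the direct effect c' and path b from a joint model. All data and effect sizes below are simulated for illustration, not the study's estimates.

```python
# Sketch of a simple mediation analysis (language -> informal math -> formal
# math): path a, path b, direct effect c', and indirect effect a*b.
# Simulated data; effect sizes are illustrative only.
import random

def ols(X, y):
    """OLS via normal equations, solved by Gauss-Jordan elimination."""
    k, n = len(X[0]), len(X)
    M = [[sum(X[r][i] * X[r][j] for r in range(n)) for j in range(k)]
         + [sum(X[r][i] * y[r] for r in range(n))] for i in range(k)]
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(k):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * c for a, c in zip(M[r], M[col])]
    return [M[i][k] / M[i][i] for i in range(k)]

random.seed(1)
lang, informal, formal = [], [], []
for _ in range(500):
    l = random.gauss(0, 1)                        # language ability (z-score)
    i = 0.6 * l + random.gauss(0, 0.5)            # informal math skill
    f = 0.3 * l + 0.5 * i + random.gauss(0, 0.5)  # formal math skill
    lang.append(l); informal.append(i); formal.append(f)

c = ols([[1.0, l] for l in lang], formal)[1]                  # total effect
a = ols([[1.0, l] for l in lang], informal)[1]                # path a
fit = ols([[1.0, l, i] for l, i in zip(lang, informal)], formal)
c_prime, b = fit[1], fit[2]                                   # direct, path b
print(f"total={c:.2f}  direct={c_prime:.2f}  indirect=a*b={a * b:.2f}")
```

Partial mediation shows up as a direct effect c' that is smaller than the total effect c but still nonzero; for OLS the decomposition c = c' + a*b holds exactly.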

  18. Early Sign Language Exposure and Cochlear Implantation Benefits.

    Science.gov (United States)

    Geers, Ann E; Mitchell, Christine M; Warner-Czyz, Andrea; Wang, Nae-Yuh; Eisenberg, Laurie S

    2017-07-01

    Most children with hearing loss who receive cochlear implants (CI) learn spoken language, and parents must choose early on whether to use sign language to accompany speech at home. We address whether parents' use of sign language before and after CI positively influences auditory-only speech recognition, speech intelligibility, spoken language, and reading outcomes. Three groups of children with CIs from a nationwide database who differed in the duration of early sign language exposure provided in their homes were compared in their progress through elementary grades. The groups did not differ in demographic, auditory, or linguistic characteristics before implantation. Children without early sign language exposure achieved better speech recognition skills over the first 3 years postimplant and exhibited a statistically significant advantage in spoken language and reading near the end of elementary grades over children exposed to sign language. Over 70% of children without sign language exposure achieved age-appropriate spoken language compared with only 39% of those exposed for 3 or more years. Early speech perception predicted speech intelligibility in middle elementary grades. Children without sign language exposure produced speech that was more intelligible (mean = 70%) than those exposed to sign language (mean = 51%). This study provides the most compelling support yet available in CI literature for the benefits of spoken language input for promoting verbal development in children implanted by 3 years of age. Contrary to earlier published assertions, there was no advantage to parents' use of sign language either before or after CI. Copyright © 2017 by the American Academy of Pediatrics.

  19. The Road to Language Learning Is Not Entirely Iconic: Iconicity, Neighborhood Density, and Frequency Facilitate Acquisition of Sign Language.

    Science.gov (United States)

    Caselli, Naomi K; Pyers, Jennie E

    2017-07-01

    Iconic mappings between words and their meanings are far more prevalent than once estimated and seem to support children's acquisition of new words, spoken or signed. We asked whether iconicity's prevalence in sign language overshadows two other factors known to support the acquisition of spoken vocabulary: neighborhood density (the number of lexical items phonologically similar to the target) and lexical frequency. Using mixed-effects logistic regressions, we reanalyzed 58 parental reports of native-signing deaf children's productive acquisition of 332 signs in American Sign Language (ASL; Anderson & Reilly, 2002) and found that iconicity, neighborhood density, and lexical frequency independently facilitated vocabulary acquisition. Despite differences in iconicity and phonological structure between signed and spoken language, signing children, like children learning a spoken language, track statistical information about lexical items and their phonological properties and leverage this information to expand their vocabulary.
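As a toy version of the modeling described above, one could fit a logistic regression relating sign acquisition to iconicity, neighborhood density, and lexical frequency. Note the cited study uses mixed-effects logistic regressions with random effects for children and items, which this sketch omits; all data and coefficients below are simulated.

```python
# Toy logistic regression: probability a sign is produced as a function of
# iconicity, neighborhood density, and log frequency. Simulated data; the
# cited study uses mixed-effects models, omitted here for brevity.
import random
from math import exp

def sigmoid(z):
    return 1.0 / (1.0 + exp(-z)) if z > -30 else 0.0

random.seed(3)
X, y = [], []
for _ in range(500):
    icon = random.gauss(0, 1)       # iconicity rating (standardized)
    dens = random.gauss(0, 1)       # neighborhood density (standardized)
    freq = random.gauss(0, 1)       # log lexical frequency (standardized)
    p = sigmoid(-0.5 + 0.8 * icon + 0.5 * dens + 0.6 * freq)
    X.append([1.0, icon, dens, freq])
    y.append(1 if random.random() < p else 0)    # sign produced?

# Full-batch gradient ascent on the log-likelihood.
w = [0.0] * 4
for _ in range(1000):
    grad = [0.0] * 4
    for xi, yi in zip(X, y):
        err = yi - sigmoid(sum(wj * xj for wj, xj in zip(w, xi)))
        for j in range(4):
            grad[j] += err * xi[j]
    w = [wj + 1.0 * g / len(X) for wj, g in zip(w, grad)]

print("coefficients (intercept, iconicity, density, frequency):",
      [round(v, 2) for v in w])
```

All three predictor coefficients come out positive, the pattern the study reports: iconicity, density, and frequency each independently facilitate acquisition.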

  20. Child Modifiability as a Predictor of Language Abilities in Deaf Children Who Use American Sign Language.

    Science.gov (United States)

    Mann, Wolfgang; Peña, Elizabeth D; Morgan, Gary

    2015-08-01

    This research explored the use of dynamic assessment (DA) for language-learning abilities in signing deaf children from deaf and hearing families. Thirty-seven deaf children, aged 6 to 11 years, were identified as either stronger (n = 26) or weaker (n = 11) language learners according to teacher or speech-language pathologist report. All children received 2 scripted, mediated learning experience sessions targeting vocabulary knowledge—specifically, the use of semantic categories that were carried out in American Sign Language. Participant responses to learning were measured in terms of an index of child modifiability. This index was determined separately at the end of the 2 individual sessions. It combined ratings reflecting each child's learning abilities and responses to mediation, including social-emotional behavior, cognitive arousal, and cognitive elaboration. Group results showed that modifiability ratings were significantly better for stronger language learners than for weaker language learners. The strongest predictors of language ability were cognitive arousal and cognitive elaboration. Mediator ratings of child modifiability (i.e., combined score of social-emotional factors and cognitive factors) are highly sensitive to language-learning abilities in deaf children who use sign language as their primary mode of communication. This method can be used to design targeted interventions.

  1. The gender congruency effect during bilingual spoken-word recognition

    Science.gov (United States)

    Morales, Luis; Paolieri, Daniela; Dussias, Paola E.; Valdés Kroff, Jorge R.; Gerfen, Chip; Bajo, María Teresa

    2016-01-01

    We investigate the ‘gender-congruency’ effect during a spoken-word recognition task using the visual world paradigm. Eye movements of Italian–Spanish bilinguals and Spanish monolinguals were monitored while they viewed a pair of objects on a computer screen. Participants listened to instructions in Spanish (encuentra la bufanda / ‘find the scarf’) and clicked on the object named in the instruction. Grammatical gender of the objects’ name was manipulated so that pairs of objects had the same (congruent) or different (incongruent) gender in Italian, but gender in Spanish was always congruent. Results showed that bilinguals, but not monolinguals, looked at target objects less when they were incongruent in gender, suggesting a between-language gender competition effect. In addition, bilinguals looked at target objects more when the definite article in the spoken instructions provided a valid cue to anticipate its selection (different-gender condition). The temporal dynamics of gender processing and cross-language activation in bilinguals are discussed. PMID:28018132

  2. The relationship between spoken English proficiency and participation in higher education, employment and income from two Australian censuses.

    Science.gov (United States)

    Blake, Helen L; McLeod, Sharynne; Verdon, Sarah; Fuller, Gail

    2018-04-01

    Proficiency in the language of the country of residence has implications for an individual's level of education, employability, income and social integration. This paper explores the relationship between the spoken English proficiency of residents of Australia on census day and their educational level, employment and income, to provide insight into multilingual speakers' ability to participate in Australia as an English-dominant society. Data presented are derived from two Australian censuses (2006 and 2011), each covering over 19 million people. The proportion of Australians who reported speaking a language other than English at home was 21.5% in the 2006 census and 23.2% in the 2011 census. Multilingual speakers who also spoke English very well were more likely to have post-graduate qualifications, full-time employment and high income than monolingual English-speaking Australians. However, multilingual speakers who reported speaking English not well were much less likely to have post-graduate qualifications or full-time employment than monolingual English-speaking Australians. These findings provide insight into the socioeconomic and educational profiles of multilingual speakers, which will inform the understanding of people, such as speech-language pathologists, who provide them with support. The results indicate that spoken English proficiency may affect participation in Australian society. These findings challenge the "monolingual mindset" by demonstrating that outcomes in education, employment and income for multilingual speakers who speak English very well are higher than for monolingual speakers.

  3. Native-language N400 and P600 predict dissociable language-learning abilities in adults

    Science.gov (United States)

    Qi, Zhenghan; Beach, Sara D.; Finn, Amy S.; Minas, Jennifer; Goetz, Calvin; Chan, Brian; Gabrieli, John D.E.

    2018-01-01

    Language learning aptitude during adulthood varies markedly across individuals. An individual’s native-language ability has been associated with success in learning a new language as an adult. However, little is known about how native-language processing affects learning success and what neural markers of native-language processing, if any, are related to success in learning. We therefore related variation in electrophysiology during native-language processing to success in learning a novel artificial language. Event-related potentials (ERPs) were recorded while native English speakers judged the acceptability of English sentences prior to learning an artificial language. There was a trend towards a double dissociation between native-language ERPs and their relationships to novel syntax and vocabulary learning. Individuals who exhibited a greater N400 effect when processing English semantics showed better future learning of the artificial language overall. The N400 effect was related to syntax learning via its specific relationship to vocabulary learning. In contrast, the P600 effect size when processing English syntax predicted future syntax learning but not vocabulary learning. These findings show that distinct neural signatures of native-language processing relate to dissociable abilities for learning novel semantic and syntactic information. PMID:27737775

  4. Serbian heritage language schools in the Netherlands through the eyes of the parents

    NARCIS (Netherlands)

    Palmen, Andrej

    It is difficult to find the exact number of other languages spoken besides Dutch in the Netherlands. A study showed that a total of 96 other languages are spoken by students attending Dutch primary and secondary schools. The variety of languages spoken shows the growth of linguistic diversity in the

  5. Language and Academic Abilities in Children with Selective Mutism

    Science.gov (United States)

    Nowakowski, Matilda E.; Cunningham, Charles E.; McHolm, Angela E.; Evans, Mary Ann; Edison, Shannon; St. Pierre, Jeff; Boyle, Michael H.; Schmidt, Louis A.

    2009-01-01

    We examined receptive language and academic abilities in children with selective mutism (SM; n = 30; M age = 8.8 years), anxiety disorders (n = 46; M age = 9.3 years), and community controls (n = 27; M age = 7.8 years). Receptive language and academic abilities were assessed using standardized tests completed in the laboratory. We found a…

  6. DIFFERENCES BETWEEN AMERICAN SIGN LANGUAGE (ASL) AND BRITISH SIGN LANGUAGE (BSL)

    Directory of Open Access Journals (Sweden)

    Zora JACHOVA

    2008-06-01

    In the communication of deaf people among themselves and with hearing people there are three basic aspects of interaction: gesture, finger signs and writing. The gesture is a conventionally agreed manner of communication with the help of the hands, accompanied by facial and body mimicry. Gestures and movements pre-existed speech; their purpose was to mark something, and later to emphasize the spoken expression. Stokoe was the first linguist to realise that signs are not unanalysable wholes. He analysed signs into smaller parts that he called "cheremes", which many linguists today call phonemes. He created three main phoneme categories: hand shape, location and movement. Sign languages, like spoken languages, have a background in the distant past. They developed in parallel with the development of spoken language and underwent many historical changes. Therefore, today they do not represent a replacement of spoken language, but are languages themselves in the real sense of the word. Although the structure of the English language used in the USA and in Great Britain is the same, the sign languages of the two countries, ASL and BSL, are different.

  7. Rapid modulation of spoken word recognition by visual primes.

    Science.gov (United States)

    Okano, Kana; Grainger, Jonathan; Holcomb, Phillip J

    2016-02-01

    In a masked cross-modal priming experiment with ERP recordings, spoken Japanese words were primed with words written in one of the two syllabary scripts of Japanese. An early priming effect, peaking at around 200ms after onset of the spoken word target, was seen in left lateral electrode sites for Katakana primes, and later effects were seen for both Hiragana and Katakana primes on the N400 ERP component. The early effect is thought to reflect the efficiency with which words in Katakana script make contact with sublexical phonological representations involved in spoken language comprehension, due to the particular way this script is used by Japanese readers. This demonstrates fast-acting influences of visual primes on the processing of auditory target words, and suggests that briefly presented visual primes can influence sublexical processing of auditory target words. The later N400 priming effects, on the other hand, most likely reflect cross-modal influences on activity at the level of whole-word phonology and semantics.

  9. Lexicon optimization for Dutch speech recognition in spoken document retrieval

    NARCIS (Netherlands)

    Ordelman, Roeland J.F.; van Hessen, Adrianus J.; de Jong, Franciska M.G.; Dalsgaard, P.; Lindberg, B.; Benner, H.

    2001-01-01

    In this paper, ongoing work concerning the language modelling and lexicon optimization of a Dutch speech recognition system for Spoken Document Retrieval is described: the collection and normalization of a training data set and the optimization of our recognition lexicon. Effects on lexical coverage

  10. Automated Metadata Extraction for Semantic Access to Spoken Word Archives

    NARCIS (Netherlands)

    de Jong, Franciska M.G.; Heeren, W.F.L.; van Hessen, Adrianus J.; Ordelman, Roeland J.F.; Nijholt, Antinus; Ruiz Miyares, L.; Alvarez Silva, M.R.

    2011-01-01

    Archival practice is shifting from the analogue to the digital world. A specific subset of heritage collections that impose interesting challenges for the field of language and speech technology are spoken word archives. Given the enormous backlog at audiovisual archives of unannotated materials and

  11. L1 and L2 Spoken Word Processing: Evidence from Divided Attention Paradigm

    Science.gov (United States)

    Shafiee Nahrkhalaji, Saeedeh; Lotfi, Ahmad Reza; Koosha, Mansour

    2016-01-01

    The present study aims to reveal some facts concerning first language (L1) and second language (L2) spoken-word processing in unbalanced proficient bilinguals using behavioral measures. The intention here is to examine the effects of auditory repetition word priming and semantic priming in first and second languages of…

  12. Inuit Sign Language: a contribution to sign language typology

    NARCIS (Netherlands)

    Schuit, J.; Baker, A.; Pfau, R.

    2011-01-01

    Sign language typology is a fairly new research field and typological classifications have yet to be established. For spoken languages, these classifications are generally based on typological parameters; it would thus be desirable to establish these for sign languages. In this paper, different

  14. Spoken language identification based on the enhanced self-adjusting extreme learning machine approach

    Science.gov (United States)

    Albadr, Musatafa Abbas Abbood; Tiun, Sabrina; Al-Dhief, Fahad Taha; Sammour, Mahmoud A. M.

    2018-01-01

    Spoken Language Identification (LID) is the process of determining and classifying natural language from a given content and dataset. Typically, data must be processed to extract useful features to perform LID. The extracting features for LID, based on literature, is a mature process where the standard features for LID have already been developed using Mel-Frequency Cepstral Coefficients (MFCC), Shifted Delta Cepstral (SDC), the Gaussian Mixture Model (GMM) and ending with the i-vector based framework. However, the process of learning based on extract features remains to be improved (i.e. optimised) to capture all embedded knowledge on the extracted features. The Extreme Learning Machine (ELM) is an effective learning model used to perform classification and regression analysis and is extremely useful to train a single hidden layer neural network. Nevertheless, the learning process of this model is not entirely effective (i.e. optimised) due to the random selection of weights within the input hidden layer. In this study, the ELM is selected as a learning model for LID based on standard feature extraction. One of the optimisation approaches of ELM, the Self-Adjusting Extreme Learning Machine (SA-ELM) is selected as the benchmark and improved by altering the selection phase of the optimisation process. The selection process is performed incorporating both the Split-Ratio and K-Tournament methods, the improved SA-ELM is named Enhanced Self-Adjusting Extreme Learning Machine (ESA-ELM). The results are generated based on LID with the datasets created from eight different languages. The results of the study showed excellent superiority relating to the performance of the Enhanced Self-Adjusting Extreme Learning Machine LID (ESA-ELM LID) compared with the SA-ELM LID, with ESA-ELM LID achieving an accuracy of 96.25%, as compared to the accuracy of SA-ELM LID of only 95.00%. PMID:29672546
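
The core Extreme Learning Machine idea described in this abstract (hidden-layer weights drawn at random and never trained, with the output weights fitted in closed form by least squares) can be sketched as follows. This is a minimal illustration on synthetic two-class data, not the authors' SA-ELM/ESA-ELM optimisation; the acoustic feature extraction (MFCC/SDC/i-vectors) is assumed to have happened upstream, so the Gaussian clusters here are a hypothetical stand-in for feature vectors.

```python
import numpy as np

def elm_train(X, Y, n_hidden=64, seed=0):
    """Fit an Extreme Learning Machine: random hidden weights, least-squares readout."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))   # input->hidden weights (never trained)
    b = rng.normal(size=n_hidden)                 # hidden biases (never trained)
    H = np.tanh(X @ W + b)                        # hidden-layer activations
    beta, *_ = np.linalg.lstsq(H, Y, rcond=None)  # closed-form output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.argmax(np.tanh(X @ W + b) @ beta, axis=1)

# Synthetic two-"language" data: two well-separated Gaussian clusters standing in
# for pre-extracted acoustic feature vectors.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-2.0, 1.0, size=(100, 8)),
               rng.normal(+2.0, 1.0, size=(100, 8))])
y = np.repeat([0, 1], 100)
Y = np.eye(2)[y]                                  # one-hot targets

W, b, beta = elm_train(X, Y)
acc = (elm_predict(X, W, b, beta) == y).mean()
```

Because only `beta` is fitted, training reduces to a single linear solve; the optimisation work in SA-ELM/ESA-ELM goes into choosing better random hidden-layer weights rather than training them by gradient descent.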

  16. Approaches for Language Identification in Mismatched Environments

    Science.gov (United States)

    2016-09-08

    Keywords: domain adaptation, unsupervised learning, deep neural networks, bottleneck features. Spoken language identification (LID) is...the process of identifying the language in a spoken speech utterance. In recent years, great improvements in LID system performance have been seen...be the case in practice. Lastly, we conduct an out-of-set experiment where VoA data from 9 other languages (Amharic, Creole, Croatian, English

  17. Emergent Literacy Skills in Preschool Children With Hearing Loss Who Use Spoken Language: Initial Findings From the Early Language and Literacy Acquisition (ELLA) Study.

    Science.gov (United States)

    Werfel, Krystal L

    2017-10-05

    The purpose of this study was to compare change in emergent literacy skills of preschool children with and without hearing loss over a 6-month period. Participants included 19 children with hearing loss and 14 children with normal hearing. Children with hearing loss used amplification and spoken language. Participants completed measures of oral language, phonological processing, and print knowledge twice at a 6-month interval. A series of repeated-measures analyses of variance were used to compare change across groups. Main effects of time were observed for all variables except phonological recoding. Main effects of group were observed for vocabulary, morphosyntax, phonological memory, and concepts of print. Interaction effects were observed for phonological awareness and concepts of print. Children with hearing loss performed more poorly than children with normal hearing on measures of oral language, phonological memory, and conceptual print knowledge. For phonological awareness and concepts of print, children with hearing loss demonstrated less positive change than children with normal hearing. Although children with hearing loss generally demonstrated positive growth in emergent literacy skills, their initial performance was lower than that of children with normal hearing, and their rates of change were not sufficient to catch up to their peers over time.

  18. Visual Sonority Modulates Infants' Attraction to Sign Language

    Science.gov (United States)

    Stone, Adam; Petitto, Laura-Ann; Bosworth, Rain

    2018-01-01

    The infant brain may be predisposed to identify perceptually salient cues that are common to both signed and spoken languages. Recent theory based on spoken languages has advanced sonority as one of these potential language acquisition cues. Using a preferential looking paradigm with an infrared eye tracker, we explored visual attention of hearing…

  19. The strengths and weaknesses in verbal short-term memory and visual working memory in children with hearing impairment and additional language learning difficulties.

    Science.gov (United States)

    Willis, Suzi; Goldbart, Juliet; Stansfield, Jois

    2014-07-01

    To compare the verbal short-term memory and visual working memory abilities of six children with congenital hearing impairment identified as having significant language-learning difficulties against normative data from typically hearing children, using standardized memory assessments. Six children with hearing loss aged 8-15 years were assessed on measures of verbal short-term memory (non-word and word recall) and visual working memory annually over a two-year period. All children had cognitive abilities within normal limits and used spoken language as their primary mode of communication. The language assessment scores at the beginning of the study revealed that all six participants exhibited delays of two years or more on standardized assessments of receptive and expressive vocabulary and spoken language. The children with hearing impairment scored significantly higher on the non-word recall task than on the "real" word recall task. They also exhibited significantly higher scores on visual working memory than the age-matched sample from the standardized memory assessment. Each of the six participants displayed the same pattern of strengths and weaknesses in verbal short-term memory and visual working memory despite their very different chronological ages. The children's poor ability to recall single-syllable words relative to non-words is a clinical indicator of their difficulties in verbal short-term memory. However, the children with hearing impairment do not display generalized processing difficulties, and indeed demonstrate strengths in visual working memory. Poor ability to recall words, in combination with difficulties with early word learning, may be an indicator of children with hearing impairment who will struggle to develop spoken language equal to that of their normally hearing peers. This early identification has the potential to allow for target-specific intervention that may remediate their difficulties. Copyright © 2014.

  20. Oral and written language in late adulthood: findings from the Nun Study.

    Science.gov (United States)

    Mitzner, Tracy L; Kemper, Susan

    2003-01-01

    As a part of the Nun Study, a longitudinal investigation of aging and Alzheimer's disease, oral and written autobiographies from 118 older women were analyzed to examine the relationship between spoken and written language. The written language samples were more complex than the oral samples, both conceptually and grammatically. The relationship between the linguistic measures and participant characteristics was also examined. The results suggest that the grammatical and conceptual characteristics of oral and written language are affected by participant differences in education, cognitive status, and physical function and that written language samples have greater power than oral language samples to differentiate between high- and low-ability older adults.

  1. What You Don't Know Can Hurt You: The Risk of Language Deprivation by Impairing Sign Language Development in Deaf Children.

    Science.gov (United States)

    Hall, Wyatte C

    2017-05-01

    A long-standing belief is that sign language interferes with spoken language development in deaf children, despite a chronic lack of evidence supporting this belief. This deserves discussion, as poor life outcomes continue to be seen in the deaf population. This commentary synthesizes research outcomes with signing and non-signing children and highlights fully accessible language as a protective factor for healthy development. Brain changes associated with language deprivation may be misrepresented as sign language interfering with the spoken language outcomes of cochlear implants. This may lead professionals and organizations to advocate preventing sign language exposure before implantation and to spread misinformation. The existence of a time-sensitive language acquisition window means a strong possibility of permanent brain changes when spoken language is not fully accessible to the deaf child and sign language exposure is delayed, as is often standard practice. There is no empirical evidence for the harm of sign language exposure, but there is some evidence for its benefits, and there is growing evidence that lack of language access has negative implications. These include cognitive delays, mental health difficulties, lower quality of life, higher trauma, and limited health literacy. Claims that cochlear implant- and spoken language-only approaches are more effective than sign language-inclusive approaches are not empirically supported. Cochlear implants are an unreliable standalone first-language intervention for deaf children. Priorities for deaf child development should focus on the healthy growth of all developmental domains through a fully accessible first-language foundation such as sign language, rather than on auditory deprivation and speech skills.

  2. Spoken sentence production in college students with dyslexia: working memory and vocabulary effects.

    Science.gov (United States)

    Wiseheart, Rebecca; Altmann, Lori J P

    2018-03-01

    Individuals with dyslexia demonstrate syntactic difficulties on tasks of language comprehension, yet little is known about spoken language production in this population. To investigate whether spoken sentence production in college students with dyslexia is less proficient than in typical readers, and to determine whether group differences can be attributable to cognitive differences between groups. Fifty-one college students with and without dyslexia were asked to produce sentences from stimuli comprising a verb and two nouns. Verb types varied in argument structure and morphological form and nouns varied in animacy. Outcome measures were precision (measured by fluency, grammaticality and completeness) and efficiency (measured by response times). Vocabulary and working memory tests were also administered and used as predictors of sentence production performance. Relative to non-dyslexic peers, students with dyslexia responded significantly slower and produced sentences that were significantly less precise in terms of fluency, grammaticality and completeness. The primary predictors of precision and efficiency were working memory, which differed between groups, and vocabulary, which did not. College students with dyslexia were significantly less facile and flexible on this spoken sentence-production task than typical readers, which is consistent with previous studies of school-age children with dyslexia. Group differences in performance were traced primarily to limited working memory, and were somewhat mitigated by strong vocabulary. © 2017 Royal College of Speech and Language Therapists.

  3. A Comparison between Written and Spoken Narratives in Aphasia

    Science.gov (United States)

    Behrns, Ingrid; Wengelin, Asa; Broberg, Malin; Hartelius, Lena

    2009-01-01

    The aim of the present study was to explore how a personal narrative told by a group of eight persons with aphasia differed between written and spoken language, and to compare this with findings from 10 participants in a reference group. The stories were analysed through holistic assessments made by 60 participants without experience of aphasia…

  4. TEACHING TURKISH AS SPOKEN IN TURKEY TO TURKIC SPEAKERS - TÜRK DİLLİLERE TÜRKİYE TÜRKÇESİ ÖĞRETİMİ NASIL OLMALIDIR?

    Directory of Open Access Journals (Sweden)

    Ali TAŞTEKİN

    2015-12-01

    Attributing different titles to the activity of teaching Turkish to non-native speakers is related to the perspective of those who conduct this activity. If Turkish language teaching centres are sub-units of the Schools of Foreign Languages and Departments of Foreign Languages of our universities, or if teachers have a foreign-language background, then the title "Teaching Turkish as a Foreign Language" is adopted and claimed to be universal. In determining success at teaching and learning, the psychological perception of the educational activity and the associational power of the words used are far more important factors than the teacher, students, educational environment and educational tools. For this reason, avoiding the negative connotations of the adjective "foreign" in the activity of teaching foreigners Turkish as spoken in Turkey would be beneficial. For the activity of Teaching Turkish as Spoken in Turkey to Turkic Speakers to be successful, it is crucial to dwell on the formal and contextual quality of the books written for this purpose. Almost none of the course books and supplementary books in the field of teaching Turkish to non-native speakers has taken Teaching Turkish as Spoken in Turkey to Turkic Speakers into consideration. The books written for the purpose of teaching Turkish to non-native speakers should be examined thoroughly in terms of content and method and should be organized in accordance with the purpose and level of readiness of the target audience. Activities of Teaching Turkish as Spoken in Turkey to Turkic Speakers are still conducted at public and private primary and secondary schools and colleges, as well as in private courses, by self-educated teachers who are trained within a master-apprentice relationship. Turkic populations who had long been parted by necessity have found the opportunity to reunite and turn towards common objectives after the dissolution of The Union of Soviet Socialist Republics. This recent

  5. The road to language learning is iconic: evidence from British Sign Language.

    Science.gov (United States)

    Thompson, Robin L; Vinson, David P; Woll, Bencie; Vigliocco, Gabriella

    2012-12-01

    An arbitrary link between linguistic form and meaning is generally considered a universal feature of language. However, iconic (i.e., nonarbitrary) mappings between properties of meaning and features of linguistic form are also widely present across languages, especially signed languages. Although recent research has shown a role for sign iconicity in language processing, research on the role of iconicity in sign-language development has been mixed. In this article, we present clear evidence that iconicity plays a role in sign-language acquisition for both the comprehension and production of signs. Signed languages were taken as a starting point because they tend to encode a higher degree of iconic form-meaning mappings in their lexicons than spoken languages do, but our findings are more broadly applicable: Specifically, we hypothesize that iconicity is fundamental to all languages (signed and spoken) and that it serves to bridge the gap between linguistic form and human experience.

  6. On the Conventionalization of Mouth Actions in Australian Sign Language.

    Science.gov (United States)

    Johnston, Trevor; van Roekel, Jane; Schembri, Adam

    2016-03-01

    This study investigates the conventionalization of mouth actions in Australian Sign Language. Signed languages were once thought of as simply manual languages because the hands produce the signs which, individually and in groups, are the symbolic units most easily equated with the words, phrases and clauses of spoken languages. However, it has long been acknowledged that non-manual activity, such as movements of the body, head and face, plays a very important role. In this context, mouth actions that occur while communicating in signed languages have posed a number of questions for linguists: are the silent mouthings of spoken language words simply borrowings from the respective majority community spoken language(s)? Are those mouth actions that are not silent mouthings of spoken words conventionalized linguistic units proper to each signed language, culturally linked semi-conventional gestural units shared by signers with members of the majority speaking community, or even gestures and expressions common to all humans? We use a corpus-based approach to gather evidence of the extent of the use of mouth actions in naturalistic Australian Sign Language, making comparisons with other signed languages where data are available, and of the form/meaning pairings that these mouth actions instantiate.

  7. MINORITY LANGUAGES IN ESTONIAN SEGREGATIVE LANGUAGE ENVIRONMENTS

    Directory of Open Access Journals (Sweden)

    Elvira Küün

    2011-01-01

    Full Text Available The goal of this project in Estonia was to determine what languages are spoken by students from the 2nd to the 5th year of basic school at their homes in Tallinn, the capital of Estonia. At the same time, this problem was also studied in other segregated regions of Estonia: Kohtla-Järve and Maardu. According to the database of the population census from the year 2000 (Estonian Statistics Executive Office's census 2000), there are representatives of 142 ethnic groups living in Estonia, speaking a total of 109 native languages. At the same time, the database does not state which languages are spoken at home. The material presented in this article belongs to the research topic “Home Language of Basic School Students in Tallinn” from the years 2007–2008, specifically financed and ordered by the Estonian Ministry of Education and Research (grant No. ETF 7065) in the framework of an international study called “Multilingual Project”. The study determined which language dominates everyday use, which factors drive the choice of language for communication, and what the preferred languages and language skills are. This study reflects the actual trends of the language situation in these cities.

  8. The benefits of sign language for deaf learners with language challenges

    Directory of Open Access Journals (Sweden)

    Van Staden, Annalene

    2009-12-01

    Full Text Available This article argues the importance of allowing deaf children to acquire sign language from an early age. It demonstrates firstly that the critical/sensitive period hypothesis for language acquisition can be applied to specific aspects of spoken language as well as sign languages (i.e. phonology, grammatical processing and syntax). This makes early diagnosis and early intervention of crucial importance. Moreover, research findings presented in this article demonstrate the advantage that sign language offers in the early years of a deaf child’s life by comparing the language development milestones of deaf learners exposed to sign language from birth to those of late-signers, orally trained deaf learners and hearing learners exposed to spoken language. The controversy over the best medium of instruction for deaf learners is briefly discussed, with emphasis placed on the possible value of bilingual-bicultural programmes to facilitate the development of deaf learners’ literacy skills. Finally, this paper concludes with a discussion of the implications/recommendations of sign language teaching and Deaf education in South Africa.

  9. Adaptation and Assessment of a Public Speaking Rating Scale

    Science.gov (United States)

    Iberri-Shea, Gina

    2017-01-01

    Prominent spoken language assessments such as the Oral Proficiency Interview and the Test of Spoken English have been primarily concerned with speaking ability as it relates to conversation. This paper looks at an additional aspect of spoken language ability, namely public speaking. This study used an adapted form of a public speaking rating scale…

  10. A Prerequisite to L1 Homophone Effects in L2 Spoken-Word Recognition

    Science.gov (United States)

    Nakai, Satsuki; Lindsay, Shane; Ota, Mitsuhiko

    2015-01-01

    When both members of a phonemic contrast in L2 (second language) are perceptually mapped to a single phoneme in one's L1 (first language), L2 words containing a member of that contrast can spuriously activate L2 words in spoken-word recognition. For example, upon hearing cattle, Dutch speakers of English are reported to experience activation…

  11. Phonological Sketch of the Sida Language of Luang Namtha, Laos

    Directory of Open Access Journals (Sweden)

    Nathan Badenoch

    2017-07-01

    Full Text Available This paper describes the phonology of the Sida language, a Tibeto-Burman language spoken by approximately 3,900 people in Laos and Vietnam. The data presented here represent the variety spoken in Luang Namtha province of northwestern Laos, and the paper focuses on a synchronic description of the fundamentals of the Sida phonological system. Several issues of diachronic interest are also discussed in the context of the diversity of the Southern Loloish group of languages, many of which are spoken in Laos and have not yet been described in detail.

  12. Spoken word recognition without a TRACE

    Science.gov (United States)

    Hannagan, Thomas; Magnuson, James S.; Grainger, Jonathan

    2013-01-01

    How do we map the rapid input of spoken language onto phonological and lexical representations over time? Attempts at psychologically-tractable computational models of spoken word recognition tend either to ignore time or to transform the temporal input into a spatial representation. TRACE, a connectionist model with broad and deep coverage of speech perception and spoken word recognition phenomena, takes the latter approach, using exclusively time-specific units at every level of representation. TRACE reduplicates featural, phonemic, and lexical inputs at every time step in a large memory trace, with rich interconnections (excitatory forward and backward connections between levels and inhibitory links within levels). As the length of the memory trace is increased, or as the phoneme and lexical inventory of the model is increased to a realistic size, this reduplication of time- (temporal position) specific units leads to a dramatic proliferation of units and connections, begging the question of whether a more efficient approach is possible. Our starting point is the observation that models of visual object recognition—including visual word recognition—have grappled with the problem of spatial invariance, and arrived at solutions other than a fully-reduplicative strategy like that of TRACE. This inspires a new model of spoken word recognition that combines time-specific phoneme representations similar to those in TRACE with higher-level representations based on string kernels: temporally independent (time invariant) diphone and lexical units. This reduces the number of necessary units and connections by several orders of magnitude relative to TRACE. Critically, we compare the new model to TRACE on a set of key phenomena, demonstrating that the new model inherits much of the behavior of TRACE and that the drastic computational savings do not come at the cost of explanatory power. PMID:24058349
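The time-invariant diphone representation this abstract describes can be sketched in a few lines. This is a minimal illustration of the string-kernel idea only: reducing a word to the set of its ordered phoneme pairs ("open diphones"). The phoneme symbols and the Jaccard similarity used below are illustrative choices, not the model's actual lexical activation computation.

```python
from itertools import combinations

def diphone_features(phonemes):
    """Ordered, position-independent phoneme pairs ("open diphones").

    combinations() preserves input order, so ("k", "ae", "t") yields
    (k,ae), (k,t), (ae,t) -- relative order is kept, absolute time is not.
    Single-phoneme inputs (empty feature sets) are not handled here.
    """
    return set(combinations(phonemes, 2))

def similarity(word_a, word_b):
    """Jaccard overlap of two diphone sets (an illustrative stand-in
    for the model's actual similarity/activation computation)."""
    fa, fb = diphone_features(word_a), diphone_features(word_b)
    return len(fa & fb) / len(fa | fb)

# "cat" vs. "cab" share only the (k, ae) diphone: 1 of 5 features.
cat_cab = similarity(("k", "ae", "t"), ("k", "ae", "b"))
```

Because the features carry relative order but no absolute position, a word maps to the same representation wherever it occurs in time, which is the property that removes the need for TRACE-style reduplicated, position-specific units.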

  13. Long-term memory traces for familiar spoken words in tonal languages as revealed by the Mismatch Negativity

    Directory of Open Access Journals (Sweden)

    Naiphinich Kotchabhakdi

    2004-11-01

    Full Text Available Mismatch negativity (MMN), a primary response to an acoustic change and an index of sensory memory, was used to investigate the processing of the discrimination between familiar and unfamiliar Consonant-Vowel (CV) speech contrasts. The MMN was elicited by rare familiar words presented among repetitive unfamiliar words. Phonetic and phonological contrasts were identical in all conditions. MMN elicited by the familiar word deviant was larger than that elicited by the unfamiliar word deviant. The presence of syllable contrast did significantly alter the word-elicited MMN in amplitude and scalp voltage field distribution. Thus, our results indicate the existence of word-related MMN enhancement largely independent of the word status of the standard stimulus. This enhancement may reflect the presence of a long-term memory trace for familiar spoken words in tonal languages.

  14. How Does the Linguistic Distance Between Spoken and Standard Language in Arabic Affect Recall and Recognition Performances During Verbal Memory Examination.

    Science.gov (United States)

    Taha, Haitham

    2017-06-01

    The current research examined how Arabic diglossia affects verbal learning memory. Thirty native Arab college students were tested using an auditory verbal memory test, adapted from the Rey Auditory Verbal Learning Test and developed in three versions: a pure spoken language version (SL), a pure standard language version (SA), and a phonologically similar version (PS). The results showed that for immediate free recall, performance was better in the SL and PS conditions than in the SA condition. However, for delayed recall and recognition, the results did not reveal any significant consistent effect of diglossia. Accordingly, it was suggested that diglossia has a significant effect on storage and short-term memory functions but not on long-term memory functions. The results are discussed in light of different approaches in the field of bilingual memory.

  15. Spoken Grammar for Chinese Learners

    Institute of Scientific and Technical Information of China (English)

    徐晓敏

    2013-01-01

    Currently, the concept of spoken grammar has attracted attention among Chinese teachers. However, teachers in China still have a vague idea of spoken grammar. Therefore this dissertation examines what spoken grammar is and argues that native speakers’ model of spoken grammar needs to be highlighted in classroom teaching.

  16. Moving conceptualizations of language and literacy in SLA

    DEFF Research Database (Denmark)

    Laursen, Helle Pia

    …in various technological environments, we see an increase in scholarship that highlights the mixing and chaining of spoken, written and visual modalities and how written and visual often precede or overrule spoken language. There seems to be a mismatch between current-day language practices…, in language education and in language practices. As a consequence of this, and in the light of the increasing mobility and linguistic diversity in Europe, in this colloquium we address the need for a (re)conceptualization of the relation between language and literacy. Drawing on data from different settings…

  17. Effects of Auditory and Visual Priming on the Identification of Spoken Words.

    Science.gov (United States)

    Shigeno, Sumi

    2017-04-01

    This study examined the effects of preceding contextual stimuli, either auditory or visual, on the identification of spoken target words. Fifty-one participants (29% males, 71% females; mean age = 24.5 years, SD = 8.5) were divided into three groups: no context, auditory context, and visual context. All target stimuli were spoken words masked with white noise. The relationships between the context and target stimuli were as follows: identical word, similar word, and unrelated word. Participants presented with context experienced a sequence of six context stimuli in the form of either spoken words or photographs. Auditory and visual context conditions produced similar results, but the auditory context aided word identification more than the visual context in the similar word relationship. We discuss these results in the light of top-down processing, motor theory, and the phonological system of language.

  18. Automated Scoring of L2 Spoken English with Random Forests

    Science.gov (United States)

    Kobayashi, Yuichiro; Abe, Mariko

    2016-01-01

    The purpose of the present study is to assess second language (L2) spoken English using automated scoring techniques. Automated scoring aims to classify a large set of learners' oral performance data into a small number of discrete oral proficiency levels. In automated scoring, objectively measurable features such as the frequencies of lexical and…
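The pipeline this abstract describes — extracting objectively measurable features from a spoken response and classifying it into a small number of discrete proficiency levels — can be sketched with a toy ensemble. The forest below is a deliberate simplification (bootstrapped one-split decision stumps with majority voting rather than full random-feature trees), and the two features and level labels are invented for illustration; the study's actual feature set and scoring setup are not reproduced here.

```python
import random
from collections import Counter

def fit_stump(X, y):
    """Exhaustively pick the single (feature, threshold) split whose
    side-wise majority labels best classify the training sample."""
    best, best_acc = None, -1.0
    n = len(y)
    for f in range(len(X[0])):
        for t in sorted({row[f] for row in X}):
            left = [y[i] for i in range(n) if X[i][f] <= t]
            right = [y[i] for i in range(n) if X[i][f] > t]
            if not left or not right:
                continue  # degenerate split: all points on one side
            left_lbl = Counter(left).most_common(1)[0][0]
            right_lbl = Counter(right).most_common(1)[0][0]
            acc = (sum(l == left_lbl for l in left)
                   + sum(r == right_lbl for r in right)) / n
            if acc > best_acc:
                best, best_acc = (f, t, left_lbl, right_lbl), acc
    if best is None:  # no valid split at all: fall back to majority label
        lbl = Counter(y).most_common(1)[0][0]
        best = (0, float("inf"), lbl, lbl)
    return best

def fit_forest(X, y, n_trees=25, seed=0):
    """Train each stump on a bootstrap resample of the training data."""
    rng = random.Random(seed)
    forest = []
    for _ in range(n_trees):
        idx = [rng.randrange(len(y)) for _ in y]
        forest.append(fit_stump([X[i] for i in idx], [y[i] for i in idx]))
    return forest

def predict(forest, x):
    """Majority vote over the stumps' predictions."""
    votes = [left if x[f] <= t else right for f, t, left, right in forest]
    return Counter(votes).most_common(1)[0][0]

# Invented features per response: [speech rate, lexical diversity].
X = [[1.0, 0.20], [1.2, 0.25], [0.9, 0.18], [1.1, 0.22],   # "low" level
     [3.0, 0.60], [2.8, 0.55], [3.2, 0.65], [2.9, 0.58]]   # "high" level
y = ["low"] * 4 + ["high"] * 4
forest = fit_forest(X, y)
```

A real system would, as the abstract indicates, derive many more features (lexical frequencies, fluency measures, and so on), target several proficiency levels rather than two, and use a full random forest implementation rather than stumps.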

  19. Conflict resolution abilities in children with Specific Language Impairment.

    Science.gov (United States)

    Paula, Erica Macêdo de; Befi-Lopes, Debora Maria

    2013-01-01

    To investigate the conflict resolution abilities of children with Specific Language Impairment, and to verify whether time in speech-language therapy correlates with performance on a conflict resolution task. Participants included 20 children with Specific Language Impairment (Research Group) and 40 children with normal language development (Control Group), with ages ranging from 7 years to 8 years and 11 months. To assess conflict resolution abilities, five hypothetical contexts of conflict were presented. The strategies used by the children were classified and scored at the following levels: level 0 (solutions that do not match the other levels), level 1 (physical solutions), level 2 (unilateral solutions), level 3 (cooperative solutions), and level 4 (mutual solutions). Statistical analysis showed a group effect for the total score. There was a difference between the groups in modal development level, with a higher level of modal development observed in the Control Group. There was no correlation between the period of speech-language therapy attendance and the total score. Children with Specific Language Impairment have difficulty solving problems, as they mainly use physical and unilateral strategies. There was no correlation between time in speech-language therapy and performance on the task.

  20. The semantic associative ability in preschoolers with different age of language onset

    Directory of Open Access Journals (Sweden)

    Dina Di Giacomo

    2016-07-01

    Full Text Available The aim of this study is to verify the semantic associative abilities of children with different language onset times: early, typical, and delayed talkers. The study was conducted on a sample of 74 preschool children who performed a Perceptual Associative Task in order to evaluate their ability to link concepts via four associative strategies (function, part/whole, contiguity, and superordinate). The results showed that the children with delayed language onset performed significantly better than the children with early language production. No difference was found between the typical and delayed language groups. Our results showed that the children with early language onset presented weakness in the flexibility of elaboration of concepts. The typical and delayed language onset groups overlapped in performance on the associative abilities. The time of language onset appeared to be a predictive factor in the use of semantic associative strategies; early talkers might present a slow pattern of conceptual processing, whereas typical and late talkers may have protective factors.

  1. Emergent name-writing abilities of preschool-age children with language impairment.

    Science.gov (United States)

    Cabell, Sonia Q; Justice, Laura M; Zucker, Tricia A; McGinty, Anita S

    2009-01-01

    The 2 studies reported in this manuscript collectively address 3 aims: (a) to characterize the name-writing abilities of preschool-age children with language impairment (LI), (b) to identify those emergent literacy skills that are concurrently associated with name-writing abilities, and (c) to compare the name-writing abilities of children with LI to those of their typical language (TL) peers. Fifty-nine preschool-age children with LI were administered a battery of emergent literacy and language assessments, including a task in which the children were asked to write their first names. A subset of these children (n=23) was then compared to a TL-matched sample to characterize performance differences. Results showed that the name-writing abilities of preschoolers with LI were associated with skills in alphabet knowledge and print concepts. Hierarchical multiple regression analysis indicated that only alphabet knowledge uniquely contributed to the variance in concurrent name-writing abilities. In the matched comparison, the TL group demonstrated significantly more advanced name-writing representations than the LI group. Children with LI lag significantly behind their TL peers in name-writing abilities. Speech-language pathologists are encouraged to address the print-related skills of children with LI within their clinical interventions.

  2. Four and twenty blackbirds : How transcoding ability mediates the relationship between visuospatial working memory and math in a language with inversion

    NARCIS (Netherlands)

    van der Ven, S.H.G.; Klaiber, J.D.; van der Maas, H.L.J.

    2017-01-01

    Writing down spoken number words (transcoding) is an ability that is predictive of math performance and related to working memory ability. We analysed these relationships in a large sample of over 25,000 children, from kindergarten to the end of primary school, who solved transcoding items with a

  3. Monitoring the Performance of Human and Automated Scores for Spoken Responses

    Science.gov (United States)

    Wang, Zhen; Zechner, Klaus; Sun, Yu

    2018-01-01

    As automated scoring systems for spoken responses are increasingly used in language assessments, testing organizations need to analyze their performance, as compared to human raters, across several dimensions, for example, on individual items or based on subgroups of test takers. In addition, there is a need in testing organizations to establish…

  4. Bimodal Bilingual Language Development of Hearing Children of Deaf Parents

    Science.gov (United States)

    Hofmann, Kristin; Chilla, Solveig

    2015-01-01

    Adopting a bimodal bilingual language acquisition model, this qualitative case study is the first in Germany to investigate the spoken and sign language development of hearing children of deaf adults (codas). The spoken language competence of six codas within the age range of 3;10 to 6;4 is assessed by a series of standardised tests (SETK 3-5,…

  5. Directionality effects in simultaneous language interpreting: the case of sign language interpreters in The Netherlands.

    Science.gov (United States)

    Van Dijk, Rick; Boers, Eveline; Christoffels, Ingrid; Hermans, Daan

    2011-01-01

    The quality of interpretations produced by sign language interpreters was investigated. Twenty-five experienced interpreters were instructed to interpret narratives from (a) spoken Dutch to Sign Language of The Netherlands (SLN), (b) spoken Dutch to Sign Supported Dutch (SSD), and (c) SLN to spoken Dutch. The quality of the interpreted narratives was assessed by 5 certified sign language interpreters who did not participate in the study. Two measures were used to assess interpreting quality: the propositional accuracy of the interpreters' interpretations and a subjective quality measure. The results showed that the interpreted narratives in the SLN-to-Dutch interpreting direction were of lower quality (on both measures) than the interpreted narratives in the Dutch-to-SLN and Dutch-to-SSD directions. Furthermore, interpreters who had begun acquiring SLN when they entered the interpreter training program performed as well in all 3 interpreting directions as interpreters who had acquired SLN from birth.

  6. Teaching and Learning Sign Language as a “Foreign” Language ...

    African Journals Online (AJOL)

    In recent years, there has been a growing debate in the United States, Europe, and Australia about the nature of the Deaf community as a cultural community, and the recognition of signed languages as “real” or “legitimate” languages comparable in all meaningful ways to spoken languages. An important element of this ...

  7. Implications of Hegel's Theories of Language on Second Language Teaching

    Science.gov (United States)

    Wu, Manfred

    2016-01-01

    This article explores the implications of Hegel's theories of language on second language (L2) teaching. Three among the various concepts in Hegel's theories of language are selected. They are the crucial role of intersubjectivity; the primacy of the spoken over the written form; and the importance of the training of form or grammar. Applying…

  8. Webster's word power better English grammar improve your written and spoken English

    CERN Document Server

    Kirkpatrick, Betty

    2014-01-01

    With questions and answer sections throughout, this book helps you to improve your written and spoken English through understanding the structure of the English language. This is a thorough and useful book with all parts of speech and grammar explained. Used by ELT self-study students.

  9. Stuttering and Language Ability in Children: Questioning the Connection

    Science.gov (United States)

    Nippold, Marilyn A.

    2012-01-01

    Purpose: This article explains why it is reasonable to question the view that stuttering and language ability in children are linked--the so-called "stuttering-language connection." Method: Studies that focused on syntactic, morphologic, and lexical development in children who stutter (CWS) are examined for evidence to support the following…

  10. Language and Disadvantage: A Comparison of the Language Abilities of Adolescents from Two Different Socioeconomic Areas

    Science.gov (United States)

    Spencer, Sarah; Clegg, Judy; Stackhouse, Joy

    2012-01-01

    Background: It is recognized that children from areas associated with socioeconomic disadvantage are at an increased risk of delayed language development. However, so far research has focused mainly on young children and there has been little investigation into language development in adolescence. Aims: To investigate the language abilities of…

  11. Syllable frequency and word frequency effects in spoken and written word production in a non-alphabetic script

    Directory of Open Access Journals (Sweden)

    Qingfang eZhang

    2014-02-01

    Full Text Available The effects of word frequency and syllable frequency are well-established phenomena in domains such as spoken production in alphabetic languages. Chinese, as a non-alphabetic language, presents unique lexical and phonological properties in speech production. For example, the proximate unit of phonological encoding is the syllable in Chinese but the segment in Dutch, French or English. The present study investigated the effects of word frequency and syllable frequency, and their interaction, in Chinese written and spoken production. Significant facilitatory word frequency and syllable frequency effects were observed in spoken as well as in written production. The syllable frequency effect in writing indicated that phonological properties (i.e., syllable frequency) constrain orthographic output via a lexical route, at least in Chinese written production. However, the syllable frequency effect over repetitions diverged between the two modalities: it was significant in the first two repetitions in spoken production, but only in the second repetition in written production. Given the fragility of the syllable frequency effect in writing, we suggest that the phonological influence in handwritten production is not mandatory and universal, and that it is modulated by experimental manipulations. This provides evidence for the orthographic autonomy hypothesis, rather than the phonological mediation hypothesis. The absence of an interaction between word frequency and syllable frequency showed that the syllable frequency effect is independent of the word frequency effect in both spoken and written output modalities. The implications of these results for written production models are discussed.

  12. INDIVIDUAL AND PSYCHOLOGICAL PECULIARITIES OF TRANSLATING AS A LANGUAGE ABILITIES COMPONENT

    Directory of Open Access Journals (Sweden)

    Natalia Ya Bolshunova

    2017-12-01

    Full Text Available The article addresses the differential-psychological aspect of translating abilities as a component of language abilities. The peculiarity of translation is described as including both linguistic and paralinguistic aspects of transferring content and sense from one language into another, accompanied by linguistic and cognitive actions. A variety of individual and psychological peculiarities of translation, based on the translation dominant, were revealed. It was demonstrated that these peculiarities correspond to the communicative and linguistic types of language abilities identified by M.K. Kabardov. Valid assessment methods were used: M.N. Borisova’s test for investigating the “artistic” and “thinking” types of Higher Nervous Activity (HNA), D. Wechsler’s test of verbal and nonverbal intelligence, and a test developed by the authors of the article to assess the individual specificity of interpreters’ activity in terms of communicative and linguistic types of translating abilities. The results suggest that all the typological differences are based on the specifically human types of HNA. Subjects displaying the “thinking” type use linguistic methods when translating, whereas subjects displaying the “artistic” type try to use their own subjective life experience and extralinguistic methods when translating foreign language constructions. Extreme subjects of both types try to use the most developed components of their special abilities in order to compensate for the less developed components of the other type when accomplishing language tasks. In this way, subjects of both types can fulfil these tasks rather successfully.

  13. Sentence Repetition in Deaf Children with Specific Language Impairment in British Sign Language

    Science.gov (United States)

    Marshall, Chloë; Mason, Kathryn; Rowley, Katherine; Herman, Rosalind; Atkinson, Joanna; Woll, Bencie; Morgan, Gary

    2015-01-01

    Children with specific language impairment (SLI) perform poorly on sentence repetition tasks across different spoken languages, but until now, this methodology has not been investigated in children who have SLI in a signed language. Users of a natural sign language encode different sentence meanings through their choice of signs and by altering…

  14. Componential Skills in Second Language Development of Bilingual Children with Specific Language Impairment

    Science.gov (United States)

    Verhoeven, Ludo; Steenge, Judit; van Leeuwe, Jan; van Balkom, Hans

    2017-01-01

    In this study, we investigated which componential skills can be distinguished in the second language (L2) development of 140 bilingual children with specific language impairment in the Netherlands, aged 6-11 years, divided into 3 age groups. L2 development was assessed by means of spoken language tasks representing different language skills…

  15. Working memory affects older adults' use of context in spoken-word recognition.

    Science.gov (United States)

    Janse, Esther; Jesse, Alexandra

    2014-01-01

    Many older listeners report difficulties in understanding speech in noisy situations. Working memory and other cognitive skills may modulate older listeners' ability to use context information to alleviate the effects of noise on spoken-word recognition. In the present study, we investigated whether verbal working memory predicts older adults' ability to immediately use context information in the recognition of words embedded in sentences, presented in different listening conditions. In a phoneme-monitoring task, older adults were asked to detect as fast and as accurately as possible target phonemes in sentences spoken by a target speaker. Target speech was presented without noise, with fluctuating speech-shaped noise, or with competing speech from a single distractor speaker. The gradient measure of contextual probability (derived from a separate offline rating study) affected the speed of recognition. Contextual facilitation was modulated by older listeners' verbal working memory (measured with a backward digit span task) and age across listening conditions. Working memory and age, as well as hearing loss, were also the most consistent predictors of overall listening performance. Older listeners' immediate benefit from context in spoken-word recognition thus relates to their ability to keep and update a semantic representation of the sentence content in working memory.

  16. Language configurations of degree-related denotations in the spoken production of a group of Colombian EFL university students: A corpus-based study

    Directory of Open Access Journals (Sweden)

    Wilder Yesid Escobar

    2015-05-01

    Full Text Available Recognizing that developing the competences needed to use linguistic resources appropriately according to contextual characteristics (pragmatics) is as important as the culturally embedded linguistic knowledge itself (semantics), and that both are equally essential to forming competent speakers of English in foreign language contexts, this research relies on corpus linguistics to analyze both the scope and the limitations of the sociolinguistic knowledge and the communicative skills of English students at the university level. To that end, a linguistic corpus was assembled, compared to an existing corpus of native speakers, and analyzed in terms of the frequency, overuse, underuse, misuse, ambiguity, success, and failure of the linguistic parameters used in speech acts. The findings describe the linguistic configurations employed to modify levels and degrees of descriptions (a salient semantic theme in the EFL learners’ corpus), appealing to the sociolinguistic principles governing meaning-making and language use, which are constructed under the social conditions of the environments where the language is naturally spoken for sociocultural exchange.

  17. Visual statistical learning is related to natural language ability in adults: An ERP study.

    Science.gov (United States)

    Daltrozzo, Jerome; Emerson, Samantha N; Deocampo, Joanne; Singh, Sonia; Freggens, Marjorie; Branum-Martin, Lee; Conway, Christopher M

    2017-03-01

    Statistical learning (SL) is believed to enable language acquisition by allowing individuals to learn regularities within linguistic input. However, neural evidence supporting a direct relationship between SL and language ability is scarce. We investigated whether there are associations between event-related potential (ERP) correlates of SL and language abilities while controlling for the general level of selective attention. Seventeen adults completed tests of visual SL, receptive vocabulary, grammatical ability, and sentence completion. Response times and ERPs showed that SL is related to receptive vocabulary and grammatical ability. ERPs indicated that the relationship between SL and grammatical ability was independent of attention while the association between SL and receptive vocabulary depended on attention. The implications of these dissociative relationships in terms of underlying mechanisms of SL and language are discussed. These results further elucidate the cognitive nature of the links between SL mechanisms and language abilities. Copyright © 2017 Elsevier Inc. All rights reserved.

  18. Brain metabolite levels and language abilities in preschool children.

    Science.gov (United States)

    Lebel, Catherine; MacMaster, Frank P; Dewey, Deborah

    2016-10-01

    Language acquisition occurs rapidly during early childhood and lays the foundation for future reading success. However, little is known about brain-language relationships in young children. The goal of this study was to investigate relationships between brain metabolites and prereading language abilities in healthy preschool-aged children. Participants were 67 healthy children aged 3.0-5.4 years scanned on a 3T GE MR750w MRI scanner using short-echo proton spectroscopy with a voxel placed in the anterior cingulate gyrus (n = 56) and/or near the left angular gyrus (n = 45). Children completed the NEPSY-II Phonological Processing and Speeded Naming subtests at the same time as their MRI scan. We calculated glutamate, glutamine, creatine/phosphocreatine, choline, inositol, and NAA concentrations and correlated these with language skills. In the anterior cingulate, Phonological Processing Scaled Scores were significantly correlated with glutamate, creatine, and inositol concentrations. In the left angular gyrus, Speeded Naming Combined Scaled Scores showed trend-level correlations with choline and glutamine concentrations. For the first time, we demonstrate relationships between brain metabolites and prereading language abilities in young children. Our results show relationships of language with inositol and glutamate that may reflect glial differences underlying language function, and a relationship of language with creatine. The trend between Speeded Naming and choline is consistent with previous research in older children and adults; however, larger sample sizes are needed to confirm whether this relationship is indeed significant in young children. These findings help clarify the brain basis of language and may ultimately lead to earlier and more effective interventions for reading disabilities.
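The metabolite-behavior analysis described above boils down to a set of correlations between metabolite concentration estimates and scaled test scores. A minimal sketch using a Pearson correlation on synthetic values (the concentrations and scores below are invented, not the study's data):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 56  # size of the anterior cingulate subsample reported above

# Hypothetical glutamate concentrations and Phonological Processing scaled scores,
# generated with a built-in linear relationship plus noise.
glutamate = rng.normal(8.0, 1.0, n)
scores = 0.6 * (glutamate - 8.0) + rng.normal(10.0, 1.0, n)

r, p = stats.pearsonr(glutamate, scores)
print(f"r = {r:.2f}, p = {p:.4f}")
```

In practice such analyses also adjust for multiple comparisons across the six metabolites, which this sketch omits.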

  19. Gradient language dominance affects talker learning.

    Science.gov (United States)

    Bregman, Micah R; Creel, Sarah C

    2014-01-01

    Traditional conceptions of spoken language assume that speech recognition and talker identification are computed separately. Neuropsychological and neuroimaging studies imply some separation between the two faculties, but recent perceptual studies suggest better talker recognition in familiar languages than unfamiliar languages. A familiar-language benefit in talker recognition potentially implies strong ties between the two domains. However, little is known about the nature of this language familiarity effect. The current study investigated the relationship between speech and talker processing by assessing bilingual and monolingual listeners' ability to learn voices as a function of language familiarity and age of acquisition. Two effects emerged. First, bilinguals learned to recognize talkers in their first language (Korean) more rapidly than they learned to recognize talkers in their second language (English), while English-speaking participants showed the opposite pattern (learning English talkers faster than Korean talkers). Second, bilinguals' learning rate for talkers in their second language (English) correlated with age of English acquisition. Taken together, these results suggest that language background materially affects talker encoding, implying a tight relationship between speech and talker representations. Copyright © 2013 Elsevier B.V. All rights reserved.

  20. Identification of four class emotion from Indonesian spoken language using acoustic and lexical features

    Science.gov (United States)

    Kasyidi, Fatan; Puji Lestari, Dessi

    2018-03-01

    One of the important aspects of human-to-human communication is understanding the emotion of each party. Recently, interaction between humans and computers has continued to develop, especially affective interaction, in which emotion recognition is an important component. This paper presents our extended work on emotion recognition in spoken Indonesian to identify four main classes of emotion: Happy, Sad, Angry, and Contentment, using a combination of acoustic/prosodic features and lexical features. We constructed an emotion speech corpus from Indonesian television talk shows, where the situations are as close as possible to natural ones. After constructing the emotion speech corpus, the acoustic/prosodic and lexical features were extracted to train the emotion model. We employed machine learning algorithms such as Support Vector Machine (SVM), Naive Bayes, and Random Forest to obtain the best model. The experiments on the test data show that the best model achieves an F-measure of 0.447 using only the acoustic/prosodic features and an F-measure of 0.488 using both acoustic/prosodic and lexical features to recognize the four emotion classes with an SVM with RBF kernel.
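The pipeline reported above (feature vectors per utterance, an SVM with RBF kernel, F-measure scoring) can be sketched roughly as follows; the feature values and labels are synthetic stand-ins, not the authors' Indonesian corpus:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)

# Synthetic stand-in for acoustic/prosodic features (e.g. pitch, energy, MFCC
# statistics): one Gaussian cluster of 50 utterances per emotion class.
X = np.vstack([rng.normal(loc=i, scale=1.0, size=(50, 12)) for i in range(4)])
y = np.repeat(np.arange(4), 50)  # 0=Happy, 1=Sad, 2=Angry, 3=Contentment

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0)

# Scale features, then train an SVM with an RBF kernel, as in the paper.
scaler = StandardScaler().fit(X_train)
clf = SVC(kernel="rbf", C=1.0, gamma="scale")
clf.fit(scaler.transform(X_train), y_train)

pred = clf.predict(scaler.transform(X_test))
print("macro F-measure:", f1_score(y_test, pred, average="macro"))
```

On real speech data the feature extraction step (pitch contours, energy, spectral statistics, plus lexical features from a transcript) dominates the effort; the classifier call itself stays this small.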

  1. Auditory-cognitive training improves language performance in prelingually deafened cochlear implant recipients.

    Science.gov (United States)

    Ingvalson, Erin M; Young, Nancy M; Wong, Patrick C M

    2014-10-01

    Phonological and working memory skills have been shown to be important for the development of spoken language. Children who use a cochlear implant (CI) show performance deficits relative to normal hearing (NH) children on all three constructs: phonological skills, working memory, and spoken language. Given that phonological skills and working memory have been shown to be important for spoken language development in NH children, we hypothesized that training these foundational skills would result in improved spoken language performance in CI-using children. Nineteen prelingually deafened CI-using children aged 4 to 7 years participated. All children had been using their implants for at least one year and were matched on pre-implant hearing thresholds, hearing thresholds at study enrollment, and non-verbal IQ. Children were assessed on expressive vocabulary, listening language, spoken language, and composite language. Ten children received four weeks of training on phonological skills, including rhyme, sound blending, and sound discrimination, and on auditory working memory. The remaining nine children continued with their normal classroom activities for four weeks. Language assessments were repeated following the training/control period. Children who received combined phonological-working memory training showed significant gains on expressive and composite language scores. Children who did not receive training showed no significant improvements at post-test. On average, trained children had gain scores of 6.35 points on expressive language and 6.15 points on composite language, whereas the untrained children had test-retest gain scores of 2.89 points for expressive language and 2.56 for composite language. Our results suggest that training to improve the phonological and working memory skills of CI-using children may lead to improved language performance. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  2. Speaker Input Variability Does Not Explain Why Larger Populations Have Simpler Languages.

    Science.gov (United States)

    Atkinson, Mark; Kirby, Simon; Smith, Kenny

    2015-01-01

    A learner's linguistic input is more variable if it comes from a greater number of speakers. Higher speaker input variability has been shown to facilitate the acquisition of phonemic boundaries, since data drawn from multiple speakers provides more information about the distribution of phonemes in a speech community. It has also been proposed that speaker input variability may have a systematic influence on individual-level learning of morphology, which can in turn influence the group-level characteristics of a language. Languages spoken by larger groups of people have less complex morphology than those spoken in smaller communities. While a mechanism by which the number of speakers could have such an effect is yet to be convincingly identified, differences in speaker input variability, which is thought to be larger in larger groups, may provide an explanation. By hindering the acquisition, and hence faithful cross-generational transfer, of complex morphology, higher speaker input variability may result in structural simplification. We assess this claim in two experiments which investigate the effect of such variability on language learning, considering its influence on a learner's ability to segment a continuous speech stream and acquire a morphologically complex miniature language. We ultimately find no evidence to support the proposal that speaker input variability influences language learning and so cannot support the hypothesis that it explains how population size determines the structural properties of language.

  3. Genome-Wide Association Study of Receptive Language Ability of 12-Year-Olds

    Science.gov (United States)

    Harlaar, Nicole; Meaburn, Emma L.; Hayiou-Thomas, Marianna E.; Davis, Oliver S. P.; Docherty, Sophia; Hanscombe, Ken B.; Haworth, Claire M. A.; Price, Thomas S.; Trzaskowski, Maciej; Dale, Philip S.; Plomin, Robert

    2014-01-01

    Purpose: Researchers have previously shown that individual differences in measures of receptive language ability at age 12 are highly heritable. In the current study, the authors attempted to identify some of the genes responsible for the heritability of receptive language ability using a "genome-wide association" approach. Method: The…

  4. Cultural views, language ability, and mammography use in Chinese American women.

    Science.gov (United States)

    Liang, Wenchi; Wang, Judy; Chen, Mei-Yuh; Feng, Shibao; Yi, Bin; Mandelblatt, Jeanne S

    2009-12-01

    Mammography screening rates among Chinese American women have been reported to be low. This study examines whether and how cultural views and language ability influence mammography adherence in this mostly immigrant population. Asymptomatic Chinese American women (n = 466) aged 50 and older, recruited from the Washington, D.C. area, completed a telephone interview. Regular mammography was defined as having two mammograms at age-appropriate recommended intervals. Cultural views were assessed by 30 items, and language ability measured women's ability in reading, writing, speaking, and listening to English. After controlling for risk perception, worry, physician recommendation, family encouragement, and access barriers, women holding a more Chinese/Eastern cultural view were significantly less likely to have had regular mammograms than those holding a Western cultural view. English ability was positively associated with mammography adherence. The authors' results imply that culturally sensitive and language-appropriate educational interventions are likely to improve mammography adherence in this population.

  5. The comprehension skills of children learning English as an additional language.

    Science.gov (United States)

    Burgoyne, K; Kelly, J M; Whiteley, H E; Spooner, A

    2009-12-01

    Data from national test results suggest that children who are learning English as an additional language (EAL) experience relatively lower levels of educational attainment in comparison to their monolingual, English-speaking peers. The relative underachievement of children who are learning EAL demands that the literacy needs of this group be identified. To this end, this study aimed to explore the reading- and comprehension-related skills of a group of EAL learners. Data are reported from 92 Year 3 pupils, of whom 46 children are learning EAL. Children completed standardized measures of reading accuracy and comprehension, listening comprehension, and receptive and expressive vocabulary. Results indicate that many EAL learners experience difficulties in understanding written and spoken text. These comprehension difficulties are not related to decoding problems but are related to the significantly lower levels of vocabulary knowledge experienced by this group. Many EAL learners have significantly lower levels of English vocabulary knowledge, which has a significant impact on their ability to understand written and spoken text. Greater emphasis on language development is therefore needed in the school curriculum to attempt to address the limited language skills of children learning EAL.

  6. Let's all speak together! Exploring the masking effects of various languages on spoken word identification in multi-linguistic babble.

    Science.gov (United States)

    Gautreau, Aurore; Hoen, Michel; Meunier, Fanny

    2013-01-01

    This study aimed to characterize the linguistic interference that occurs during speech-in-speech comprehension by combining offline and online measures, which included an intelligibility task (at a -5 dB Signal-to-Noise Ratio) and 2 lexical decision tasks (at a -5 dB and 0 dB SNR) that were performed with French spoken target words. In these 3 experiments we always compared the masking effects of speech backgrounds (i.e., 4-talker babble) that were produced in the same language as the target language (i.e., French) or in unknown foreign languages (i.e., Irish and Italian) to the masking effects of corresponding non-speech backgrounds (i.e., speech-derived fluctuating noise). The fluctuating noise contained similar spectro-temporal information as babble but lacked linguistic information. At -5 dB SNR, both tasks revealed significantly divergent results between the unknown languages (i.e., Irish and Italian) with Italian and French hindering French target word identification to a similar extent, whereas Irish led to significantly better performances on these tasks. By comparing the performances obtained with speech and fluctuating noise backgrounds, we were able to evaluate the effect of each language. The intelligibility task showed a significant difference between babble and fluctuating noise for French, Irish and Italian, suggesting acoustic and linguistic effects for each language. However, the lexical decision task, which reduces the effect of post-lexical interference, appeared to be more accurate, as it only revealed a linguistic effect for French. Thus, although French and Italian had equivalent masking effects on French word identification, the nature of their interference was different. This finding suggests that the differences observed between the masking effects of Italian and Irish can be explained at an acoustic level but not at a linguistic level.

  7. L1 and L2 Spoken Word Processing: Evidence from Divided Attention Paradigm.

    Science.gov (United States)

    Shafiee Nahrkhalaji, Saeedeh; Lotfi, Ahmad Reza; Koosha, Mansour

    2016-10-01

    The present study aims to reveal some facts concerning first language (L1) and second language (L2) spoken-word processing in unbalanced proficient bilinguals using behavioral measures. The intention here is to examine the effects of auditory repetition word priming and semantic priming in the first and second languages of these bilinguals. The other goal is to explore the effects of attention manipulation on implicit retrieval of perceptual and conceptual properties of spoken L1 and L2 words. To this end, the participants performed auditory word priming and semantic priming as memory tests in their L1 and L2. In half of the trials of each experiment, they carried out the memory test while simultaneously performing a secondary task in the visual modality. The results revealed that effects of auditory word priming and semantic priming were present when participants processed L1 and L2 words in the full-attention condition. Attention manipulation reduced priming magnitude in both experiments in L2. Moreover, L2 word retrieval increased reaction times and reduced accuracy on the simultaneous secondary task in order to protect its own accuracy and speed.

  8. RECEPTION OF SPOKEN ENGLISH. MISHEARINGS IN THE LANGUAGE OF BUSINESS AND LAW

    Directory of Open Access Journals (Sweden)

    HOREA Ioana-Claudia

    2013-07-01

    Full Text Available Spoken English may sometimes pose a peculiar problem in the reception and decoding of auditory signals, which can lead to mishearings. Arising from erroneous perception, from a lack of understanding of the communication, and from an involuntary mental replacement of a certain element or structure by a more familiar one, these mistakes are most frequently encountered when listening to songs, where the melodic line can facilitate confusion through its somewhat altered intonation, producing the so-called mondegreens. Still, instances can be met in all domains of verbal communication, as shown by several examples noticed during classes of English as a foreign language (EFL) taught to non-philological students. Production and perception of language depend on a series of elements that influence the encoding and decoding of the message. These filters belong to both psychological and semantic categories, and either can interfere with the accuracy of emission and reception. Poor understanding of a notion or concept, combined with greater familiarity with a similar-sounding one, results in unconsciously picking the structure that is better known. This means 'hearing' something other than what was said, something closer to the receiver's preoccupations and baggage of knowledge than the original structure or word. Some mishearings are particularly relevant because they concern teaching English for Specific Purposes (ESP), such as those encountered during classes of Business English or English for Law. Though not very likely to occur often, given an intuitively felt inaccuracy (the terms are known by their users to need to be more specialised), such examples are still not ignorable. We therefore consider that they deserve a higher degree of attention, as they may become quite relevant in the global context of increasing work force migration and the spread of multinational companies.

  9. Infant Statistical-Learning Ability Is Related to Real-Time Language Processing

    Science.gov (United States)

    Lany, Jill; Shoaib, Amber; Thompson, Abbie; Estes, Katharine Graf

    2018-01-01

    Infants are adept at learning statistical regularities in artificial language materials, suggesting that the ability to learn statistical structure may support language development. Indeed, infants who perform better on statistical learning tasks tend to be more advanced in parental reports of infants' language skills. Work with adults suggests…

  10. Digital Language Death

    Science.gov (United States)

    Kornai, András

    2013-01-01

    Of the approximately 7,000 languages spoken today, some 2,500 are generally considered endangered. Here we argue that this consensus figure vastly underestimates the danger of digital language death, in that less than 5% of all languages can still ascend to the digital realm. We present evidence of a massive die-off caused by the digital divide. PMID:24167559

  11. Digital language death.

    Directory of Open Access Journals (Sweden)

    András Kornai

    Full Text Available Of the approximately 7,000 languages spoken today, some 2,500 are generally considered endangered. Here we argue that this consensus figure vastly underestimates the danger of digital language death, in that less than 5% of all languages can still ascend to the digital realm. We present evidence of a massive die-off caused by the digital divide.

  12. The role of planum temporale in processing accent variation in spoken language comprehension.

    NARCIS (Netherlands)

    Adank, P.M.; Noordzij, M.L.; Hagoort, P.

    2012-01-01

    A repetition–suppression functional magnetic resonance imaging paradigm was used to explore the neuroanatomical substrates of processing two types of acoustic variation (speaker and accent) during spoken sentence comprehension. Recordings were made for two speakers and two accents: Standard Dutch and a

  13. Spanish as a Second Language when L1 Is Quechua: Endangered Languages and the SLA Researcher

    Science.gov (United States)

    Kalt, Susan E.

    2012-01-01

    Spanish is one of the most widely spoken languages in the world. Quechua is the largest indigenous language family to constitute the first language (L1) of second language (L2) Spanish speakers. Despite sheer number of speakers and typologically interesting contrasts, Quechua-Spanish second language acquisition is a nearly untapped research area,…

  14. Phonological working memory and its relationship with language abilities in children with cochlear implants

    Directory of Open Access Journals (Sweden)

    Fatemeh Haresabadi

    2014-12-01

    Full Text Available Background and Aim: Many studies have demonstrated a close relationship between phonological working memory and language abilities in typically developing children and in children with developmental language disorders, such as those with cochlear implants. A review of these studies would clarify communication and learning in such children and provide more comprehensive information regarding their education and treatment. In this study, the characteristics of phonological working memory and its relationship with language abilities in children with cochlear implants were examined. Recent Findings: The reviewed studies showed that, in addition to demographic variables, phonological working memory is a factor that affects language development in children with cochlear implants. Children with cochlear implants typically have a shorter memory span. Conclusion: It is thought that the deficiency in primary auditory sensory input and language stimulation, caused by difficulties in the processing and rehearsal of auditory information in phonological working memory, is the main cause of the short memory span in such children. Conversely, phonological working memory problems may have adverse effects on the language abilities of such children. Therefore, to provide comprehensive and appropriate treatment for children with cochlear implants, the reciprocal relationship between language abilities and phonological working memory should be considered.

  15. How Spoken Language Comprehension is Achieved by Older Listeners in Difficult Listening Situations.

    Science.gov (United States)

    Schneider, Bruce A; Avivi-Reich, Meital; Daneman, Meredyth

    2016-01-01

    Comprehending spoken discourse in noisy situations is likely to be more challenging to older adults than to younger adults due to potential declines in the auditory, cognitive, or linguistic processes supporting speech comprehension. These challenges might force older listeners to reorganize the ways in which they perceive and process speech, thereby altering the balance between the contributions of bottom-up versus top-down processes to speech comprehension. The authors review studies that investigated the effect of age on listeners' ability to follow and comprehend lectures (monologues), and two-talker conversations (dialogues), and the extent to which individual differences in lexical knowledge and reading comprehension skill relate to individual differences in speech comprehension. Comprehension was evaluated after each lecture or conversation by asking listeners to answer multiple-choice questions regarding its content. Once individual differences in speech recognition for words presented in babble were compensated for, age differences in speech comprehension were minimized if not eliminated. However, younger listeners benefited more from spatial separation than did older listeners. Vocabulary knowledge predicted the comprehension scores of both younger and older listeners when listening was difficult, but not when it was easy. However, the contribution of reading comprehension to listening comprehension appeared to be independent of listening difficulty in younger adults but not in older adults. The evidence suggests (1) that most of the difficulties experienced by older adults are due to age-related auditory declines, and (2) that these declines, along with listening difficulty, modulate the degree to which selective linguistic and cognitive abilities are engaged to support listening comprehension in difficult listening situations. 
When older listeners experience speech recognition difficulties, their attentional resources are more likely to be deployed to

  16. The role of planum temporale in processing accent variation in spoken language comprehension

    NARCIS (Netherlands)

    Adank, P.M.; Noordzij, M.L.; Hagoort, P.

    2012-01-01

    A repetition–suppression functional magnetic resonance imaging paradigm was used to explore the neuroanatomical substrates of processing two types of acoustic variation—speaker and accent—during spoken sentence comprehension. Recordings were made for two speakers and two accents: Standard Dutch and

  17. Language Ability Predicts Cortical Structure and Covariance in Boys with Autism Spectrum Disorder.

    Science.gov (United States)

    Sharda, Megha; Foster, Nicholas E V; Tryfon, Ana; Doyle-Thomas, Krissy A R; Ouimet, Tia; Anagnostou, Evdokia; Evans, Alan C; Zwaigenbaum, Lonnie; Lerch, Jason P; Lewis, John D; Hyde, Krista L

    2017-03-01

    There is significant clinical heterogeneity in language and communication abilities of individuals with Autism Spectrum Disorders (ASD). However, no consistent pathology regarding the relationship of these abilities to brain structure has emerged. Recent developments in anatomical correlation-based approaches to map structural covariance networks (SCNs), combined with detailed behavioral characterization, offer an alternative for studying these relationships. In this study, such an approach was used to study the integrity of SCNs of cortical thickness and surface area associated with language and communication, in 46 high-functioning, school-age children with ASD compared with 50 matched, typically developing controls (all males) with IQ > 75. Findings showed that there was alteration of cortical structure and disruption of fronto-temporal cortical covariance in ASD compared with controls. Furthermore, in an analysis of a subset of ASD participants, alterations in both cortical structure and covariance were modulated by structural language ability of the participants, but not communicative function. These findings indicate that structural language abilities are related to altered fronto-temporal cortical covariance in ASD, much more than symptom severity or cognitive ability. They also support the importance of better characterizing ASD samples while studying brain structure and for better understanding individual differences in language and communication abilities in ASD. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  18. The Two-Systems Account of Theory of Mind: Testing the Links to Social- Perceptual and Cognitive Abilities

    Directory of Open Access Journals (Sweden)

    Bozana Meinhardt-Injac

    2018-01-01

    Full Text Available According to the two-systems account of theory of mind (ToM), understanding mental states of others involves both fast social-perceptual processes, as well as slower, reflexive cognitive operations (Frith and Frith, 2008; Apperly and Butterfill, 2009). To test the respective roles of specific abilities in either of these processes we administered 15 experimental procedures to a large sample of 343 participants, testing ability in face recognition and holistic perception, language, and reasoning. ToM was measured by a set of tasks requiring ability to track and to infer complex emotional and mental states of others from faces, eyes, spoken language, and prosody. We used structural equation modeling to test the relative strengths of a social-perceptual (face processing related) and reflexive-cognitive (language and reasoning related) path in predicting ToM ability. The two paths accounted for 58% of ToM variance, thus validating a general two-systems framework. Testing specific predictor paths revealed language and face recognition as strong and significant predictors of ToM. For reasoning, there were neither direct nor mediated effects, albeit reasoning was strongly associated with language. Holistic face perception also failed to show a direct link with ToM ability, while there was a mediated effect via face recognition. These results highlight the respective roles of face recognition and language for the social brain, and contribute closer empirical specification of the general two-systems account.

  19. The Two-Systems Account of Theory of Mind: Testing the Links to Social- Perceptual and Cognitive Abilities.

    Science.gov (United States)

    Meinhardt-Injac, Bozana; Daum, Moritz M; Meinhardt, Günter; Persike, Malte

    2018-01-01

    According to the two-systems account of theory of mind (ToM), understanding mental states of others involves both fast social-perceptual processes, as well as slower, reflexive cognitive operations (Frith and Frith, 2008; Apperly and Butterfill, 2009). To test the respective roles of specific abilities in either of these processes we administered 15 experimental procedures to a large sample of 343 participants, testing ability in face recognition and holistic perception, language, and reasoning. ToM was measured by a set of tasks requiring ability to track and to infer complex emotional and mental states of others from faces, eyes, spoken language, and prosody. We used structural equation modeling to test the relative strengths of a social-perceptual (face processing related) and reflexive-cognitive (language and reasoning related) path in predicting ToM ability. The two paths accounted for 58% of ToM variance, thus validating a general two-systems framework. Testing specific predictor paths revealed language and face recognition as strong and significant predictors of ToM. For reasoning, there were neither direct nor mediated effects, albeit reasoning was strongly associated with language. Holistic face perception also failed to show a direct link with ToM ability, while there was a mediated effect via face recognition. These results highlight the respective roles of face recognition and language for the social brain, and contribute closer empirical specification of the general two-systems account.
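The headline figure above, two predictor paths jointly accounting for 58% of ToM variance, comes from structural equation modeling. As a rough illustration of how "variance accounted for" by two paths is computed, here is an ordinary-least-squares stand-in on synthetic data (the composite scores and effect sizes are invented, and the resulting R² will not match the paper's):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 343  # sample size reported above

# Synthetic composite scores standing in for the two latent paths:
face_recognition = rng.normal(size=n)  # social-perceptual path
language = rng.normal(size=n)          # reflexive-cognitive path

# Hypothetical ToM score generated from both paths plus noise.
tom = 0.5 * face_recognition + 0.6 * language + rng.normal(scale=0.8, size=n)

# Regress ToM on both predictors (simplified stand-in for the SEM).
X = np.column_stack([np.ones(n), face_recognition, language])
beta, *_ = np.linalg.lstsq(X, tom, rcond=None)
resid = tom - X @ beta
r_squared = 1 - resid.var() / tom.var()
print(f"variance accounted for: {r_squared:.0%}")
```

A genuine SEM additionally models latent variables, measurement error, and mediated paths (e.g. holistic perception acting via face recognition), which plain regression cannot express.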

  20. Limited English proficiency, primary language at home, and disparities in children's health care: how language barriers are measured matters.

    Science.gov (United States)

    Flores, Glenn; Abreu, Milagros; Tomany-Korman, Sandra C

    2005-01-01

    Approximately 3.5 million U.S. schoolchildren are limited in English proficiency (LEP). Disparities in children's health and health care are associated with both LEP and speaking a language other than English at home, but prior research has not examined which of these two measures of language barriers is more useful in examining health care disparities. Our objectives were to compare primary language spoken at home vs. parental LEP and their associations with health status, access to care, and use of health services in children. We surveyed parents at urban community sites in Boston, asking 74 questions on children's health status, access to health care, and use of health services. Some 98% of the 1,100 participating children and families were of non-white race/ethnicity, 72% of parents were LEP, and 13 different primary languages were spoken at home. "Dose-response" relationships were observed between parental English proficiency and several child and parental sociodemographic features, including children's insurance coverage, parental educational attainment, citizenship and employment, and family income. Similar "dose-response" relationships were noted between the primary language spoken at home and many but not all of the same sociodemographic features. In multivariate analyses, having LEP parents was associated with triple the odds of a child having fair/poor health status, double the odds of the child spending at least one day in bed for illness in the past year, and significantly greater odds of children not being brought in for needed medical care for six of nine access barriers to care. None of these findings were observed in analyses of the primary language spoken at home. Individual parental LEP categories were associated with different risks of adverse health status and outcomes. Parental LEP is superior to the primary language spoken at home as a measure of the impact of language barriers on children's health and health care. Individual parental LEP

  1. Gesture-speech integration in children with specific language impairment.

    Science.gov (United States)

    Mainela-Arnold, Elina; Alibali, Martha W; Hostetter, Autumn B; Evans, Julia L

    2014-11-01

    Previous research suggests that speakers are especially likely to produce manual communicative gestures when they have relative ease in thinking about the spatial elements of what they are describing, paired with relative difficulty organizing those elements into appropriate spoken language. Children with specific language impairment (SLI) exhibit poor expressive language abilities together with within-normal-range nonverbal IQs. This study investigated whether weak spoken language abilities in children with SLI influence their reliance on gestures to express information. We hypothesized that these children would rely on communicative gestures to express information more often than their age-matched typically developing (TD) peers, and that they would sometimes express information in gestures that they do not express in the accompanying speech. Participants were 15 children with SLI (aged 5;6-10;0) and 18 age-matched TD controls. Children viewed a wordless cartoon and retold the story to a listener unfamiliar with the story. Children's gestures were identified and coded for meaning using a previously established system. Speech-gesture combinations were coded as redundant if the information conveyed in speech and gesture was the same, and non-redundant if the information conveyed in speech was different from the information conveyed in gesture. Children with SLI produced more gestures than children in the TD group; however, the likelihood that speech-gesture combinations were non-redundant did not differ significantly across the SLI and TD groups. In both groups, younger children were significantly more likely to produce non-redundant speech-gesture combinations than older children. The gesture-speech integration system functions similarly in children with SLI and TD, but children with SLI rely more on gesture to help formulate, conceptualize or express the messages they want to convey. This provides motivation for future research examining whether interventions

  2. Verbal short-term memory development and spoken language outcomes in deaf children with cochlear implants.

    Science.gov (United States)

    Harris, Michael S; Kronenberger, William G; Gao, Sujuan; Hoen, Helena M; Miyamoto, Richard T; Pisoni, David B

    2013-01-01

    Cochlear implants (CIs) help many deaf children achieve near-normal speech and language (S/L) milestones. Nevertheless, high levels of unexplained variability in S/L outcomes are limiting factors in improving the effectiveness of CIs in deaf children. The objective of this study was to longitudinally assess the role of verbal short-term memory (STM) and working memory (WM) capacity as a progress-limiting source of variability in S/L outcomes after CI in children. Longitudinal study of 66 children with CIs for prelingual severe-to-profound hearing loss. Outcome measures included performance on digit span forward (DSF), digit span backward (DSB), and four conventional S/L measures that examined spoken-word recognition (Phonetically Balanced Kindergarten word test), receptive vocabulary (Peabody Picture Vocabulary Test), sentence-recognition skills (Hearing in Noise Test), and receptive and expressive language functioning (Clinical Evaluation of Language Fundamentals Fourth Edition Core Language Score; CELF). Growth curves for DSF and DSB in the CI sample over time were comparable in slope, but consistently lagged in magnitude relative to norms for normal-hearing peers of the same age. For DSF and DSB, 50.5% and 44.0%, respectively, of the CI sample scored more than 1 SD below the normative mean for raw scores across all ages. The first (baseline) DSF score significantly predicted all endpoint scores for the four S/L measures, and DSF slope (growth) over time predicted CELF scores. DSF baseline and slope accounted for an additional 13 to 31% of variance in S/L scores after controlling for conventional predictor variables such as chronological age at time of testing, age at time of implantation, communication mode (auditory-oral communication versus total communication), and maternal education. Only DSB baseline scores predicted endpoint language scores on the Peabody Picture Vocabulary Test and CELF. DSB slopes were not significantly related to any endpoint S/L measure.

  3. Reading the Surface: Body Language and Surveillance

    Directory of Open Access Journals (Sweden)

    Mark Andrejevic

    2010-03-01

    This article explores the role played by body language in recent examples of popular culture and political news coverage as a means of highlighting the potentially deceptive character of speech and promising to bypass it altogether. It situates the promise of "visceral literacy" - the alleged ability to read inner emotions and dispositions - within emerging surveillance practices and the landscapes of risk they navigate. At the same time, it describes portrayals of body language analysis as characteristic of an emerging genre of "securitainment" that instructs viewers in monitoring techniques as it entertains and informs them. Body language ends up caught in the symbolic impasse it sought to avoid: as soon as it is portrayed as a language that can be learned and consciously "spoken" it falls prey to the potential for deceit. The article's conclusion considers the way in which emerging technologies attempt to address this impasse, bypassing the attempt to infer underlying signification altogether.

  4. Lecturing in one’s first language or in English as a lingua franca

    DEFF Research Database (Denmark)

    Preisler, Bent

    2014-01-01

    The demand for internationalization puts pressure on Danish universities to use English as the language of instruction instead of or in addition to the local language(s). The purpose of this study – though proceeding from the belief that true internationalization seeks to exploit all linguistic...... and multilingual classroom. This case study concerns Danish university teachers' spoken discourse and interaction with students in a Danish-language versus English-language classroom. The data are video recordings of classroom interaction at the University of Roskilde, Denmark. The focus is on the relationship...... between linguistic-pragmatic performance and academic authenticity for university teachers teaching courses in both English and Danish, based on recent sociolinguistic concepts such as “persona,” “stylization,” and “authenticity.” The analysis suggests that it is crucial for teachers' ability...

  5. Early use of orthographic information in spoken word recognition: Event-related potential evidence from the Korean language.

    Science.gov (United States)

    Kwon, Youan; Choi, Sungmook; Lee, Yoonhyoung

    2016-04-01

    This study examines whether orthographic information is used during prelexical processes in spoken word recognition by investigating ERPs during spoken word processing for Korean words. Differential effects due to orthographic syllable neighborhood size and sound-to-spelling consistency on P200 and N320 were evaluated by recording ERPs from 42 participants during a lexical decision task. The results indicate that P200 was smaller for words with large orthographic syllable neighborhoods than for words with small ones. In addition, a word with a large orthographic syllable neighborhood elicited a smaller N320 effect than a word with a small orthographic syllable neighborhood only when the word had inconsistent sound-to-spelling mapping. The results provide support for the assumption that orthographic information is used early during the prelexical spoken word recognition process. © 2015 Society for Psychophysiological Research.

  6. Parental mode of communication is essential for speech and language outcomes in cochlear implanted children

    DEFF Research Database (Denmark)

    Percy-Smith, Lone; Cayé-Thomasen, Per; Breinegaard, Nina

    2010-01-01

    The present study demonstrates a very strong effect of the parental communication mode on the auditory capabilities and speech/language outcome for cochlear implanted children. The children exposed to spoken language had higher odds of scoring high in all tests applied and the findings suggest...... a very clear benefit of spoken language communication with a cochlear implanted child....

  7. Reduced sensitivity to context in language comprehension: A characteristic of Autism Spectrum Disorders or of poor structural language ability?

    Science.gov (United States)

    Eberhardt, Melanie; Nadig, Aparna

    2018-01-01

    We present two experiments examining the universality and uniqueness of reduced context sensitivity in language processing in Autism Spectrum Disorders (ASD), as proposed by the Weak Central Coherence account (Happé & Frith, 2006, Journal of Autism and Developmental Disorders, 36(1), 25). That is, do all children with ASD exhibit decreased context sensitivity, and is this characteristic specific to ASD versus other neurodevelopmental conditions? Experiment 1, conducted in English, was a comparison of children with ASD with normal language and their typically-developing peers on a picture selection task where interpretation of sentential context was required to identify homonyms. Contrary to the predictions of Weak Central Coherence, the ASD-normal language group exhibited no difficulty on this task. Experiment 2, conducted in German, compared children with ASD with variable language abilities, typically-developing children, and a second control group of children with Language Impairment (LI) on a sentence completion task where a context sentence had to be considered to produce the continuation of an ambiguous sentence fragment. Both ASD-variable language and LI groups exhibited reduced context sensitivity and did not differ from each other. Finally, to directly test which factors contribute to reduced context sensitivity, we conducted a regression analysis for each experiment, entering nonverbal IQ, structural language ability, and autism diagnosis as predictors. For both experiments structural language ability emerged as the only significant predictor. These convergent findings demonstrate that reduced sensitivity to context in language processing is linked to low structural language rather than ASD diagnosis. Copyright © 2016 Elsevier Ltd. All rights reserved.

  8. Family Language Policy and School Language Choice: Pathways to Bilingualism and Multilingualism in a Canadian Context

    Science.gov (United States)

    Slavkov, Nikolay

    2017-01-01

    This article reports on a survey with 170 school-age children growing up with two or more languages in the Canadian province of Ontario where English is the majority language, French is a minority language, and numerous other minority languages may be spoken by immigrant or Indigenous residents. Within this context the study focuses on minority…

  9. Rhythm in language acquisition.

    Science.gov (United States)

    Langus, Alan; Mehler, Jacques; Nespor, Marina

    2017-10-01

    Spoken language is governed by rhythm. Linguistic rhythm is hierarchical and the rhythmic hierarchy partially mimics the prosodic as well as the morpho-syntactic hierarchy of spoken language. It can thus provide learners with cues about the structure of the language they are acquiring. We identify three universal levels of linguistic rhythm - the segmental level, the level of the metrical feet and the phonological phrase level - and discuss why primary lexical stress is not rhythmic. We survey experimental evidence on rhythm perception in young infants and native speakers of various languages to determine the properties of linguistic rhythm that are present at birth, those that mature during the first year of life and those that are shaped by the linguistic environment of language learners. We conclude with a discussion of the major gaps in current knowledge on linguistic rhythm and highlight areas of interest for future research that are most likely to yield significant insights into the nature, the perception, and the usefulness of linguistic rhythm. Copyright © 2016 Elsevier Ltd. All rights reserved.

  10. The Impact of Biculturalism on Language and Literacy Development: Teaching Chinese English Language Learners

    Science.gov (United States)

    Palmer, Barbara C.; Chen, Chia-I; Chang, Sara; Leclere, Judith T.

    2006-01-01

    According to the 2000 United States Census, the number of Americans age five and older who speak a language other than English at home grew 47 percent over the preceding decade. This group accounts for slightly less than one in five Americans (17.9%). Among the minority languages spoken in the United States, Asian-language speakers, including Chinese and other…

  11. Investigating the role of language attitudes for perception abilities using reaction times

    NARCIS (Netherlands)

    Schüppert, Anja; Gooskens, C.S.

    2011-01-01

    Danish and Swedish are mutually intelligible to a certain extent, but it has been shown that adult Danes confronted with spoken Swedish recognise more items than adult Swedes who are confronted with spoken Danish. However, this asymmetry was not confirmed for illiterate Danish and Swedish

  12. Textese and use of texting by children with typical language development and Specific Language Impairment

    NARCIS (Netherlands)

    Blom, E.; van Dijk, C.; Vasić, N.; van Witteloostuijn, M.; Avrutin, S.

    The purpose of this study was to investigate texting and textese, which is the special register used for sending brief text messages, across children with typical development (TD) and children with Specific Language Impairment (SLI). Using elicitation techniques, texting and spoken language messages

  13. Textese and use of texting by children with typical language development and Specific Language Impairment

    NARCIS (Netherlands)

    Blom, W.B.T.; van Dijk, Chantal; Vasic, Nada; van Witteloostuijn, Merel; Avrutin, S.

    2017-01-01

    The purpose of this study was to investigate texting and textese, which is the special register used for sending brief text messages, across children with typical development (TD) and children with Specific Language Impairment (SLI). Using elicitation techniques, texting and spoken language messages

  14. Does Discourse Congruence Influence Spoken Language Comprehension before Lexical Association? Evidence from Event-Related Potentials

    Science.gov (United States)

    Boudewyn, Megan A.; Gordon, Peter C.; Long, Debra; Polse, Lara; Swaab, Tamara Y.

    2011-01-01

    The goal of this study was to examine how lexical association and discourse congruence affect the time course of processing incoming words in spoken discourse. In an ERP norming study, we presented prime-target pairs in the absence of a sentence context to obtain a baseline measure of lexical priming. We observed a typical N400 effect when participants heard critical associated and unassociated target words in word pairs. In a subsequent experiment, we presented the same word pairs in spoken discourse contexts. Target words were always consistent with the local sentence context, but were congruent or not with the global discourse (e.g., “Luckily Ben had picked up some salt and pepper/basil”, preceded by a context in which Ben was preparing marinara sauce (congruent) or dealing with an icy walkway (incongruent)). ERP effects of global discourse congruence preceded those of local lexical association, suggesting an early influence of the global discourse representation on lexical processing, even in locally congruent contexts. Furthermore, effects of lexical association occurred earlier in the congruent than incongruent condition. These results differ from those that have been obtained in studies of reading, suggesting that the effects may be unique to spoken word recognition. PMID:23002319

  15. Germanic heritage languages in North America: Acquisition, attrition and change

    OpenAIRE

    Johannessen, Janne Bondi; Salmons, Joseph C.; Westergaard, Marit; Anderssen, Merete; Arnbjörnsdóttir, Birna; Allen, Brent; Pierce, Marc; Boas, Hans C.; Roesch, Karen; Brown, Joshua R.; Putnam, Michael; Åfarli, Tor A.; Newman, Zelda Kahan; Annear, Lucas; Speth, Kristin

    2015-01-01

    This book presents new empirical findings about Germanic heritage varieties spoken in North America: Dutch, German, Pennsylvania Dutch, Icelandic, Norwegian, Swedish, West Frisian and Yiddish, and varieties of English spoken both by heritage speakers and in communities after language shift. The volume focuses on three critical issues underlying the notion of ‘heritage language’: acquisition, attrition and change. The book offers theoretically-informed discussions of heritage language processe...

  16. First Steps to Endangered Language Documentation: The Kalasha Language, a Case Study

    Science.gov (United States)

    Mela-Athanasopoulou, Elizabeth

    2011-01-01

    The present paper based on extensive fieldwork conducted on Kalasha, an endangered language spoken in the three small valleys in Chitral District of Northwestern Pakistan, exposes a spontaneous dialogue-based elicitation of linguistic material used for the description and documentation of the language. After a brief display of the basic typology…

  17. Factors that enhance English-speaking speech-language pathologists' transcription of Cantonese-speaking children's consonants.

    Science.gov (United States)

    Lockart, Rebekah; McLeod, Sharynne

    2013-08-01

    To investigate speech-language pathology students' ability to identify errors and transcribe typical and atypical speech in Cantonese, a nonnative language. Thirty-three English-speaking speech-language pathology students completed 3 tasks in an experimental within-subjects design. Task 1 (baseline) involved transcribing English words. In Task 2, students transcribed 25 words spoken by a Cantonese adult. An average of 59.1% of consonants was transcribed correctly (72.9% when Cantonese-English transfer patterns were allowed). There was higher accuracy on shared English and Cantonese syllable-initial consonants /m,n,f,s,h,j,w,l/ and syllable-final consonants. In Task 3, students identified consonant errors and transcribed 100 words spoken by Cantonese-speaking children under 4 additive conditions: (1) baseline, (2) +adult model, (3) +information about Cantonese phonology, and (4) all variables (2 and 3 were counterbalanced). There was a significant improvement in the students' identification and transcription scores for conditions 2, 3, and 4, with a moderate effect size. Increased skill was not based on listeners' proficiency in speaking another language, perceived transcription skill, musicality, or confidence with multilingual clients. Speech-language pathology students, with no exposure to or specific training in Cantonese, have some skills to identify errors and transcribe Cantonese. Provision of a Cantonese adult model and information about Cantonese phonology increased students' accuracy in transcribing Cantonese speech.

  18. Notes from the Field: Lolak--Another Moribund Language of Indonesia, with Supporting Audio

    Science.gov (United States)

    Lobel, Jason William; Paputungan, Ade Tatak

    2017-01-01

    This paper consists of a short multimedia introduction to Lolak, a near-extinct Greater Central Philippine language traditionally spoken in three small communities on the island of Sulawesi in Indonesia. In addition to being one of the most underdocumented languages in the area, it is also spoken by one of the smallest native speaker populations…

  19. Physical aggression and language ability from 17 to 72 months: cross-lagged effects in a population sample.

    Directory of Open Access Journals (Sweden)

    Lisa-Christine Girard

    Does poor language ability in early childhood increase the likelihood of physical aggression, or is language ability delayed by frequent physical aggression? This study examined the longitudinal associations between physical aggression and language ability from toddlerhood to early childhood in a population sample while controlling for parenting behaviours, non-verbal intellectual functioning, and children's sex. Children enrolled in the Quebec Longitudinal Study of Child Development (QLSCD; N = 2,057) were assessed longitudinally from 17 to 72 months via parent reports and standardized assessments. The cross-lagged models revealed modest reciprocal associations between physical aggression and language performance from 17 to 41 months but not thereafter. Significant associations between physical aggression and poor language ability are minimal and limited to the period when physical aggression and language performance are both substantially increasing. During that period parenting behaviours may play an important role in supporting language ability while reducing the frequency of physical aggression. Further studies are needed that utilize multiple assessments of physical aggression, assess multiple domains of language abilities, and that examine the potential mediating role of parenting behaviours between 12 and 48 months.
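A cross-lagged panel analysis of the kind this abstract describes can be sketched in miniature: each construct at the later wave is regressed on both constructs at the earlier wave, and the off-diagonal ("cross-lagged") coefficients carry the directional question. The data below are simulated purely for illustration (hypothetical effect sizes, not the QLSCD measures):

```python
# Toy two-wave cross-lagged panel sketch with simulated data.
# Question: does early aggression predict later language, and vice versa?
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Simulated standardized scores at wave 1 (hypothetical constructs).
aggr1 = rng.normal(size=n)
lang1 = rng.normal(size=n)

# Wave-2 scores built with known autoregressive and cross-lagged effects.
aggr2 = 0.6 * aggr1 - 0.2 * lang1 + rng.normal(scale=0.7, size=n)
lang2 = -0.15 * aggr1 + 0.6 * lang1 + rng.normal(scale=0.7, size=n)

# Each wave-2 outcome is regressed on BOTH wave-1 predictors (plus intercept).
X = np.column_stack([np.ones(n), aggr1, lang1])
b_aggr2, *_ = np.linalg.lstsq(X, aggr2, rcond=None)
b_lang2, *_ = np.linalg.lstsq(X, lang2, rcond=None)

print(f"lang1 -> aggr2 (cross-lagged): {b_aggr2[2]:+.2f}")
print(f"aggr1 -> lang2 (cross-lagged): {b_lang2[1]:+.2f}")
```

With real data the two regressions would typically be fitted jointly in a structural equation model with latent constructs and correlated residuals, but the cross-lagged logic is the same.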

  20. Language cannot be reduced to biology: perspectives from neuro-developmental disorders affecting language learning.

    Science.gov (United States)

    Vasanta, D

    2005-02-01

    The study of language knowledge guided by a purely biological perspective prioritizes the study of syntax. The essential process of syntax is recursion--the ability to generate an infinite array of expressions from a limited set of elements. Researchers working within the biological perspective argue that this ability is possible only because of an innately specified genetic makeup that is specific to human beings. Such a view of language knowledge may be fully justified in discussions on biolinguistics, and in evolutionary biology. However, it is grossly inadequate in understanding language-learning problems, particularly those experienced by children with neurodevelopmental disorders such as developmental dyslexia, Williams syndrome, specific language impairment and autism spectrum disorders. Specifically, syntax-centered definitions of language knowledge completely ignore certain crucial aspects of language learning and use, namely, that language is embedded in a social context; that the role of environmental triggering as a learning mechanism is grossly underestimated; that a considerable extent of visuo-spatial information accompanies speech in day-to-day communication; that the developmental process itself lies at the heart of knowledge acquisition; and that there is a tremendous variation in the orthographic systems associated with different languages. All these (socio-cultural) factors can influence the rate and quality of spoken and written language acquisition resulting in much variation in phenotypes associated with disorders known to have a genetic component. Delineation of such phenotypic variability requires inputs from varied disciplines such as neurobiology, neuropsychology, linguistics and communication disorders. In this paper, I discuss published research that questions cognitive modularity and emphasises the role of the environment for understanding linguistic capabilities of children with neuro-developmental disorders.
The discussion pertains

  1. Higher Language Ability is Related to Angular Gyrus Activation Increase During Semantic Processing, Independent of Sentence Incongruency

    Science.gov (United States)

    Van Ettinger-Veenstra, Helene; McAllister, Anita; Lundberg, Peter; Karlsson, Thomas; Engström, Maria

    2016-01-01

    This study investigates the relation between individual language ability and neural semantic processing abilities. Our aim was to explore whether high-level language ability would correlate to decreased activation in language-specific regions or rather increased activation in supporting language regions during processing of sentences. Moreover, we were interested if observed neural activation patterns are modulated by semantic incongruency similarly to previously observed changes upon syntactic congruency modulation. We investigated 27 healthy adults with a sentence reading task—which tapped language comprehension and inference, and modulated sentence congruency—employing functional magnetic resonance imaging (fMRI). We assessed the relation between neural activation, congruency modulation, and test performance on a high-level language ability assessment with multiple regression analysis. Our results showed increased activation in the left-hemispheric angular gyrus extending to the temporal lobe related to high language ability. This effect was independent of semantic congruency, and no significant relation between language ability and incongruency modulation was observed. Furthermore, there was a significant increase of activation in the inferior frontal gyrus (IFG) bilaterally when the sentences were incongruent, indicating that processing incongruent sentences was more demanding than processing congruent sentences and required increased activation in language regions. The correlation of high-level language ability with increased rather than decreased activation in the left angular gyrus, a region specific for language processing, is opposed to what the neural efficiency hypothesis would predict. We can conclude that no evidence is found for an interaction between semantic congruency related brain activation and high-level language performance, even though the semantic incongruent condition proved more demanding and evoked more neural activation.

  2. Higher language ability is related to angular gyrus activation increase during semantic processing, independent of sentence incongruency

    Directory of Open Access Journals (Sweden)

    Helene eVan Ettinger-Veenstra

    2016-03-01

    This study investigates the relation between individual language ability and neural semantic processing abilities. Our aim was to explore whether high-level language ability would correlate to decreased activation in language-specific regions or rather increased activation in supporting language regions during processing of sentences. Moreover, we were interested if observed neural activation patterns are modulated by semantic incongruency similarly to previously observed changes upon syntactic congruency modulation. We investigated 27 healthy adults with a sentence reading task - which tapped language comprehension and inference, and modulated sentence congruency - employing functional magnetic resonance imaging. We assessed the relation between neural activation, congruency modulation, and test performance on a high-level language ability assessment with multiple regression analysis. Our results showed increased activation in the left-hemispheric angular gyrus extending to the temporal lobe related to high language ability. This effect was independent of semantic congruency, and no significant relation between language ability and incongruency modulation was observed. Furthermore, there was a significant increase of activation in the inferior frontal gyrus bilaterally when the sentences were incongruent, indicating that processing incongruent sentences was more demanding than processing congruent sentences and required increased activation in language regions. The correlation of high-level language ability with increased rather than decreased activation in the left angular gyrus, a region specific for language processing, is opposed to what the neural efficiency hypothesis would predict. We can conclude that no evidence is found for an interaction between semantic congruency related brain activation and high-level language performance, even though the semantic incongruent condition proved more demanding and evoked more neural activation.

  3. Temporal Synchrony Detection and Associations with Language in Young Children with ASD

    Directory of Open Access Journals (Sweden)

    Elena Patten

    2014-01-01

    Temporally synchronous audio-visual stimuli serve to recruit attention and enhance learning, including language learning in infants. Although few studies have examined this effect in children with autism, it appears that the ability to detect temporal synchrony between auditory and visual stimuli may be impaired, particularly given social-linguistic stimuli delivered via oral movement and spoken language pairings. However, children with autism can detect audio-visual synchrony given nonsocial stimuli (objects dropping and their corresponding sounds). We tested whether preschool children with autism could detect audio-visual synchrony given video recordings of linguistic stimuli paired with movement of related toys in the absence of faces. As a group, children with autism demonstrated the ability to detect audio-visual synchrony. Further, the amount of time they attended to the synchronous condition was positively correlated with receptive language. Findings suggest that object manipulations may enhance multisensory processing in linguistic contexts. Moreover, associations between synchrony detection and language development suggest that better processing of multisensory stimuli may guide and direct attention to communicative events thus enhancing linguistic development.

  4. Word level language identification in online multilingual communication

    NARCIS (Netherlands)

    Nguyen, Dong-Phuong; Dogruoz, A. Seza

    2013-01-01

    Multilingual speakers switch between languages in online and spoken communication. Analyses of large scale multilingual data require automatic language identification at the word level. For our experiments with multilingual online discussions, we first tag the language of individual words using
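Word-level language identification of the sort this record describes can be illustrated with a toy character-bigram tagger. This is a hedged sketch with invented seed lexicons, not the authors' system; real systems also exploit the context of neighboring words:

```python
# Toy word-level language tagger: score each word against per-language
# character-bigram profiles built from tiny illustrative seed lexicons.
from collections import Counter

def bigrams(word):
    w = f"#{word.lower()}#"          # '#' marks word boundaries
    return [w[i:i + 2] for i in range(len(w) - 1)]

def build_profile(words):
    counts = Counter(b for w in words for b in bigrams(w))
    total = sum(counts.values())
    return {b: c / total for b, c in counts.items()}

# Hypothetical seed lexicons for an English/Dutch code-switching setting.
profiles = {
    "en": build_profile(["the", "and", "you", "what", "think", "with"]),
    "nl": build_profile(["het", "en", "jij", "wat", "denk", "met"]),
}

def tag_word(word):
    # Score = summed profile mass of the word's bigrams (0 for unseen ones).
    scores = {lang: sum(p.get(b, 0.0) for b in bigrams(word))
              for lang, p in profiles.items()}
    return max(scores, key=scores.get)

sentence = "ik think wat you denk"
print([(w, tag_word(w)) for w in sentence.split()])
# → [('ik', 'nl'), ('think', 'en'), ('wat', 'nl'), ('you', 'en'), ('denk', 'nl')]
```

Character n-grams are a common baseline here because single words are too short for reliable word-level dictionaries alone, especially for closely related language pairs.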

  5. At the Interface between Language Testing and Second Language Acquisition: Language Ability and Context of Learning

    Science.gov (United States)

    Gu, Lin

    2014-01-01

    This study investigated the relationship between latent components of academic English language ability and test takers' study-abroad and classroom learning experiences through a structural equation modeling approach in the context of TOEFL iBT® testing. Data from the TOEFL iBT public dataset were used. The results showed that test takers'…

  6. Language and Literacy: The Case of India.

    Science.gov (United States)

    Sridhar, Kamal K.

    Language and literacy issues in India are reviewed in terms of background, steps taken to combat illiteracy, and some problems associated with literacy. The following facts are noted: India has 106 languages spoken by more than 685 million people; there are several minor script systems; a major language has different dialects; and a language may use…

  7. Gesture, sign, and language: The coming of age of sign language and gesture studies.

    Science.gov (United States)

    Goldin-Meadow, Susan; Brentari, Diane

    2017-01-01

    How does sign language compare with gesture, on the one hand, and spoken language on the other? Sign was once viewed as nothing more than a system of pictorial gestures without linguistic structure. More recently, researchers have argued that sign is no different from spoken language, with all of the same linguistic structures. The pendulum is currently swinging back toward the view that sign is gestural, or at least has gestural components. The goal of this review is to elucidate the relationships among sign language, gesture, and spoken language. We do so by taking a close look not only at how sign has been studied over the past 50 years, but also at how the spontaneous gestures that accompany speech have been studied. We conclude that signers gesture just as speakers do. Both produce imagistic gestures along with more categorical signs or words. Because at present it is difficult to tell where sign stops and gesture begins, we suggest that sign should not be compared with speech alone but should be compared with speech-plus-gesture. Although it might be easier (and, in some cases, preferable) to blur the distinction between sign and gesture, we argue that distinguishing between sign (or speech) and gesture is essential to predict certain types of learning and allows us to understand the conditions under which gesture takes on properties of sign, and speech takes on properties of gesture. We end by calling for new technology that may help us better calibrate the borders between sign and gesture.

  8. Micro Language Planning and Cultural Renaissance in Botswana

    Science.gov (United States)

    Alimi, Modupe M.

    2016-01-01

    Many African countries exhibit complex patterns of language use because of linguistic pluralism. The situation is often compounded by the presence of at least one foreign language that is either the official or second language. The language situation in Botswana depicts this complex pattern. Out of the 26 languages spoken in the country, including…

  9. Criteria for the segmentation of spoken input into individual utterances

    OpenAIRE

    Mast, Marion; Maier, Elisabeth; Schmitz, Birte

    1995-01-01

    This report describes how spoken language turns are segmented into utterances in the framework of the verbmobil project. The problem of segmenting turns is directly related to the task of annotating a discourse with dialogue act information: an utterance can be characterized as a stretch of dialogue that is attributed one dialogue act. Unfortunately, this rule in many cases is insufficient and many doubtful cases remain. We tried to at least reduce the number of unclear cases by providing a n...

  10. Phonotactic spoken language identification with limited training data

    CSIR Research Space (South Africa)

    Peche, M

    2007-08-01

    Full Text Available The authors investigate the addition of a new language, for which limited resources are available, to a phonotactic language identification system. Two classes of approaches are studied: in the first class, only existing phonetic recognizers...

  11. Iconic Factors and Language Word Order

    Science.gov (United States)

    Moeser, Shannon Dawn

    1975-01-01

    College students were presented with an artificial language in which spoken nonsense words were correlated with visual references. Inferences regarding vocabulary acquisition were drawn, and it was suggested that the processing of the language was mediated through a semantic memory system. (CK)

  12. Introducing Spoken Dialogue Systems into Intelligent Environments

    CERN Document Server

    Heinroth, Tobias

    2013-01-01

Introducing Spoken Dialogue Systems into Intelligent Environments outlines the formalisms of a novel knowledge-driven framework for spoken dialogue management and presents the implementation of a model-based Adaptive Spoken Dialogue Manager (ASDM) called OwlSpeak. The authors have identified three stakeholders that potentially influence the behavior of the ASDM: the user, the SDS, and a complex Intelligent Environment (IE) consisting of various devices, services, and task descriptions. The theoretical foundation of a working ontology-based spoken dialogue description framework, the prototype implementation of the ASDM, and the evaluation activities that are presented as part of this book contribute to the ongoing spoken dialogue research by establishing the fertile ground of model-based adaptive spoken dialogue management. This monograph is ideal for advanced undergraduate students, PhD students, and postdocs as well as academic and industrial researchers and developers in speech and multimodal interactive ...

  13. Language Ability and Adjustment: Western Expatriates in China

    DEFF Research Database (Denmark)

    Selmer, Jan

    2006-01-01

was directed to Western business expatriates assigned to China. Controlling for the time expatriates had spent in China, results showed that their language ability had a positive association with their sociocultural adjustment. Not surprisingly, this positive relationship was strongest for interaction adjustment and weakest for work adjustment. The straightforward implications of these clear findings are discussed in detail.

  14. Predictors of spoken language development following pediatric cochlear implantation.

    Science.gov (United States)

    Boons, Tinne; Brokx, Jan P L; Dhooge, Ingeborg; Frijns, Johan H M; Peeraer, Louis; Vermeulen, Anneke; Wouters, Jan; van Wieringen, Astrid

    2012-01-01

Although deaf children with cochlear implants (CIs) are able to develop good language skills, the large variability in outcomes remains a significant concern. The first aim of this study was to evaluate language skills in children with CIs to establish benchmarks. The second aim was to make an estimation of the optimal age at implantation to provide maximal opportunities for the child to achieve good language skills afterward. The third aim was to gain more insight into the causes of variability to set recommendations for optimizing the rehabilitation process of prelingually deaf children with CIs. Receptive and expressive language development of 288 children who received CIs by age five was analyzed in a retrospective multicenter study. Outcome measures were language quotients (LQs) on the Reynell Developmental Language Scales and Schlichting Expressive Language Test at 1, 2, and 3 years after implantation. Independent predictive variables were nine child-related, environmental, and auditory factors. A series of multiple regression analyses determined the amount of variance in expressive and receptive language outcomes attributable to each predictor when controlling for the other variables. Simple linear regressions with age at first fitting and independent samples t tests demonstrated that children implanted before the age of two performed significantly better on all tests than children who were implanted at an older age. The mean LQ was 0.78 with an SD of 0.18. A child with an LQ lower than 0.60 (= 0.78-0.18) within 3 years after implantation was labeled as a weak performer compared with other deaf children implanted before the age of two. Contralateral stimulation with a second CI or a hearing aid and the absence of additional disabilities were related to better language outcomes. The effect of environmental factors, comprising multilingualism, parental involvement, and communication mode increased over time. Three years after implantation, the total multiple…
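The "weak performer" criterion in the abstract above is simple arithmetic: a child is flagged when their language quotient (LQ) falls more than one standard deviation below the cohort mean (0.78 − 0.18 = 0.60). A minimal sketch, assuming only the two summary statistics reported in the abstract:

```python
# Weak-performer cutoff as described in the abstract (Boons et al., 2012):
# flag an LQ more than one SD below the cohort mean. The function name is
# illustrative; only MEAN_LQ and SD_LQ come from the abstract.
MEAN_LQ = 0.78
SD_LQ = 0.18

def is_weak_performer(lq: float) -> bool:
    """Return True if the LQ is below mean - 1 SD (cutoff 0.60)."""
    cutoff = round(MEAN_LQ - SD_LQ, 2)  # round to avoid float noise
    return lq < cutoff

print(is_weak_performer(0.55))  # below the cutoff
print(is_weak_performer(0.78))  # at the cohort mean
```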

  15. The Functional Organisation of the Fronto-Temporal Language System: Evidence from Syntactic and Semantic Ambiguity

    Science.gov (United States)

    Rodd, Jennifer M.; Longe, Olivia A.; Randall, Billi; Tyler, Lorraine K.

    2010-01-01

    Spoken language comprehension is known to involve a large left-dominant network of fronto-temporal brain regions, but there is still little consensus about how the syntactic and semantic aspects of language are processed within this network. In an fMRI study, volunteers heard spoken sentences that contained either syntactic or semantic ambiguities…

  16. Language and memory abilities of internationally adopted children from China: evidence for early age effects.

    Science.gov (United States)

    Delcenserie, Audrey; Genesee, Fred

    2014-11-01

    The goal of the present study was to examine if internationally adopted (IA) children from China (M = 10;8) adopted by French-speaking families exhibit lags in verbal memory in addition to lags in verbal abilities documented in previous studies (Gauthier & Genesee, 2011). Tests assessing verbal and non-verbal memory, language, non-verbal cognitive ability, and socio-emotional development were administered to thirty adoptees. Their results were compared to those of thirty non-adopted monolingual French-speaking children matched on age, gender, and socioeconomic status. The IA children scored significantly lower than the controls on language, verbal short-term memory, verbal working memory, and verbal long-term memory. No group differences were found on non-verbal memory, non-verbal cognitive ability, and socio-emotional development, suggesting language-specific difficulties. Despite extended exposure to French, adoptees may experience language difficulties due to limitations in verbal memory, possibly as a result of their delayed exposure to that language and/or attrition of the birth language.

  17. A grammar of Abui : A Papuan language of Alor

    NARCIS (Netherlands)

    Kratochvil, František

    2007-01-01

This work contains the first comprehensive description of Abui, a language of the Trans New Guinea family spoken by approximately 16,000 speakers in the central part of Alor Island in Eastern Indonesia. The description focuses on the northern dialect of Abui as spoken in the village

  18. Between Syntax and Pragmatics: The Causal Conjunction Protože in Spoken and Written Czech

    Czech Academy of Sciences Publication Activity Database

    Čermáková, Anna; Komrsková, Zuzana; Kopřivová, Marie; Poukarová, Petra

    -, 25.04.2017 (2017), s. 393-414 ISSN 2509-9507 R&D Projects: GA ČR GA15-01116S Institutional support: RVO:68378092 Keywords : Causality * Discourse marker * Spoken language * Czech Subject RIV: AI - Linguistics OBOR OECD: Linguistics https://link.springer.com/content/pdf/10.1007%2Fs41701-017-0014-y.pdf

  19. THE INFLUENCE OF LANGUAGE USE AND LANGUAGE ATTITUDE ON THE MAINTENANCE OF COMMUNITY LANGUAGES SPOKEN BY MIGRANT STUDENTS

    Directory of Open Access Journals (Sweden)

    Leni Amalia Suek

    2014-05-01

Full Text Available The maintenance of the community languages of migrant students is heavily determined by language use and language attitudes. The superiority of a dominant language over a community language shapes migrant students' attitudes toward their native languages. When they perceive their native languages as unimportant, they reduce the frequency of using those languages, even in the home domain. Solutions to the problem of maintaining community languages should address language use and attitudes, which develop mostly in two important domains: school and family. Hence, the valorization of community languages should be promoted not only in the family domain but also in the school domain. Programs such as community language schools and community language classes can give migrant students opportunities to practice and use their native languages. Since educational resources such as class sessions, teachers, and government support are limited, the family plays a significant role in stimulating positive attitudes toward the community language and in developing the use of native languages.

  20. Early Childhood Stuttering and Electrophysiological Indices of Language Processing

    Science.gov (United States)

    Weber-Fox, Christine; Wray, Amanda Hampton; Arnold, Hayley

    2013-01-01

We examined neural activity mediating semantic and syntactic processing in 27 preschool-age children who stutter (CWS) and 27 preschool-age children who do not stutter (CWNS) matched for age, nonverbal IQ, and language abilities. All participants displayed language abilities and nonverbal IQ within the normal range. Event-related brain potentials (ERPs) were elicited while participants watched a cartoon video and heard naturally spoken sentences that were either correct or contained semantic or syntactic (phrase structure) violations. ERPs in CWS, compared to CWNS, were characterized by longer N400 peak latencies elicited by semantic processing. In the CWS, syntactic violations elicited greater negative amplitudes for the early time window (150–350 ms) over medial sites compared to CWNS. Additionally, the amplitude of the P600 elicited by syntactic violations relative to control words was significant over the left hemisphere for the CWNS but showed the reverse pattern in CWS, a robust effect only over the right hemisphere. Both groups of preschool-age children demonstrated marked and differential effects for neural processes elicited by semantic and phrase structure violations; however, a significant proportion of young CWS exhibit differences in the neural functions mediating language processing compared to CWNS despite normal language abilities. These results are the first to show that differences in event-related brain potentials reflecting language processing occur as early as the preschool years in CWS and provide the first evidence that atypical lateralization of hemispheric speech/language functions previously observed in the brains of adults who stutter begin to emerge near the onset of developmental stuttering. PMID:23773672

  1. Young children's communication and literacy: a qualitative study of language in the inclusive preschool.

    Science.gov (United States)

    Kliewer, C

    1995-06-01

Interactive and literacy-based language use of young children within the context of an inclusive preschool classroom was explored. An interpretivist framework and qualitative research methods, including participant observation, were used to examine and analyze language in five preschool classes that were composed of children with and without disabilities. Children's language use included spoken, written, signed, and typed forms. Results showed complex communicative and literacy language use on the part of young children outside conventional adult perspectives. Also, children who used expressive methods other than speech were often left out of the contexts where spoken language was richest and most complex.

  2. Second language pragmatic ability: Individual differences according to environment

    Directory of Open Access Journals (Sweden)

    Lauren Wyner

    2015-12-01

    Full Text Available The aims of this paper are to review research literature on the role that the second language (L2 and foreign language (FL environments actually play in the development of learners’ target language (TL pragmatic ability, and also to speculate as to the extent to which individual factors can offset the advantages that learners may have by being in the L2 context while they are learning. The paper starts by defining pragmatics and by problematizing this definition. Then, attention is given to research literature dealing with the learning of pragmatics in an L2 context compared to an FL context. Next, studies on the role of pragmatic transfer are considered, with subsequent attention given to the literature on the incidence of pragmatic transfer in FL as opposed to L2 contexts. Finally, selected studies on the role of motivation in the development of pragmatic ability are examined. In the discussion section, a number of pedagogical suggestions are offered: the inclusion of pragmatics in teacher development, the use of authentic pragmatics materials, motivating learners to be more savvy about pragmatics, and supporting learners in accepting or challenging native-speaker norms. Suggestions as to further research in the field are also offered.

  3. Teaching natural language to computers

    OpenAIRE

    Corneli, Joseph; Corneli, Miriam

    2016-01-01

    "Natural Language," whether spoken and attended to by humans, or processed and generated by computers, requires networked structures that reflect creative processes in semantic, syntactic, phonetic, linguistic, social, emotional, and cultural modules. Being able to produce novel and useful behavior following repeated practice gets to the root of both artificial intelligence and human language. This paper investigates the modalities involved in language-like applications that computers -- and ...

  4. Effects of topiramate on language functions in newly diagnosed pediatric epileptic patients.

    Science.gov (United States)

    Kim, Sun Jun; Kim, Moon Yeon; Choi, Yoon Mi; Song, Mi Kyoung

    2014-09-01

The aim of this study was to characterize the effects of topiramate on language functions in newly diagnosed pediatric epileptic patients. Thirty-eight newly diagnosed epileptic patients were assessed using standard language tests. Data were collected before and after beginning topiramate, during which time a monotherapy treatment regimen was maintained. Language tests included the Test of Language Problem Solving Abilities and a Korean version of the Peabody Picture Vocabulary Test. We used the Korean versions of the tests because all the patients spoke Korean exclusively in their families. All the language parameters of the Test of Language Problem Solving Abilities worsened after initiation of topiramate (determine cause, 13.2 ± 4.8 to 11.2 ± 4.3; problem solving, 14.8 ± 6.0 to 12.8 ± 5.0; predicting, 9.8 ± 3.6 to 8.8 ± 4.6). Patients given topiramate exhibited a shortened mean length of utterance in words during response (determine cause, 4.8 ± 0.9 to 4.3 ± 0.7; making inference, 4.5 ± 0.8 to 4.1 ± 1.1; predicting, 5.2 ± 1.0 to 4.7 ± 0.6). Scores on the Peabody Picture Vocabulary Test after taking topiramate changed from 95.4 ± 20.4 to 100.8 ± 19.1. Our data suggest that topiramate may have negative effects on problem-solving abilities in children. We recommend that language testing be considered in children being treated with topiramate. Copyright © 2014 Elsevier Inc. All rights reserved.

5. Une Progression dans la Strategie Pedagogique pour assurer la Construction de Langage Oral a l'Ecole Maternelle [A Progression in Teaching Strategies to Ensure Oral Language Building in Nursery School].

    Science.gov (United States)

    Durand, C.

    1997-01-01

    Summarizes progressions between 2 and 6 years of age in children's power of concentration, ability to express ideas, build logical relationships, structure spoken words, and play with the semantic, phonetic, syntactical, and morphological aspects of oral language. Notes that the progression depends on the educator's interaction with the child.…

  6. E-cigarette use and disparities by race, citizenship status and language among adolescents.

    Science.gov (United States)

    Alcalá, Héctor E; Albert, Stephanie L; Ortega, Alexander N

    2016-06-01

E-cigarette use among adolescents is on the rise in the U.S. However, limited attention has been given to examining the role of race, citizenship status and language spoken at home in shaping e-cigarette use behavior. Data are from the 2014 Adolescent California Health Interview Survey, which interviewed 1052 adolescents ages 12-17. Lifetime e-cigarette use was examined by sociodemographic characteristics. Separate logistic regression models predicted odds of ever-smoking e-cigarettes from race, citizenship status and language spoken at home. Sociodemographic characteristics were then added to these models as control variables, and a model with all three predictors and controls was run. Similar models were run with conventional smoking as an outcome. 10.3% of adolescents had ever used e-cigarettes. E-cigarette use was higher among ever-smokers of conventional cigarettes, individuals above 200% of the Federal Poverty Level, US citizens and those who spoke English only at home. Multivariate analyses demonstrated that citizenship status and language spoken at home were associated with lifetime e-cigarette use after accounting for control variables. Only citizenship status was associated with e-cigarette use when control variables, race, and language spoken at home were all in the same model. Ever use of e-cigarettes in this study was higher than previously reported national estimates. Action is needed to curb the use of e-cigarettes among adolescents. Differences in lifetime e-cigarette use by citizenship status and language spoken at home suggest that less acculturated individuals use e-cigarettes at lower rates. Copyright © 2016 Elsevier Ltd. All rights reserved.

  7. Language structure is partly determined by social structure.

    Directory of Open Access Journals (Sweden)

    Gary Lupyan

    Full Text Available BACKGROUND: Languages differ greatly both in their syntactic and morphological systems and in the social environments in which they exist. We challenge the view that language grammars are unrelated to social environments in which they are learned and used. METHODOLOGY/PRINCIPAL FINDINGS: We conducted a statistical analysis of >2,000 languages using a combination of demographic sources and the World Atlas of Language Structures--a database of structural language properties. We found strong relationships between linguistic factors related to morphological complexity, and demographic/socio-historical factors such as the number of language users, geographic spread, and degree of language contact. The analyses suggest that languages spoken by large groups have simpler inflectional morphology than languages spoken by smaller groups as measured on a variety of factors such as case systems and complexity of conjugations. Additionally, languages spoken by large groups are much more likely to use lexical strategies in place of inflectional morphology to encode evidentiality, negation, aspect, and possession. Our findings indicate that just as biological organisms are shaped by ecological niches, language structures appear to adapt to the environment (niche in which they are being learned and used. As adults learn a language, features that are difficult for them to acquire, are less likely to be passed on to subsequent learners. Languages used for communication in large groups that include adult learners appear to have been subjected to such selection. Conversely, the morphological complexity common to languages used in small groups increases redundancy which may facilitate language learning by infants. CONCLUSIONS/SIGNIFICANCE: We hypothesize that language structures are subjected to different evolutionary pressures in different social environments. Just as biological organisms are shaped by ecological niches, language structures appear to adapt to the

  8. Schools and Languages in India.

    Science.gov (United States)

    Harrison, Brian

    1968-01-01

    A brief review of Indian education focuses on special problems caused by overcrowded schools, insufficient funding, and the status of education itself in the Indian social structure. Language instruction in India, a complex issue due largely to the numerous official languages currently spoken, is commented on with special reference to the problem…

  9. Relations among the Home Language and Literacy Environment and Children's Language Abilities: A Study of Head Start Dual Language Learners and Their Mothers

    Science.gov (United States)

    Lewis, Kandia; Sandilos, Lia E.; Hammer, Carol Scheffner; Sawyer, Brook E.; Méndez, Lucía I.

    2016-01-01

    Research Findings: This study explored the relations between Spanish-English dual language learner (DLL) children's home language and literacy experiences and their expressive vocabulary and oral comprehension abilities in Spanish and in English. Data from Spanish-English mothers of 93 preschool-age Head Start children who resided in central…

  10. Australian Aboriginal Deaf People and Aboriginal Sign Language

    Science.gov (United States)

    Power, Des

    2013-01-01

    Many Australian Aboriginal people use a sign language ("hand talk") that mirrors their local spoken language and is used both in culturally appropriate settings when speech is taboo or counterindicated and for community communication. The characteristics of these languages are described, and early European settlers' reports of deaf…

  11. Language Planning for Venezuela: The Role of English.

    Science.gov (United States)

    Kelsey, Irving; Serrano, Jose

    A rationale for teaching foreign languages in Venezuelan schools is discussed. An included sociolinguistic profile of Venezuela indicates that Spanish is the sole language of internal communication needs. Other languages spoken in Venezuela serve primarily a group function among the immigrant and indigenous communities. However, the teaching of…

  12. 125 The Fading Phase of Igbo Language and Culture: Path to its ...

    African Journals Online (AJOL)

    Tracie1

favour of foreign language (and culture). They also … native language, and children are unable to learn a language not spoken … shielding them off their mother tongue”. … the effect endangered language has on the existence of the owners.

  13. MORPHOLOGICAL POS TAGGING IN ORAL LANGUAGE CORPUS: CHALLENGES FOR AELIUS

    Directory of Open Access Journals (Sweden)

    Gabriel de Ávila Othero

    2014-12-01

Full Text Available In this paper, we present the results of our work with automatic morphological annotation of excerpts from a corpus of spoken language – belonging to the VARSUL project – using the free morphosyntactic tagger Aelius. We present 20 texts containing 154,530 words, annotated automatically and corrected manually. This paper presents the tagger Aelius and our work of manual review of the texts, as well as our suggestions for improvements to the tool concerning aspects of oral texts. We verify the performance of morphosyntactic tagging on a spoken language corpus, an unprecedented challenge for the tagger. Based on the errors of the tagger, we try to infer certain patterns of annotation to overcome limitations presented by the program, and we propose implementations that would allow Aelius to tag spoken language corpora more effectively, especially in cases such as interjections, apheresis, onomatopoeia, and conversational markers.

  14. Memory Abilities in Children with Mathematical Difficulties: Comorbid Language Difficulties Matter

    Science.gov (United States)

    Reimann, Giselle; Gut, Janine; Frischknecht, Marie-Claire; Grob, Alexander

    2013-01-01

    The present study investigated cognitive abilities in children with difficulties in mathematics only (n = 48, M = 8 years and 5 months), combined mathematical and language difficulty (n = 27, M = 8 years and 1 month) and controls (n = 783, M = 7 years and 11 months). Cognitive abilities were measured with seven subtests, tapping visual perception,…

  15. Differences Across Levels in the Language of Agency and Ability in Rating Scales for Large-Scale Second Language Writing Assessments

    Directory of Open Access Journals (Sweden)

    Anderson Salena Sampson

    2017-12-01

    Full Text Available While large-scale language and writing assessments benefit from a wealth of literature on the reliability and validity of specific tests and rating procedures, there is comparatively less literature that explores the specific language of second language writing rubrics. This paper provides an analysis of the language of performance descriptors for the public versions of the TOEFL and IELTS writing assessment rubrics, with a focus on linguistic agency encoded by agentive verbs and language of ability encoded by modal verbs can and cannot. While the IELTS rubrics feature more agentive verbs than the TOEFL rubrics, both pairs of rubrics feature uneven syntax across the band or score descriptors with either more agentive verbs for the highest scores, more nominalization for the lowest scores, or language of ability exclusively in the lowest scores. These patterns mirror similar patterns in the language of college-level classroom-based writing rubrics, but they differ from patterns seen in performance descriptors for some large-scale admissions tests. It is argued that the lack of syntactic congruity across performance descriptors in the IELTS and TOEFL rubrics may reflect a bias in how actual student performances at different levels are characterized.

  16. Estimating Spoken Dialog System Quality with User Models

    CERN Document Server

    Engelbrecht, Klaus-Peter

    2013-01-01

    Spoken dialog systems have the potential to offer highly intuitive user interfaces, as they allow systems to be controlled using natural language. However, the complexity inherent in natural language dialogs means that careful testing of the system must be carried out from the very beginning of the design process.   This book examines how user models can be used to support such early evaluations in two ways:  by running simulations of dialogs, and by estimating the quality judgments of users. First, a design environment supporting the creation of dialog flows, the simulation of dialogs, and the analysis of the simulated data is proposed.  How the quality of user simulations may be quantified with respect to their suitability for both formative and summative evaluation is then discussed. The remainder of the book is dedicated to the problem of predicting quality judgments of users based on interaction data. New modeling approaches are presented, which process the dialogs as sequences, and which allow knowl...

  17. Working Memory Load Affects Processing Time in Spoken Word Recognition: Evidence from Eye-Movements

    Science.gov (United States)

    Hadar, Britt; Skrzypek, Joshua E.; Wingfield, Arthur; Ben-David, Boaz M.

    2016-01-01

    In daily life, speech perception is usually accompanied by other tasks that tap into working memory capacity. However, the role of working memory on speech processing is not clear. The goal of this study was to examine how working memory load affects the timeline for spoken word recognition in ideal listening conditions. We used the “visual world” eye-tracking paradigm. The task consisted of spoken instructions referring to one of four objects depicted on a computer monitor (e.g., “point at the candle”). Half of the trials presented a phonological competitor to the target word that either overlapped in the initial syllable (onset) or at the last syllable (offset). Eye movements captured listeners' ability to differentiate the target noun from its depicted phonological competitor (e.g., candy or sandal). We manipulated working memory load by using a digit pre-load task, where participants had to retain either one (low-load) or four (high-load) spoken digits for the duration of a spoken word recognition trial. The data show that the high-load condition delayed real-time target discrimination. Specifically, a four-digit load was sufficient to delay the point of discrimination between the spoken target word and its phonological competitor. Our results emphasize the important role working memory plays in speech perception, even when performed by young adults in ideal listening conditions. PMID:27242424

  18. Structural borrowing: The case of Kenyan Sign Language (KSL) and ...

    African Journals Online (AJOL)

Kenyan Sign Language (KSL) is a visual-gestural language used by members of the deaf community in Kenya. Kiswahili, on the other hand, is a Bantu language that is used as the national language of Kenya. The two are worlds apart, one being a spoken language and the other a signed language, and thus their “… basic ...

  19. Syntactic priming in American Sign Language.

    Science.gov (United States)

    Hall, Matthew L; Ferreira, Victor S; Mayberry, Rachel I

    2015-01-01

    Psycholinguistic studies of sign language processing provide valuable opportunities to assess whether language phenomena, which are primarily studied in spoken language, are fundamentally shaped by peripheral biology. For example, we know that when given a choice between two syntactically permissible ways to express the same proposition, speakers tend to choose structures that were recently used, a phenomenon known as syntactic priming. Here, we report two experiments testing syntactic priming of a noun phrase construction in American Sign Language (ASL). Experiment 1 shows that second language (L2) signers with normal hearing exhibit syntactic priming in ASL and that priming is stronger when the head noun is repeated between prime and target (the lexical boost effect). Experiment 2 shows that syntactic priming is equally strong among deaf native L1 signers, deaf late L1 learners, and hearing L2 signers. Experiment 2 also tested for, but did not find evidence of, phonological or semantic boosts to syntactic priming in ASL. These results show that despite the profound differences between spoken and signed languages in terms of how they are produced and perceived, the psychological representation of sentence structure (as assessed by syntactic priming) operates similarly in sign and speech.

  20. Spoken Narrative Assessment: A Supplementary Measure of Children's Creativity

    Science.gov (United States)

    Wong, Miranda Kit-Yi; So, Wing Chee

    2016-01-01

    This study developed a spoken narrative (i.e., storytelling) assessment as a supplementary measure of children's creativity. Both spoken and gestural contents of children's spoken narratives were coded to assess their verbal and nonverbal creativity. The psychometric properties of the coding system for the spoken narrative assessment were…

  1. Chinese EFL teachers' knowledge of basic language constructs and their self-perceived teaching abilities.

    Science.gov (United States)

    Zhao, Jing; Joshi, R Malatesha; Dixon, L Quentin; Huang, Liyan

    2016-04-01

    The present study examined the knowledge and skills of basic language constructs among elementary school teachers who were teaching English as a Foreign Language (EFL) in China. Six hundred and thirty in-service teachers completed the adapted Reading Teacher Knowledge Survey. Survey results showed that English teachers' self-perceived ability to teach vocabulary was the highest and self-perceived ability to teach reading to struggling readers was the lowest. Morphological knowledge was positively correlated with teachers' self-perceived teaching abilities, and it contributed unique variance even after controlling for the effects of ultimate educational attainment and years of teaching. Findings suggest that elementary school EFL teachers in China, on average, were able to display implicit skills related to certain basic language constructs, but less able to demonstrate explicit knowledge of other skills, especially sub-lexical units (e.g., phonemic awareness and morphemes). The high self-perceived ability of teaching vocabulary and high scores on syllable counting reflected the focus on larger units in the English reading curriculum.

  2. Stability in Chinese and Malay heritage languages as a source of divergence

    NARCIS (Netherlands)

    Aalberse, S.; Moro, F.; Braunmüller, K.; Höder, S.; Kühl, K.

    2014-01-01

    This article discusses Malay and Chinese heritage languages as spoken in the Netherlands. Heritage speakers are dominant in another language and use their heritage language less. Moreover, they have qualitatively and quantitatively different input from monolinguals. Heritage languages are often

  3. Stability in Chinese and Malay heritage languages as a source of divergence

    NARCIS (Netherlands)

    Aalberse, S.; Moro, F.R.; Braunmüller, K.; Höder, S.; Kühl, K.

    2015-01-01

    This article discusses Malay and Chinese heritage languages as spoken in the Netherlands. Heritage speakers are dominant in another language and use their heritage language less. Moreover, they have qualitatively and quantitatively different input from monolinguals. Heritage languages are often

  4. Selected Topics from LVCSR Research for Asian Languages at Tokyo Tech

    Science.gov (United States)

    Furui, Sadaoki

    This paper presents our recent work in regard to building Large Vocabulary Continuous Speech Recognition (LVCSR) systems for the Thai, Indonesian, and Chinese languages. For Thai, since there is no word boundary in the written form, we have proposed a new method for automatically creating word-like units from a text corpus, and applied topic and speaking style adaptation to the language model to recognize spoken-style utterances. For Indonesian, we have applied proper noun-specific adaptation to acoustic modeling, and rule-based English-to-Indonesian phoneme mapping to solve the problem of large variation in proper noun and English word pronunciation in a spoken-query information retrieval system. In spoken Chinese, long organization names are frequently abbreviated, and abbreviated utterances cannot be recognized if the abbreviations are not included in the dictionary. We have proposed a new method for automatically generating Chinese abbreviations, and by expanding the vocabulary using the generated abbreviations, we have significantly improved the performance of spoken query-based search.

  5. Inflectional and derivational morphological spelling abilities of children with Specific Language Impairment.

    Science.gov (United States)

    Critten, Sarah; Connelly, Vincent; Dockrell, Julie E; Walter, Kirsty

    2014-01-01

    Children with Specific Language Impairment (SLI) are known to have difficulties with spelling, but the factors that underpin these difficulties are a matter of debate. The present study investigated the impact of oral language and literacy on the bound-morpheme spelling abilities of children with SLI. Thirty-three children with SLI (9-10 years) and two control groups, one matched for chronological age (CA) and one for language and spelling age (LA; aged 6-8 years), were given dictated spelling tasks of 24 words containing inflectional morphemes and 18 words containing derivational morphemes. There were no significant differences between the SLI group and their LA matches in accuracy or error patterns for inflectional morphemes. By contrast, when spelling derivational morphemes the SLI group was less accurate and made proportionately more omissions and phonologically implausible errors than both control groups. Spelling accuracy was associated with phonological awareness and reading; reading performance significantly predicted the ability to spell both inflectional and derivational morphemes. The particular difficulties experienced by the children with SLI with derivational morphemes are considered in relation to reading and oral language.

  6. Beyond Languages, beyond Modalities: Transforming the Study of Semiotic Repertoires

    Science.gov (United States)

    Kusters, Annelies; Spotti, Massimiliano; Swanwick, Ruth; Tapio, Elina

    2017-01-01

    This paper presents a critical examination of key concepts in the study of (signed and spoken) language and multimodality. It shows how shifts in conceptual understandings of language use, moving from bilingualism to multilingualism and (trans)languaging, have resulted in the revitalisation of the concept of language repertoires. We discuss key…

  7. Learning a Minoritized Language in a Majority Language Context: Student Agency and the Creation of Micro-Immersion Contexts

    Science.gov (United States)

    DePalma, Renée

    2015-01-01

    This study investigates the self-reported experiences of students participating in a Galician language and culture course. Galician, a language historically spoken in northwestern Spain, has been losing ground with respect to Spanish, particularly in urban areas and among the younger generations. The research specifically focuses on informal…

  8. Language Outcomes in Children Who Are Deaf and Hard of Hearing: The Role of Language Ability Before Hearing Aid Intervention.

    Science.gov (United States)

    Daub, Olivia; Bagatto, Marlene P; Johnson, Andrew M; Cardy, Janis Oram

    2017-11-09

    Early auditory experiences are fundamental in infant language acquisition. Research consistently demonstrates the benefits of early intervention (i.e., hearing aids) to language outcomes in children who are deaf and hard of hearing. The nature of these benefits and their relation with prefitting development are, however, not well understood. This study examined Ontario Infant Hearing Program birth cohorts to explore predictors of performance on the Preschool Language Scale-Fourth Edition at the time of (N = 47) and after (N = 19) initial hearing aid intervention. Regression analyses revealed that, before the hearing aid fitting, severity of hearing loss negatively predicted 19% and 10% of the variance in auditory comprehension and expressive communication, respectively. After hearing aid fitting, children's standard scores on language measures remained stable, but they made significant improvement in their progress values, which represent individual skills acquired on the test, rather than standing relative to same-age peers. Magnitude of change in progress values was predicted by a negative interaction of prefitting language ability and severity of hearing loss for the Auditory Comprehension scale. These findings highlight the importance of considering a child's prefitting language ability in interpreting eventual language outcomes. Possible mechanisms of hearing aid benefit are discussed. https://doi.org/10.23641/asha.5538868.

  9. Translingualism and Second Language Acquisition: Language Ideologies of Gaelic Medium Education Teachers in a Linguistically Liminal Setting

    Science.gov (United States)

    Knipe, John

    2017-01-01

    Scottish Gaelic, among the nearly 7,000 languages spoken in the world today, is endangered. In the 1980s the Gaelic Medium Education (GME) movement emerged with an emphasis on teaching students all subjects via this ancient tongue with the hope of revitalizing the language. Concomitantly, many linguists have called for problematizing traditional…

  10. Bimodal bilingualism as multisensory training?: Evidence for improved audiovisual speech perception after sign language exposure.

    Science.gov (United States)

    Williams, Joshua T; Darcy, Isabelle; Newman, Sharlene D

    2016-02-15

    The aim of the present study was to characterize the effects of learning a sign language on the processing of a spoken language. Specifically, audiovisual phoneme comprehension was assessed before and after 13 weeks of sign language exposure. L2 ASL learners performed this task in the fMRI scanner. Results indicated that L2 American Sign Language (ASL) learners' behavioral classification of the speech sounds improved with time compared to hearing nonsigners. Results also indicated increased activation in the supramarginal gyrus (SMG) after sign language exposure, which suggests concomitant increased phonological processing of speech. A multiple regression analysis indicated that learners' ratings of co-sign speech use and lipreading ability were correlated with SMG activation. This pattern of results indicates that the increased use of mouthing, and possibly lipreading, during sign language acquisition may concurrently improve audiovisual speech processing in budding hearing bimodal bilinguals. Copyright © 2015 Elsevier B.V. All rights reserved.

  11. Finding words in a language that allows words without vowels.

    Science.gov (United States)

    El Aissati, Abder; McQueen, James M; Cutler, Anne

    2012-07-01

    Across many languages from unrelated families, spoken-word recognition is subject to a constraint whereby potential word candidates must contain a vowel. This constraint minimizes competition from embedded words (e.g., in English, disfavoring win in twin because t cannot be a word). However, the constraint would be counter-productive in certain languages that allow stand-alone vowelless open-class words. One such language is Berber (where t is indeed a word). Berber listeners here detected words affixed to nonsense contexts with or without vowels. Length effects seen in other languages replicated in Berber, but in contrast to prior findings, word detection was not hindered by vowelless contexts. When words can be vowelless, otherwise universal constraints disfavoring vowelless words do not feature in spoken-word recognition. Copyright © 2012 Elsevier B.V. All rights reserved.
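
    The possible-word constraint described above can be illustrated with a toy segmentation check. This is a hypothetical sketch, not the authors' model: the vowel inventory, the example carriers, and the `residue_is_possible` helper are invented for illustration.

```python
# Toy illustration of a possible-word constraint: an embedded word
# candidate is disfavored if extracting it leaves a residue with no vowel.
VOWELS = set("aeiou")

def residue_is_possible(carrier, candidate):
    """True if removing `candidate` from `carrier` leaves residues
    that are each either empty or contain at least one vowel."""
    i = carrier.find(candidate)
    if i == -1:
        return False
    before, after = carrier[:i], carrier[i + len(candidate):]
    return all(not part or bool(set(part) & VOWELS) for part in (before, after))

print(residue_is_possible("twin", "win"))    # False: residue "t" has no vowel
print(residue_is_possible("begin", "gin"))   # True: residue "be" has a vowel
```

    In a language like Berber, where vowelless open-class words are legal, such a filter would wrongly reject valid parses, which is why the constraint does not apply there.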

  12. Speed of Language Comprehension at 18 Months Old Predicts School-Relevant Outcomes at 54 Months Old in Children Born Preterm.

    Science.gov (United States)

    Marchman, Virginia A; Loi, Elizabeth C; Adams, Katherine A; Ashland, Melanie; Fernald, Anne; Feldman, Heidi M

    2018-04-01

    Identifying which preterm (PT) children are at increased risk of language and learning differences increases opportunities for participation in interventions that improve outcomes. Speed in spoken language comprehension at early stages of language development requires information processing skills that may form the foundation for later language and school-relevant skills. In children born full-term, speed of comprehending words in an eye-tracking task at 2 years old predicted language and nonverbal cognition at 8 years old. Here, we explore the extent to which speed of language comprehension at 1.5 years old predicts both verbal and nonverbal outcomes at 4.5 years old in children born PT. Participants were children born PT (n = 47; ≤32 weeks gestation). Children were tested in the "looking-while-listening" task at 18 months old, adjusted for prematurity, to generate a measure of speed of language comprehension. Parent report and direct assessments of language were also administered. Children were later retested on a test battery of school-relevant skills at 4.5 years old. Speed of language comprehension at 18 months old predicted significant unique variance (12%-31%) in receptive vocabulary, global language abilities, and nonverbal intelligence quotient (IQ) at 4.5 years, controlling for socioeconomic status, gestational age, and medical complications of PT birth. Speed of language comprehension remained uniquely predictive (5%-12%) when also controlling for children's language skills at 18 months old. Individual differences in speed of spoken language comprehension may serve as a marker for neuropsychological processes that are critical for the development of school-relevant linguistic skills and nonverbal IQ in children born PT.
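
    The "unique variance" figures above come from regressions that add the predictor of interest on top of covariates and compare fit. A minimal sketch of that hierarchical-regression logic on simulated data (the variable names, effect sizes, and sample are invented; this is not the study's dataset or analysis code):

```python
# Illustrative sketch: unique variance of a predictor over covariates,
# estimated as the change in R^2 between two nested OLS models.
# All data below are simulated for illustration only.
import numpy as np

rng = np.random.default_rng(0)
n = 47                                   # sample size matching the abstract
ses, ga = rng.normal(size=(2, n))        # simulated covariates (SES, gestational age)
speed = rng.normal(size=n)               # simulated processing-speed measure
outcome = 0.5 * speed + 0.3 * ses + rng.normal(scale=0.8, size=n)

def r_squared(predictors, y):
    """R^2 of an OLS fit with intercept, given a list of predictor arrays."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

r2_base = r_squared([ses, ga], outcome)          # covariates only
r2_full = r_squared([ses, ga, speed], outcome)   # covariates + predictor
print(f"unique variance of speed: {r2_full - r2_base:.2%}")
```

    The difference `r2_full - r2_base` is the share of outcome variance the predictor explains beyond the covariates, the same quantity the abstract reports as 12%-31%.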

  13. Speech, gesture and the origins of language

    NARCIS (Netherlands)

    Levelt, W.J.M.

    2004-01-01

    During the second half of the 19th century, the psychology of language was invented as a discipline for the sole purpose of explaining the evolution of spoken language. These efforts culminated in Wilhelm Wundt’s monumental Die Sprache of 1900, which outlined the psychological mechanisms involved in

  14. Language balance and switching ability in children acquiring English as a second language.

    Science.gov (United States)

    Goriot, Claire; Broersma, Mirjam; McQueen, James M; Unsworth, Sharon; van Hout, Roeland

    2018-09-01

    This study investigated whether relative lexical proficiency in Dutch and English in child second language (L2) learners is related to executive functioning. Participants were Dutch primary school pupils of three different age groups (4-5, 8-9, and 11-12 years) who either were enrolled in an early-English schooling program or were age-matched controls not on that early-English program. Participants performed tasks that measured switching, inhibition, and working memory. Early-English program pupils had greater knowledge of English vocabulary and more balanced Dutch-English lexicons. In both groups, lexical balance, a ratio measure obtained by dividing vocabulary scores in English by those in Dutch, was related to switching but not to inhibition or working memory performance. These results show that for children who are learning an L2 in an instructional setting, and for whom managing two languages is not yet an automatized process, language balance may be more important than L2 proficiency in influencing the relation between childhood bilingualism and switching abilities. Copyright © 2018 Elsevier Inc. All rights reserved.
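
    The balance measure described above is a simple ratio of vocabulary scores. A minimal sketch, with invented scores (the `lexical_balance` helper and the example values are hypothetical, not taken from the study):

```python
# Hypothetical sketch of the ratio measure: lexical balance as the
# English (L2) vocabulary score divided by the Dutch (L1) score.
# The pupil labels and scores below are invented for illustration.

def lexical_balance(english_score, dutch_score):
    """Ratio of L2 (English) to L1 (Dutch) vocabulary scores;
    values near 1 indicate a more balanced bilingual lexicon."""
    if dutch_score == 0:
        raise ValueError("Dutch score must be non-zero")
    return english_score / dutch_score

pupils = {"early-English pupil": (38, 52), "control pupil": (12, 55)}
for label, (en, nl) in pupils.items():
    print(f"{label}: balance = {lexical_balance(en, nl):.2f}")
```

    A higher ratio marks a more balanced lexicon; in the study it was this balance, rather than raw L2 proficiency, that related to switching performance.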

  15. Guest Comment: Universal Language Requirement.

    Science.gov (United States)

    Sherwood, Bruce Arne

    1979-01-01

    Explains that the ability to read English is almost universal among scientists; however, there are enormous problems with spoken English. Advocates the use of Esperanto as a viable alternative and as a language requirement for graduate work. (GA)

  16. A word by any other intonation: fMRI evidence for implicit memory traces for pitch contours of spoken words in adult brains.

    Directory of Open Access Journals (Sweden)

    Michael Inspector

    Full Text Available OBJECTIVES: Intonation may serve as a cue for facilitated recognition and processing of spoken words, and it has been suggested that the pitch contour of spoken words is implicitly remembered. Thus, using the repetition suppression (RS) effect of BOLD-fMRI signals, we tested whether the same spoken words are differentially processed in language and auditory brain areas depending on whether or not they retain an arbitrary intonation pattern. EXPERIMENTAL DESIGN: Words were presented repeatedly in three blocks for passive and active listening tasks. There were three prosodic conditions, in each of which a different set of words was used and specific task-irrelevant intonation changes were applied: (i) all words were presented in a set flat monotonous pitch contour; (ii) each word had an arbitrary pitch contour that was kept constant throughout the three repetitions; (iii) each word had a different arbitrary pitch contour in each of its repetitions. PRINCIPAL FINDINGS: The repeated presentation of words with a set pitch contour resulted in robust behavioral priming effects as well as in significant RS of the BOLD signals in primary auditory cortex (BA 41), temporal areas (BA 21/22, bilaterally), and in Broca's area. However, changing the intonation of the same words on each successive repetition resulted in reduced behavioral priming and the abolition of RS effects. CONCLUSIONS: Intonation patterns are retained in memory even when the intonation is task-irrelevant. Implicit memory traces for the pitch contour of spoken words were reflected in facilitated neuronal processing in auditory and language-associated areas. Thus, the results lend support to the notion that prosody, and specifically pitch contour, is strongly associated with the memory representation of spoken words.

  17. Assessing spoken word recognition in children who are deaf or hard of hearing: a translational approach.

    Science.gov (United States)

    Kirk, Karen Iler; Prusick, Lindsay; French, Brian; Gotch, Chad; Eisenberg, Laurie S; Young, Nancy

    2012-06-01

    Under natural conditions, listeners use both auditory and visual speech cues to extract meaning from speech signals containing many sources of variability. However, traditional clinical tests of spoken word recognition routinely employ isolated words or sentences produced by a single talker in an auditory-only presentation format. The more central cognitive processes used during multimodal integration, perceptual normalization, and lexical discrimination that may contribute to individual variation in spoken word recognition performance are not assessed in conventional tests of this kind. In this article, we review our past and current research activities aimed at developing a series of new assessment tools designed to evaluate spoken word recognition in children who are deaf or hard of hearing. These measures are theoretically motivated by a current model of spoken word recognition and also incorporate "real-world" stimulus variability in the form of multiple talkers and presentation formats. The goal of this research is to enhance our ability to estimate real-world listening skills and to predict benefit from sensory aid use in children with varying degrees of hearing loss. American Academy of Audiology.

  18. El Espanol como Idioma Universal (Spanish as a Universal Language)

    Science.gov (United States)

    Mijares, Jose

    1977-01-01

    A proposal to transform Spanish into a universal language because it possesses the prerequisites: it is a living language, spoken in several countries; it is a natural language; and it uses the ordinary alphabet. Details on simplification and standardization are given. (Text is in Spanish.) (AMH)

  19. Spectrotemporal processing drives fast access to memory traces for spoken words.

    Science.gov (United States)

    Tavano, A; Grimm, S; Costa-Faidella, J; Slabu, L; Schröger, E; Escera, C

    2012-05-01

    The Mismatch Negativity (MMN) component of the event-related potentials is generated when a detectable spectrotemporal feature of the incoming sound does not match the sensory model set up by preceding repeated stimuli. MMN is enhanced at frontocentral scalp sites for deviant words when compared to acoustically similar deviant pseudowords, suggesting that automatic access to long-term memory traces for spoken words contributes to MMN generation. Does spectrotemporal feature matching also drive automatic lexical access? To test this, we recorded human auditory event-related potentials (ERPs) to disyllabic spoken words and pseudowords within a passive oddball paradigm. We first aimed at replicating the word-related MMN enhancement effect for Spanish, thereby adding to the available cross-linguistic evidence (e.g., Finnish, English). We then probed its resilience to spectrotemporal perturbation by inserting short (20 ms) and long (120 ms) silent gaps between first and second syllables of deviant and standard stimuli. A significantly enhanced, frontocentrally distributed MMN to deviant words was found for stimuli with no gap. The long gap yielded no deviant word MMN, showing that prior expectations of word form limits in a given language influence deviance detection processes. Crucially, the insertion of a short gap suppressed deviant word MMN enhancement at frontocentral sites. We propose that spectrotemporal point-wise matching constitutes a core mechanism for fast serial computations in audition and language, bridging sensory and long-term memory systems. Copyright © 2012 Elsevier Inc. All rights reserved.

  20. COMPARATIVE ANALYSIS OF EXISTING INTENSIVE METHODS OF TEACHING FOREIGN LANGUAGES

    Directory of Open Access Journals (Sweden)

    Maria Mytnyk

    2016-12-01

    Full Text Available The article presents a comparative analysis of existing intensive methods of teaching foreign languages, carried out to identify their positive and negative aspects. The author traces the idea of the rational organization and intensification of foreign language teaching from its inception to its consolidation into an integrated system, and analyzes the advantages and disadvantages of the most popular intensive methods characteristic of different historical periods, namely: the suggestopedic method of G. Lozanov, the method of activating the reserve capacities of learners of G. Kitaygorodskaya, the emotional-semantic method of I. Schechter, the intensive foreign language course of L. Gegechkori, the suggestocybernetic integral method of accelerated foreign language learning of V. Petrusinsky, and the immersion-based crash course in spoken language of A. Plesnevich. The principles of learning underlying each method, and each method's role in the development of intensive foreign language teaching, are analyzed. The author identifies a number of advantages of intensive methods of teaching foreign languages: (1) the assimilation of a large number of lexical and grammatical units; (2) the active use of acquired knowledge, skills, and abilities in oral communication in the foreign language; (3) the ability to use the acquired language material not only in one's own speech but also in understanding an interlocutor; (4) the overcoming of psychological barriers, including the fear of making a mistake; and (5) high efficiency and fast learning; as well as a number of disadvantages: (6) too much new language material presented at once; (7) training restricted to oral forms of communication; and (8) insufficient attention to grammatical units and models.

  1. Language and Culture in the Multiethnic Community: Spoken Language Assessment.

    Science.gov (United States)

    Matluck, Joseph H.; Mace-Matluck, Betty J.

    This paper discusses the sociolinguistic problems inherent in multilingual testing, and the accompanying dangers of cultural bias in either the visuals or the language used in a given test. The first section discusses English-speaking Americans' perception of foreign speakers in terms of: (1) physical features; (2) speech, specifically vocabulary,…

  2. Individual language experience modulates rapid formation of cortical memory circuits for novel words

    Science.gov (United States)

    Kimppa, Lilli; Kujala, Teija; Shtyrov, Yury

    2016-01-01

    Mastering multiple languages is an increasingly important ability in the modern world; furthermore, multilingualism may affect human learning abilities. Here, we test how the brain’s capacity to rapidly form new representations for spoken words is affected by prior individual experience in non-native language acquisition. Formation of new word memory traces is reflected in a neurophysiological response increase during a short exposure to a novel lexicon. Therefore, we recorded changes in electrophysiological responses to phonologically native and non-native novel word-forms during a perceptual learning session, in which novel stimuli were repetitively presented to healthy adults in either ignore or attend conditions. We found that a larger number of previously acquired languages and an earlier average age of acquisition (AoA) predicted a greater response increase to novel non-native word-forms. This suggests that early and extensive language experience is associated with greater neural flexibility for acquiring novel words with unfamiliar phonology. Conversely, later AoA was associated with a stronger response increase for phonologically native novel word-forms, indicating better tuning of neural linguistic circuits to native phonology. The results suggest that individual language experience has a strong effect on the neural mechanisms of word learning, and that it interacts with the phonological familiarity of the novel lexicon. PMID:27444206

  3. Right Hemisphere Grey Matter Volume and Language Functions in Stroke Aphasia

    Directory of Open Access Journals (Sweden)

    Sladjana Lukic

    2017-01-01

    Full Text Available The role of the right hemisphere (RH) in recovery from aphasia is incompletely understood. The present study quantified RH grey matter (GM) volume in individuals with chronic stroke-induced aphasia and in cognitively healthy people using voxel-based morphometry. We compared group differences in GM volume in the entire RH and in RH regions of interest. Given that lesion site is a critical source of heterogeneity associated with poststroke language ability, we used voxel-based lesion-symptom mapping (VLSM) to examine the relation between lesion site and language performance in the aphasic participants. Finally, using results derived from the VLSM as a covariate, we evaluated the relation between GM volume in the RH and language ability across domains, including comprehension and production processes at both the word and sentence levels and across spoken and written modalities. Between-subject comparisons showed that GM volume in the RH SMA was reduced in the aphasic group compared to the healthy controls. We also found that, for the aphasic group, increased RH volume in the MTG and the SMA was associated with better language comprehension and production scores, respectively. These data suggest that the RH may support functions previously performed by LH regions and have important implications for understanding poststroke reorganization.

  4. Beyond languages, beyond modalities: transforming the study of semiotic repertoires : Introduction

    NARCIS (Netherlands)

    Spotti, Max

    2017-01-01

    This paper presents a critical examination of key concepts in the study of (signed and spoken) language and multimodality. It shows how shifts in conceptual understandings of language use, moving from bilingualism to multilingualism and (trans)languaging, have resulted in the revitalisation of the

  5. Syntactic priming in American Sign Language.

    Directory of Open Access Journals (Sweden)

    Matthew L Hall

    Full Text Available Psycholinguistic studies of sign language processing provide valuable opportunities to assess whether language phenomena, which are primarily studied in spoken language, are fundamentally shaped by peripheral biology. For example, we know that when given a choice between two syntactically permissible ways to express the same proposition, speakers tend to choose structures that were recently used, a phenomenon known as syntactic priming. Here, we report two experiments testing syntactic priming of a noun phrase construction in American Sign Language (ASL). Experiment 1 shows that second language (L2) signers with normal hearing exhibit syntactic priming in ASL and that priming is stronger when the head noun is repeated between prime and target (the lexical boost effect). Experiment 2 shows that syntactic priming is equally strong among deaf native L1 signers, deaf late L1 learners, and hearing L2 signers. Experiment 2 also tested for, but did not find evidence of, phonological or semantic boosts to syntactic priming in ASL. These results show that despite the profound differences between spoken and signed languages in terms of how they are produced and perceived, the psychological representation of sentence structure (as assessed by syntactic priming) operates similarly in sign and speech.

  6. Three languages from America in contact with Spanish

    NARCIS (Netherlands)

    Bakker, D.; Sakel, J.; Stolz, T.

    2012-01-01

    Long before Europeans reached the American shores for the first time, and forced their cultures upon the indigenous population, including their languages, a great many other languages were spoken on that continent. These dated back to the original discoverers of America, who probably came from the

  7. Mutual intelligibility between closely related languages in Europe.

    NARCIS (Netherlands)

    Gooskens, Charlotte; van Heuven, Vincent; Golubovic, Jelena; Schüppert, Anja; Swarte, Femke; Voigt, Stefanie

    2018-01-01

    By means of a large-scale web-based investigation, we established the degree of mutual intelligibility of 16 closely related spoken languages within the Germanic, Slavic and Romance language families in Europe. We first present the results of a selection of 1833 listeners representing the mutual

  8. Predictors of Spoken Language Development Following Pediatric Cochlear Implantation

    NARCIS (Netherlands)

    Johan Frijns; prof. Dr. Louis Peeraer; van Wieringen; Ingeborg Dhooge; Vermeulen; Jan Brokx; Tinne Boons; Wouters

    2012-01-01

    Objectives: Although deaf children with cochlear implants (CIs) are able to develop good language skills, the large variability in outcomes remains a significant concern. The first aim of this study was to evaluate language skills in children with CIs to establish benchmarks. The second aim was to

  9. Shared language:Towards more effective communication.

    Science.gov (United States)

    Thomas, Joyce; McDonagh, Deana

    2013-01-01

    The ability to communicate to others and express ourselves is a basic human need. As we develop our understanding of the world, based on our upbringing, education and so on, our perspective and the way we communicate can differ from those around us. Engaging and interacting with others is a critical part of healthy living. It is the responsibility of the individual to ensure that they are understood in the way they intended.Shared language refers to people developing understanding amongst themselves based on language (e.g. spoken, text) to help them communicate more effectively. The key to understanding language is to first notice and be mindful of your language. Developing a shared language is an ongoing process that requires intention and time, which results in better understanding.Shared language is critical to collaboration, and collaboration is critical to business and education. With whom and how many people do you connect? Your 'shared language' makes a difference in the world. So, how do we successfully do this? This paper shares several strategies.Your sphere of influence will carry forward what and how you are communicating. Developing and nurturing a shared language is an essential element to enhance communication and collaboration whether it is simply between partners or across the larger community of business and customers. Constant awareness and education is required to maintain the shared language. We are living in an increasingly smaller global community. Business is built on relationships. If you invest in developing shared language, your relationships and your business will thrive.

  10. I Feel You: The Design and Evaluation of a Domotic Affect-Sensitive Spoken Conversational Agent

    Directory of Open Access Journals (Sweden)

    Juan Manuel Montero

    2013-08-01

    Full Text Available We describe the work on infusion of emotion into a limited-task autonomous spoken conversational agent situated in the domestic environment, using a need-inspired task-independent emotion model (NEMO). In order to demonstrate the generation of affect through the use of the model, we describe the work of integrating it with a natural-language mixed-initiative HiFi-control spoken conversational agent (SCA). NEMO and the host system communicate externally, removing the need for the Dialog Manager to be modified, as is done in most existing dialog systems, in order to be adaptive. The first part of the paper concerns the integration between NEMO and the host agent. The second part summarizes the work on automatic affect prediction, namely, frustration and contentment, from dialog features, a non-conventional source, in an attempt to move towards a more user-centric approach. The final part reports the evaluation results obtained from a user study, in which both versions of the agent (non-adaptive and emotionally-adaptive were compared. The results provide substantial evidence of the benefits of adding emotion in a spoken conversational agent, especially in mitigating users’ frustrations and, ultimately, improving their satisfaction.

  11. "We call it Springbok-German!": language contact in the German communities in South Africa.

    OpenAIRE

    Franke, Katharina

    2017-01-01

    Varieties of German are spoken all over the world, some of which have been maintained for prolonged periods of time. As a result, these transplanted varieties often show traces of ongoing language contact specific to their particular context. This thesis explores one such transplanted German language variety – Springbok-German – as spoken by a small subset of German Lutherans in South Africa. Specifically, this study takes as its focus eight rural German communities acr...

  12. What sign language creation teaches us about language.

    Science.gov (United States)

    Brentari, Diane; Coppola, Marie

    2013-03-01

    How do languages emerge? What are the necessary ingredients and circumstances that permit new languages to form? Various researchers within the disciplines of primatology, anthropology, psychology, and linguistics have offered different answers to this question depending on their perspective. Language acquisition, language evolution, primate communication, and the study of spoken varieties of pidgin and creoles address these issues, but in this article we describe a relatively new and important area that contributes to our understanding of language creation and emergence. Three types of communication systems that use the hands and body to communicate will be the focus of this article: gesture, homesign systems, and sign languages. Our aim is to explain why mapping the path from gesture to homesign to sign language has become an important research topic for understanding language emergence, not only for the field of sign languages, but also for language in general. WIREs Cogn Sci 2013, 4:201-211. doi: 10.1002/wcs.1212 For further resources related to this article, please visit the WIREs website. Copyright © 2012 John Wiley & Sons, Ltd.

  13. Social Class and Language Attitudes in Hong Kong

    Science.gov (United States)

    Lai, Mee Ling

    2010-01-01

    This article examines the relation between social class and language attitudes through a triangulated study that analyses the attitudes of 836 secondary school students from different socioeconomic backgrounds toward the 3 official spoken languages used in postcolonial Hong Kong (HK; i.e., Cantonese, English, and Putonghua). The respondents were…

  14. Syllable Frequency and Spoken Word Recognition: An Inhibitory Effect.

    Science.gov (United States)

    González-Alvarez, Julio; Palomar-García, María-Angeles

    2016-08-01

    Research has shown that syllables play a relevant role in lexical access in Spanish, a shallow language with a transparent syllabic structure. Syllable frequency has been shown to have an inhibitory effect on visual word recognition in Spanish. However, no study has examined the syllable frequency effect on spoken word recognition. The present study tested the effect of the frequency of the first syllable on recognition of spoken Spanish words. A sample of 45 young adults (33 women, 12 men; M = 20.4, SD = 2.8; college students) performed an auditory lexical decision task on 128 Spanish disyllabic words and 128 disyllabic nonwords. Words were selected so that lexical and first syllable frequency were manipulated in a within-subject 2 × 2 design, and six additional independent variables were controlled: token positional frequency of the second syllable, number of phonemes, position of lexical stress, number of phonological neighbors, number of phonological neighbors that have higher frequencies than the word, and acoustical durations measured in milliseconds. Decision latencies and error rates were submitted to linear mixed models analysis. Results showed a typical facilitatory effect of lexical frequency and, importantly, an inhibitory effect of first syllable frequency on reaction times and error rates. © The Author(s) 2016.
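
    The 2 × 2 frequency manipulation described in this record can be illustrated with a small simulation (a hedged sketch with invented numbers; the study itself fit linear mixed models to trial-level latencies). Cell means are enough to show the direction of both effects:

```python
import numpy as np

# Illustrative sketch of the 2x2 design above, with simulated reaction
# times (all numbers invented): lexical frequency is facilitatory
# (high-frequency words -> faster RTs) while first-syllable frequency
# is inhibitory (high-frequency first syllables -> slower RTs).
rng = np.random.default_rng(0)
n = 200  # simulated trials per cell

def simulate(lex_high, syl_high):
    base = 850.0
    base -= 60.0 if lex_high else 0.0   # facilitatory lexical-frequency effect
    base += 35.0 if syl_high else 0.0   # inhibitory first-syllable effect
    return rng.normal(base, 40.0, size=n)

# Mean RT per cell of the 2x2 design
cells = {(lex, syl): simulate(lex, syl).mean()
         for lex in (False, True) for syl in (False, True)}

# Main effects as differences of marginal means
lex_effect = ((cells[(True, False)] + cells[(True, True)])
              - (cells[(False, False)] + cells[(False, True)])) / 2
syl_effect = ((cells[(False, True)] + cells[(True, True)])
              - (cells[(False, False)] + cells[(True, False)])) / 2
print(f"lexical frequency effect: {lex_effect:+.1f} ms")   # negative = faster
print(f"first-syllable effect:    {syl_effect:+.1f} ms")   # positive = slower
```

    A full analysis would fit crossed random effects for subjects and items, which plain cell means cannot capture; the sketch only shows the sign of the two reported effects.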

  15. Task-Oriented Spoken Dialog System for Second-Language Learning

    Science.gov (United States)

    Kwon, Oh-Woog; Kim, Young-Kil; Lee, Yunkeun

    2016-01-01

    This paper introduces a Dialog-Based Computer Assisted second-Language Learning (DB-CALL) system using task-oriented dialogue processing technology. The system promotes dialogue with a second-language learner for a specific task, such as purchasing tour tickets, ordering food, passing through immigration, etc. The dialog system plays a role of a…

  16. How Facebook Can Revitalise Local Languages: Lessons from Bali

    Science.gov (United States)

    Stern, Alissa Joy

    2017-01-01

    For a language to survive, it must be spoken and passed down to the next generation. But how can we engage teenagers--so crucial for language transmission--to use and value their local tongue when they are bombarded by pressures from outside and from within their society to only speak national and international languages? This paper analyses the…

  17. Universality versus language-specificity in listening to running speech

    NARCIS (Netherlands)

    Cutler, A.; Demuth, K.; McQueen, J.M.

    2002-01-01

    Recognizing spoken language involves automatic activation of multiple candidate words. The process of selection between candidates is made more efficient by inhibition of embedded words (like egg in beg) which leave a portion of the input stranded (here, b). Results from European languages suggest

  18. The relation between working memory and language comprehension in signers and speakers.

    Science.gov (United States)

    Emmorey, Karen; Giezen, Marcel R; Petrich, Jennifer A F; Spurgeon, Erin; O'Grady Farnady, Lucinda

    2017-06-01

    This study investigated the relation between linguistic and spatial working memory (WM) resources and language comprehension for signed compared to spoken language. Sign languages are both linguistic and visual-spatial, and therefore provide a unique window on modality-specific versus modality-independent contributions of WM resources to language processing. Deaf users of American Sign Language (ASL), hearing monolingual English speakers, and hearing ASL-English bilinguals completed several spatial and linguistic serial recall tasks. Additionally, their comprehension of spatial and non-spatial information in ASL and spoken English narratives was assessed. Results from the linguistic serial recall tasks revealed that the often reported advantage for speakers on linguistic short-term memory tasks does not extend to complex WM tasks with a serial recall component. For English, linguistic WM predicted retention of non-spatial information, and both linguistic and spatial WM predicted retention of spatial information. For ASL, spatial WM predicted retention of spatial (but not non-spatial) information, and linguistic WM did not predict retention of either spatial or non-spatial information. Overall, our findings argue against strong assumptions of independent domain-specific subsystems for the storage and processing of linguistic and spatial information and furthermore suggest a less important role for serial encoding in signed than spoken language comprehension. Copyright © 2017 Elsevier B.V. All rights reserved.

  19. Sensitivity to audio-visual synchrony and its relation to language abilities in children with and without ASD.

    Science.gov (United States)

    Righi, Giulia; Tenenbaum, Elena J; McCormick, Carolyn; Blossom, Megan; Amso, Dima; Sheinkopf, Stephen J

    2018-04-01

    Autism Spectrum Disorder (ASD) is often accompanied by deficits in speech and language processing. Speech processing relies heavily on the integration of auditory and visual information, and it has been suggested that the ability to detect correspondence between auditory and visual signals helps to lay the foundation for successful language development. The goal of the present study was to examine whether young children with ASD show reduced sensitivity to temporal asynchronies in a speech processing task when compared to typically developing controls, and to examine how this sensitivity might relate to language proficiency. Using automated eye tracking methods, we found that children with ASD failed to demonstrate sensitivity to asynchronies of 0.3s, 0.6s, or 1.0s between a video of a woman speaking and the corresponding audio track. In contrast, typically developing children, who were language-matched to the ASD group, were sensitive to both 0.6s and 1.0s asynchronies. We also demonstrated that individual differences in sensitivity to audiovisual asynchronies and individual differences in orientation to relevant facial features were both correlated with scores on a standardized measure of language abilities. Results are discussed in the context of attention to visual language and audio-visual processing as potential precursors to language impairment in ASD. Autism Res 2018, 11: 645-653. © 2018 International Society for Autism Research, Wiley Periodicals, Inc. Speech processing relies heavily on the integration of auditory and visual information, and it has been suggested that the ability to detect correspondence between auditory and visual signals helps to lay the foundation for successful language development. The goal of the present study was to explore whether children with ASD process audio-visual synchrony in ways comparable to their typically developing peers, and the relationship between preference for synchrony and language ability. Results showed that

  20. Spoken language and everyday functioning in 5-year-old children using hearing aids or cochlear implants.

    Science.gov (United States)

    Cupples, Linda; Ching, Teresa Yc; Button, Laura; Seeto, Mark; Zhang, Vicky; Whitfield, Jessica; Gunnourie, Miriam; Martin, Louise; Marnane, Vivienne

    2017-09-12

    This study investigated the factors influencing 5-year language, speech and everyday functioning of children with congenital hearing loss. Standardised tests including PLS-4, PPVT-4 and DEAP were directly administered to children. Parent reports on language (CDI) and everyday functioning (PEACH) were collected. Regression analyses were conducted to examine the influence of a range of demographic variables on outcomes. Participants were 339 children enrolled in the Longitudinal Outcomes of Children with Hearing Impairment (LOCHI) study. Children's average receptive and expressive language scores were approximately 1 SD below the mean of typically developing children, and scores on speech production and everyday functioning were more than 1 SD below. Regression models accounted for 23-70% of variance in scores across different tests. Earlier CI switch-on and higher non-verbal ability were associated with better outcomes in most domains. Earlier HA fitting and use of oral communication were associated with better outcomes on directly administered language assessments. Severity of hearing loss and maternal education influenced outcomes of children with HAs. The presence of additional disabilities affected outcomes of children with CIs. The findings provide strong evidence for the benefits of early HA fitting and early CI for improving children's outcomes.

  1. Making a Difference: Language Teaching for Intercultural and International Dialogue

    Science.gov (United States)

    Byram, Michael; Wagner, Manuela

    2018-01-01

    Language teaching has long been associated with teaching in a country or countries where a target language is spoken, but this approach is inadequate. In the contemporary world, language teaching has a responsibility to prepare learners for interaction with people of other cultural backgrounds, teaching them skills and attitudes as well as…

  2. Regional Sign Language Varieties in Contact: Investigating Patterns of Accommodation

    Science.gov (United States)

    Stamp, Rose; Schembri, Adam; Evans, Bronwen G.; Cormier, Kearsy

    2016-01-01

    Short-term linguistic accommodation has been observed in a number of spoken language studies. The first of its kind in sign language research, this study aims to investigate the effects of regional varieties in contact and lexical accommodation in British Sign Language (BSL). Twenty-five participants were recruited from Belfast, Glasgow,…

  3. Dilemmatic Aspects of Language Policies in a Trilingual Preschool Group

    Science.gov (United States)

    Puskás, Tünde; Björk-Willén, Polly

    2017-01-01

    This article explores dilemmatic aspects of language policies in a preschool group in which three languages (Swedish, Romani and Arabic) are spoken on an everyday basis. The article highlights the interplay between policy decisions on the societal level, the teachers' interpretations of these policies, as well as language practices on the micro…

  4. Language Planning for the 21st Century: Revisiting Bilingual Language Policy for Deaf Children

    NARCIS (Netherlands)

    Knoors, H.E.T.; Marschark, M.

    2012-01-01

    For over 25 years in some countries and more recently in others, bilingual education involving sign language and the written/spoken vernacular has been considered an essential educational intervention for deaf children. With the recent growth in universal newborn hearing screening and technological

  5. Lexical access in sign language: a computational model.

    Science.gov (United States)

    Caselli, Naomi K; Cohen-Goldberg, Ariel M

    2014-01-01

    Psycholinguistic theories have predominantly been built upon data from spoken language, which leaves open the question: How many of the conclusions truly reflect language-general principles as opposed to modality-specific ones? We take a step toward answering this question in the domain of lexical access in recognition by asking whether a single cognitive architecture might explain diverse behavioral patterns in signed and spoken language. Chen and Mirman (2012) presented a computational model of word processing that unified opposite effects of neighborhood density in speech production, perception, and written word recognition. Neighborhood density effects in sign language also vary depending on whether the neighbors share the same handshape or location. We present a spreading activation architecture that borrows the principles proposed by Chen and Mirman (2012), and show that if this architecture is elaborated to incorporate relatively minor facts about either (1) the time course of sign perception or (2) the frequency of sub-lexical units in sign languages, it produces data that match the experimental findings from sign languages. This work serves as a proof of concept that a single cognitive architecture could underlie both sign and word recognition.

  6. Lexical access in sign language: A computational model

    Directory of Open Access Journals (Sweden)

    Naomi Kenney Caselli

    2014-05-01

    Full Text Available Psycholinguistic theories have predominantly been built upon data from spoken language, which leaves open the question: How many of the conclusions truly reflect language-general principles as opposed to modality-specific ones? We take a step toward answering this question in the domain of lexical access in recognition by asking whether a single cognitive architecture might explain diverse behavioral patterns in signed and spoken language. Chen and Mirman (2012) presented a computational model of word processing that unified opposite effects of neighborhood density in speech production, perception, and written word recognition. Neighborhood density effects in sign language also vary depending on whether the neighbors share the same handshape or location. We present a spreading activation architecture that borrows the principles proposed by Chen and Mirman (2012), and show that if this architecture is elaborated to incorporate relatively minor facts about either (1) the time course of sign perception or (2) the frequency of sub-lexical units in sign languages, it produces data that match the experimental findings from sign languages. This work serves as a proof of concept that a single cognitive architecture could underlie both sign and word recognition.
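
    The spreading-activation mechanism these records build on can be sketched in toy form (a minimal illustration with invented signs, sub-lexical units and weights; this is not the Chen and Mirman architecture, only the basic idea of activation flowing from shared sub-lexical units to lexical nodes):

```python
import numpy as np

# Toy spreading-activation sketch (illustrative only): a few "signs"
# share sub-lexical units (handshape, location), and activation spreads
# from perceived units to lexical nodes over discrete time steps.
# All node names and weights are invented for illustration.
units = ["handshape_A", "handshape_B", "location_chin", "location_chest"]
signs = {"MOTHER": {"handshape_A", "location_chin"},
         "FATHER": {"handshape_A", "location_chest"},
         "RED":    {"handshape_B", "location_chin"}}

# Connection matrix: entry is 1 if the sign contains the sub-lexical unit
W = np.array([[1.0 if u in feats else 0.0 for u in units]
              for feats in signs.values()])

def recognize(perceived, steps=10, decay=0.5):
    """Spread activation from perceived sub-lexical units to lexical nodes."""
    unit_act = np.array([1.0 if u in perceived else 0.0 for u in units])
    lex_act = np.zeros(len(signs))
    for _ in range(steps):
        lex_act = decay * lex_act + W @ unit_act  # feed-forward spread + decay
    return dict(zip(signs, lex_act))

act = recognize({"handshape_A", "location_chin"})
print(max(act, key=act.get))  # the sign sharing both units wins: MOTHER
```

    Neighborhood effects fall out of such architectures when competing lexical nodes inhibit one another; the sketch omits that lateral inhibition for brevity.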

  7. Does a child's language ability affect the correspondence between parent and teacher ratings of ADHD symptoms?

    Science.gov (United States)

    Gooch, Debbie; Maydew, Harriet; Sears, Claire; Norbury, Courtenay Frazier

    2017-04-05

    Rating scales are often used to identify children with potential Attention-Deficit/Hyperactivity Disorder (ADHD), yet there are frequently discrepancies between informants which may be moderated by child characteristics. The current study asked whether correspondence between parent and teacher ratings on the Strengths and Weaknesses of ADHD symptoms and Normal behaviour scale (SWAN) varied systematically with child language ability. Parent and teacher SWAN questionnaires were returned for 200 children (aged 61-81 months); 106 had low language ability (LL) and 94 had typically developing language (TL). After exploring informant correspondence (using Pearson correlation) and the discrepancy between raters, we report intra-class correlation coefficients, to assess inter-rater reliability, and Cohen's kappa, to assess agreement regarding possible ADHD caseness. Correlations between informant ratings on the SWAN were moderate. Children with LL were rated as having increased inattention and hyperactivity relative to children with TL; teachers, however, rated children with LL as having more inattention than parents. Inter-rater reliability of the SWAN was good and there were no systematic differences between the LL and TL groups. Case agreement between parent and teachers was fair; this varied by language group with poorer case agreement for children with LL. Children's language abilities affect the discrepancy between informant ratings of ADHD symptomatology and the agreement between parents and teachers regarding potential ADHD caseness. The assessment of children's core language ability would be a beneficial addition to the ADHD diagnostic process.

  8. Towards Adaptive Spoken Dialog Systems

    CERN Document Server

    Schmitt, Alexander

    2013-01-01

    In Towards Adaptive Spoken Dialog Systems, authors Alexander Schmitt and Wolfgang Minker investigate statistical approaches that allow for recognition of negative dialog patterns in Spoken Dialog Systems (SDS). The presented stochastic methods allow flexible, portable and accurate use. Beginning with the foundations of machine learning and pattern recognition, this monograph examines how frequently users show negative emotions in spoken dialog systems and develops novel approaches to speech-based emotion recognition, using a hybrid approach to model emotions. The authors make use of statistical methods based on acoustic, linguistic and contextual features to examine the relationship between the interaction flow and the occurrence of emotions, using non-acted recordings of several thousand real users from commercial and non-commercial SDS. Additionally, the authors present novel statistical methods that spot problems within a dialog based on interaction patterns. The approaches enable future SDS to offer m...

  9. Mutual intelligibility between closely related languages in Europe

    NARCIS (Netherlands)

    Gooskens, C.; Heuven, van V.J.J.P.; Golubović, J.; Schüppert, A.; Swarte, F.; Voigt, S.

    2017-01-01

    By means of a large-scale web-based investigation, we established the degree of mutual intelligibility of 16 closely related spoken languages within the Germanic, Slavic and Romance language families in Europe. We first present the results of a selection of 1833 listeners representing the mutual

  10. Individual differences in language ability are related to variation in word recognition, not speech perception: Evidence from eye-movements

    Science.gov (United States)

    McMurray, Bob; Munson, Cheyenne; Tomblin, J. Bruce

    2013-01-01

    Purpose: This study examined speech perception deficits associated with individual differences in language ability, contrasting auditory, phonological or lexical accounts by asking if lexical competition is differentially sensitive to fine-grained acoustic variation. Methods: 74 adolescents with a range of language abilities (including 35 impaired) participated in an experiment based on McMurray, Tanenhaus and Aslin (2002). Participants heard tokens from six 9-step Voice Onset Time (VOT) continua spanning two words (beach/peach, beak/peak, etc), while viewing a screen containing pictures of those words and two unrelated objects. Participants selected the referent while eye-movements to each picture were monitored as a measure of lexical activation. Fixations were examined as a function of both VOT and language ability. Results: Eye-movements were sensitive to within-category VOT differences: as VOT approached the boundary, listeners made more fixations to the competing word. This did not interact with language ability, suggesting that language impairment is not associated with differential auditory sensitivity or phonetic categorization. Listeners with poorer language skills showed heightened competitor fixations overall, suggesting a deficit in lexical processes. Conclusions: Language impairment may be better characterized by a deficit in lexical competition (inability to suppress competing words), rather than differences in phonological categorization or auditory abilities. PMID:24687026

  11. Individual differences in language ability are related to variation in word recognition, not speech perception: evidence from eye movements.

    Science.gov (United States)

    McMurray, Bob; Munson, Cheyenne; Tomblin, J Bruce

    2014-08-01

    The authors examined speech perception deficits associated with individual differences in language ability, contrasting auditory, phonological, or lexical accounts by asking whether lexical competition is differentially sensitive to fine-grained acoustic variation. Adolescents with a range of language abilities (N = 74, including 35 impaired) participated in an experiment based on McMurray, Tanenhaus, and Aslin (2002). Participants heard tokens from six 9-step voice onset time (VOT) continua spanning 2 words (beach/peach, beak/peak, etc.) while viewing a screen containing pictures of those words and 2 unrelated objects. Participants selected the referent while eye movements to each picture were monitored as a measure of lexical activation. Fixations were examined as a function of both VOT and language ability. Eye movements were sensitive to within-category VOT differences: As VOT approached the boundary, listeners made more fixations to the competing word. This did not interact with language ability, suggesting that language impairment is not associated with differential auditory sensitivity or phonetic categorization. Listeners with poorer language skills showed heightened competitor fixations overall, suggesting a deficit in lexical processes. Language impairment may be better characterized by a deficit in lexical competition (inability to suppress competing words), rather than differences in phonological categorization or auditory abilities.

  12. Spoken Language Activation Alters Subsequent Sign Language Activation in L2 Learners of American Sign Language

    Science.gov (United States)

    Williams, Joshua T.; Newman, Sharlene D.

    2017-01-01

    A large body of literature has characterized unimodal monolingual and bilingual lexicons and how neighborhood density affects lexical access; however there have been relatively fewer studies that generalize these findings to bimodal (M2) second language (L2) learners of sign languages. The goal of the current study was to investigate parallel…

  13. The interface between spoken and written language: developmental disorders.

    Science.gov (United States)

    Hulme, Charles; Snowling, Margaret J

    2014-01-01

    We review current knowledge about reading development and the origins of difficulties in learning to read. We distinguish between the processes involved in learning to decode print, and the processes involved in reading for meaning (reading comprehension). At a cognitive level, difficulties in learning to read appear to be predominantly caused by deficits in underlying oral language skills. The development of decoding skills appears to depend critically upon phonological language skills, and variations in phoneme awareness, letter-sound knowledge and rapid automatized naming each appear to be causally related to problems in learning to read. Reading comprehension difficulties in contrast appear to be critically dependent on a range of oral language comprehension skills (including vocabulary knowledge and grammatical, morphological and pragmatic skills).

  14. Language Shift or Increased Bilingualism in South Africa: Evidence from Census Data

    Science.gov (United States)

    Posel, Dorrit; Zeller, Jochen

    2016-01-01

    In the post-apartheid era, South Africa has adopted a language policy that gives official status to 11 languages (English, Afrikaans, and nine Bantu languages). However, English has remained the dominant language of business, public office, and education, and some research suggests that English is increasingly being spoken in domestic settings.…

  15. The representation of language within language : A syntactico-pragmatic typology of direct speech

    NARCIS (Netherlands)

    de Vries, M.

    The recursive phenomenon of direct speech (quotation) comes in many different forms, and it is arguably an important and widely used ingredient of both spoken and written language. This article builds on (and provides indirect support for) the idea that quotations are to be defined pragmatically as

  16. Genetic and Environmental Links between Natural Language Use and Cognitive Ability in Toddlers

    Science.gov (United States)

    Canfield, Caitlin F.; Edelson, Lisa R.; Saudino, Kimberly J.

    2017-01-01

    Although the phenotypic correlation between language and nonverbal cognitive ability is well-documented, studies examining the etiology of the covariance between these abilities are scant, particularly in very young children. The goal of this study was to address this gap in the literature by examining the genetic and environmental links between…

  17. Development of brain networks involved in spoken word processing of Mandarin Chinese.

    Science.gov (United States)

    Cao, Fan; Khalid, Kainat; Lee, Rebecca; Brennan, Christine; Yang, Yanhui; Li, Kuncheng; Bolger, Donald J; Booth, James R

    2011-08-01

    Developmental differences in phonological and orthographic processing of Chinese spoken words were examined in 9-year-olds, 11-year-olds and adults using functional magnetic resonance imaging (fMRI). Rhyming and spelling judgments were made to two-character words presented sequentially in the auditory modality. Developmental comparisons between adults and both groups of children combined showed that age-related changes in activation in visuo-orthographic regions depended on the task. There were developmental increases in the left inferior temporal gyrus and the right inferior occipital gyrus in the spelling task, suggesting more extensive visuo-orthographic processing in a task that required access to these representations. Conversely, there were developmental decreases in activation in the left fusiform gyrus and left middle occipital gyrus in the rhyming task, suggesting that the development of reading is marked by reduced involvement of orthography in a spoken language task that does not require access to these orthographic representations. Developmental decreases may arise from the existence of extensive homophony (auditory words that have multiple spellings) in Chinese. In addition, we found that 11-year-olds and adults showed similar activation in the left superior temporal gyrus across tasks, with both groups showing greater activation than 9-year-olds. This pattern suggests early development of perceptual representations of phonology. In contrast, 11-year-olds and 9-year-olds showed similar activation in the left inferior frontal gyrus across tasks, with both groups showing weaker activation than adults. This pattern suggests late development of controlled retrieval and selection of lexical representations. Altogether, this study suggests differential effects of character acquisition on development of components of the language network in Chinese as compared to previous reports on alphabetic languages. Published by Elsevier Inc.

  18. Language Education Policies and Inequality in Africa: Cross-National Empirical Evidence

    Science.gov (United States)

    Coyne, Gary

    2015-01-01

    This article examines the relationship between inequality and education through the lens of colonial language education policies in African primary and secondary school curricula. The languages of former colonizers almost always occupy important places in society, yet they are not widely spoken as first languages, meaning that most people depend…

  19. Is it differences in language skills and working memory that account for girls being better at writing than boys?

    Directory of Open Access Journals (Sweden)

    Lorna Bourke

    2012-03-01

    Full Text Available Girls are more likely to outperform boys in the development of writing skills. This study considered gender differences in language and working memory skills as a possible explanation for the differential rates of progress. Sixty-seven children (31 males and 36 females; M age 57.30 months) participated. Qualitative differences in writing progress were examined using a writing assessment scale from the Early Years Foundation Stage Profile (EYFSP). Quantitative measures of writing: number of words, diversity of words, number of phrases/sentences and grammatical complexity of the phrases/sentences were also analysed. The children were also assessed on tasks measuring their language production and comprehension skills and the visuo-spatial, phonological, and central executive components of working memory. The results indicated that the boys were more likely to perform significantly less well than the girls on all measures of writing except the grammatical complexity of sentences. Initially, no significant differences were found on any of the measures of language ability. Further, no significant differences were found between the genders on the capacity and efficiency of their working memory functioning. However, hierarchical regressions revealed that the individual differences in gender and language ability, more specifically spoken language comprehension, predicted performance on the EYFSP writing scale. This finding accords well with the literature that suggests that language skills can mediate the variance in boys’ and girls’ writing ability.

  20. False-Belief Understanding and Language Ability Mediate the Relationship between Emotion Comprehension and Prosocial Orientation in Preschoolers.

    Science.gov (United States)

    Ornaghi, Veronica; Pepe, Alessandro; Grazzani, Ilaria

    2016-01-01

    Emotion comprehension (EC) is known to be a key correlate and predictor of prosociality from early childhood. In the present study, we examined this relationship within the broad theoretical construct of social understanding which includes a number of socio-emotional skills, as well as cognitive and linguistic abilities. Theory of mind, especially false-belief understanding, has been found to be positively correlated with both EC and prosocial orientation. Similarly, language ability is known to play a key role in children's socio-emotional development. The combined contribution of false-belief understanding and language to explaining the relationship between EC and prosociality has yet to be investigated. Thus, in the current study, we conducted an in-depth exploration of how preschoolers' false-belief understanding and language ability each contribute to modeling the relationship between children's comprehension of emotion and their disposition to act prosocially toward others, after controlling for age and gender. Participants were 101 4- to 6-year-old children (54% boys), who were administered measures of language ability, false-belief understanding, EC and prosocial orientation. Multiple mediation analysis of the data suggested that false-belief understanding and language ability jointly and fully mediated the effect of preschoolers' EC on their prosocial orientation. Analysis of covariates revealed that gender exerted no statistically significant effect, while age had a trivial positive effect. Theoretical and practical implications of the findings are discussed.

  1. Business Spoken English Learning Strategies for Chinese Enterprise Staff

    Institute of Scientific and Technical Information of China (English)

    Han Li

    2013-01-01

This study addresses the issue of promoting effective Business Spoken English among enterprise staff in China. It aims to assess spoken English learning methods and to identify the difficulties staff face in oral English expression in business contexts. It also provides strategies for enhancing enterprise staff’s level of Business Spoken English.

  2. The Plausibility of Tonal Evolution in the Malay Dialect Spoken in Thailand: Evidence from an Acoustic Study

    Directory of Open Access Journals (Sweden)

    Phanintra Teeranon

    2007-12-01

Full Text Available The F0 values of vowels following voiceless consonants are higher than those of vowels following voiced consonants; high vowels have a higher F0 than low vowels. It has also been found that when high vowels follow voiced consonants, the F0 values decrease. In contrast, low vowels following voiceless consonants show increasing F0 values. In other words, the voicing of initial consonants has been found to counterbalance the intrinsic F0 values of high and low vowels (House and Fairbanks 1953, Lehiste and Peterson 1961, Lehiste 1970, Laver 1994, Teeranon 2006). To test whether these three findings are applicable to a disyllabic language, the F0 values of high and low vowels following voiceless and voiced consonants were studied in a Malay dialect of the Austronesian language family spoken in Pathumthani Province, Thailand. The data was collected from three male informants, aged 30-35. The Praat program was used for acoustic analysis. The findings revealed the influence of the voicing of initial consonants on the F0 of vowels to be greater than the influence of vowel height. Evidence from this acoustic study shows the plausibility of the Malay dialect spoken in Pathumthani becoming a tonal language through the influence of initial consonants rather than through the high-low vowel dimension.
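The comparison at the heart of the study, whether the onset-voicing effect on F0 outweighs the intrinsic vowel-height effect, can be illustrated with a small calculation. The F0 values below are invented for illustration only (they are not the study's measurements); the point is merely how the two effects are computed and compared:

```python
from statistics import mean

# Hypothetical F0 measurements (Hz), invented for illustration.
# Each key is (onset voicing, vowel height).
f0 = {
    ("voiceless", "high"): [182.0, 179.5, 185.2],
    ("voiceless", "low"):  [176.8, 174.1, 178.3],
    ("voiced", "high"):    [168.4, 165.9, 170.2],
    ("voiced", "low"):     [163.7, 161.2, 166.0],
}

def mean_f0(voicing=None, height=None):
    """Mean F0 over all tokens matching the given factor level(s)."""
    vals = [v for (voi, hgt), xs in f0.items() for v in xs
            if (voicing is None or voi == voicing)
            and (height is None or hgt == height)]
    return mean(vals)

# Effect of onset voicing, collapsed over vowel height
voicing_effect = mean_f0(voicing="voiceless") - mean_f0(voicing="voiced")
# Intrinsic effect of vowel height, collapsed over onset voicing
height_effect = mean_f0(height="high") - mean_f0(height="low")

print(f"onset-voicing effect: {voicing_effect:.1f} Hz")
print(f"vowel-height effect:  {height_effect:.1f} Hz")
```

With these toy numbers the voicing effect exceeds the height effect, mirroring the relationship the paper reports for the Pathumthani Malay dialect.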

  3. A Shared Platform for Studying Second Language Acquisition

    Science.gov (United States)

    MacWhinney, Brian

    2017-01-01

The study of second language acquisition (SLA) can benefit from the same process of data sharing that has proven effective in areas such as first language acquisition and aphasiology. Researchers can work together to construct a shared platform that combines data from spoken and written corpora, online tutors, and Web-based experimentation. Many of…

  4. Corpus-Based Authenticity Analysis of Language Teaching Course Books

    Directory of Open Access Journals (Sweden)

    Emrah PEKSOY

    2017-12-01

Full Text Available In this study, the resemblance of the language learning course books used in Turkey to authentic language spoken by native speakers is explored by using a corpus-based approach. For this, the 10-million-word spoken part of the British National Corpus was selected as the reference corpus. After that, all language learning course books used in high schools in Turkey were scanned and transferred to SketchEngine, an online corpus query tool. Lastly, certain grammar points were extracted, first from the British National Corpus and then from the course books, and their similarities and differences were compared. At the end of the study, it was found that the language learning course books bear little similarity to authentic language in terms of certain grammatical items and the frequency of their collocations. In this way, the points to be revised and changed were identified. In addition, this study emphasizes the role of the corpus approach as a material development and analysis tool, and tests the functionality of course books for writers and for the Ministry of National Education.
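The core of such a comparison is contrasting normalized frequencies of an item in the course-book corpus against a spoken reference corpus. A minimal sketch follows, using invented toy token lists rather than the actual BNC or course-book data (the real study queried SketchEngine over full corpora), with a log-ratio as the over/under-representation measure:

```python
import math
from collections import Counter

# Toy token lists standing in for the BNC spoken section and a
# course-book corpus; purely illustrative.
reference = "well i mean you know it was sort of good".split()
coursebook = "hello how are you i am fine thank you very much".split()

ref_counts, cb_counts = Counter(reference), Counter(coursebook)
ref_total, cb_total = len(reference), len(coursebook)

def log_ratio(item):
    """Log2 ratio of normalized frequencies (course book vs reference).
    Add-0.5 smoothing avoids division by zero for unseen items."""
    ref_f = (ref_counts[item] + 0.5) / ref_total
    cb_f = (cb_counts[item] + 0.5) / cb_total
    return math.log2(cb_f / ref_f)

# Positive values: over-represented in the course book;
# negative values: under-represented relative to authentic speech.
for w in ["you", "well", "thank"]:
    print(w, round(log_ratio(w), 2))
```

On full corpora one would normalize per million words and apply a significance filter (e.g. log-likelihood), but the direction of the comparison is the same.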

  5. Japanese Non Resident Language Refresher Course; 210 Hour Course.

    Science.gov (United States)

    Defense Language Inst., Washington, DC.

    This military intelligence unit refresher course in Japanese is designed for 210 hours of audiolingual instruction. The materials, prepared by the Defense Language Institute, are intended for students with considerable intensive training in spoken and written Japanese who are preparing for a military language assignment. [Not available in hard…

  6. Language skills of children during the first 12 months after stuttering onset.

    Science.gov (United States)

    Watts, Amy; Eadie, Patricia; Block, Susan; Mensah, Fiona; Reilly, Sheena

    2017-03-01

To describe the language development of a sample of young children who stutter during the first 12 months after stuttering onset. Language production was analysed in a sample of 66 children who stuttered (aged 2-4 years). The sample was identified from a pre-existing prospective, community-based longitudinal cohort. Data were collected at three time points within the first year after stuttering onset. Stuttering severity was measured, and global indicators of expressive language proficiency (length of utterances and grammatical complexity) were derived from the samples and summarised. The language production abilities of the children who stutter were contrasted with normative data. The majority of children's stuttering was rated as mild in severity, with more than 83% of participants demonstrating very mild or mild stuttering at each of the time points studied. The participants demonstrated developmentally appropriate spoken language skills comparable with available normative data. In the first year following the report of stuttering onset, the language skills of the children who were stuttering progressed in a manner consistent with developmental expectations. Copyright © 2016 Elsevier Inc. All rights reserved.

  7. Altering Practices to Include Bimodal-bilingual (ASL-Spoken English) Programming at a Small School for the Deaf in Canada.

    Science.gov (United States)

    Priestley, Karen; Enns, Charlotte; Arbuckle, Shauna

    2018-01-01

Bimodal-bilingual programs are emerging as one way to meet broader needs and provide expanded language, educational and social-emotional opportunities for students who are deaf and hard of hearing (Marschark, M., Tang, G. & Knoors, H. (Eds.). (2014). Bilingualism and bilingual Deaf education. New York, NY: Oxford University Press; Paludneviciene, R. & Harris, R. (2011). Impact of cochlear implants on the deaf community. In Paludneviciene, R. & Leigh, I. (Eds.), Cochlear implants evolving perspectives (pp. 3-19). Washington, DC: Gallaudet University Press). However, there is limited research on students' spoken language development, signed language growth, academic outcomes or the social-emotional factors associated with these programs (Marschark, M., Tang, G. & Knoors, H. (Eds.). (2014). Bilingualism and bilingual Deaf education. New York, NY: Oxford University Press; Nussbaum, D. & Scott, S. (2011). The cochlear implant education center: Perspectives on effective educational practices. In Paludneviciene, R. & Leigh, I. (Eds.), Cochlear implants evolving perspectives (pp. 175-205). Washington, DC: Gallaudet University Press; Spencer, P. & Marschark, M. (Eds.) (2010). Evidence-based practice in educating deaf and hard-of-hearing students. New York, NY: Oxford University Press). The purpose of this case study was to look at formal and informal student outcomes as well as staff and parent perceptions during the first 3 years of implementing a bimodal-bilingual (ASL and spoken English) program within an ASL milieu at a small school for the deaf. Speech and language assessment results for five students were analyzed over a 3-year period and indicated that the students made significant positive gains in all areas, although results were variable. Staff and parent…

  8. The impact of generic language about ability on children's achievement motivation.

    Science.gov (United States)

    Cimpian, Andrei

    2010-09-01

    Nuances in how adults talk about ability may have important consequences for children's sustained involvement and success in an activity. In this study, I tested the hypothesis that children would be less motivated while performing a novel activity if they were told that boys or girls in general are good at this activity (generic language) than if they were told that a particular boy or girl is good at it (non-generic language). Generic language may be detrimental because it expresses normative societal expectations regarding performance. If these expectations are negative, they may cause children to worry about confirming them; if positive, they may cause worries about failing to meet them. Moreover, generic statements may be threatening because they imply that performance is the result of stable traits rather than effort. Ninety-seven 4- to 7-year-olds were asked to play a game in which they succeeded at first but then made a few mistakes. Since young children remain optimistic in achievement situations until the possibility of failure is made clear, I hypothesized that 4- and 5-year-olds would not be affected by the implications of generic language until after they made mistakes; 6- and 7-year-olds, however, may be susceptible earlier. As expected, the older children who heard that boys or girls are good at this game displayed lower motivation (e.g., more negative emotions, lower perceived competence) from the start, while they were still succeeding and receiving praise. Four- and 5-year-olds who heard these generic statements had a similar reaction, but only after they made mistakes. These findings demonstrate that exposure to generic language about ability can be an obstacle to children's motivation and, potentially, their success.

  9. Improving English Language Ability of Children Aged 4-5 Years Old by Using Creative Dance

    Directory of Open Access Journals (Sweden)

    Sabila Nur Masturah

    2018-03-01

Full Text Available The aim of this research is to determine how to improve the English language ability of children aged 4-5 years old by using creative dance. The subjects of this research were seven children in group A at Bilingual Kindergarten Rumah Pelangi Pondok Bambu, East Jakarta. The research was held during April-June 2016. The method used is classroom action research as proposed by Kemmis and Taggart, conducted in two cycles, each consisting of planning, acting, observing, and reflecting. The children’s English language ability was initially low. The percentage of success agreed between the researcher and the collaborator was 71%. The result of the pre-research data analysis was 42.1%. After the action was given, the percentage increased to 61.87%. As the data from the first cycle had not reached the target, the researcher conducted a second cycle, which yielded 80.41%. Based on the result of the second cycle, the hypothesis is proved. Qualitatively, it was also observed that creative movement could improve the children’s English language ability.

  10. Bilingualism alters brain functional connectivity between "control" regions and "language" regions: Evidence from bimodal bilinguals.

    Science.gov (United States)

    Li, Le; Abutalebi, Jubin; Zou, Lijuan; Yan, Xin; Liu, Lanfang; Feng, Xiaoxia; Wang, Ruiming; Guo, Taomei; Ding, Guosheng

    2015-05-01

    Previous neuroimaging studies have revealed that bilingualism induces both structural and functional neuroplasticity in the dorsal anterior cingulate cortex (dACC) and the left caudate nucleus (LCN), both of which are associated with cognitive control. Since these "control" regions should work together with other language regions during language processing, we hypothesized that bilingualism may also alter the functional interaction between the dACC/LCN and language regions. Here we tested this hypothesis by exploring the functional connectivity (FC) in bimodal bilinguals and monolinguals using functional MRI when they either performed a picture naming task with spoken language or were in resting state. We found that for bimodal bilinguals who use spoken and sign languages, the FC of the dACC with regions involved in spoken language (e.g. the left superior temporal gyrus) was stronger in performing the task, but weaker in the resting state as compared to monolinguals. For the LCN, its intrinsic FC with sign language regions including the left inferior temporo-occipital part and right inferior and superior parietal lobules was increased in the bilinguals. These results demonstrate that bilingual experience may alter the brain functional interaction between "control" regions and "language" regions. For different control regions, the FC alters in different ways. The findings also deepen our understanding of the functional roles of the dACC and LCN in language processing. Copyright © 2015 Elsevier Ltd. All rights reserved.

  11. Language studies in higher education in the Netherlands

    NARCIS (Netherlands)

    Zwarts, Frans; Silkens, B.

    1995-01-01

Dutch is one of the two official languages of the Netherlands. It is the mother tongue of 15 million Dutchmen and 5.5 million Belgians. The second official language is Frisian, which is spoken by the 500,000 inhabitants of Friesland - a province of the Netherlands, 1,248 square miles in area, in the

  12. Precursors to language in preterm infants: speech perception abilities in the first year of life.

    Science.gov (United States)

    Bosch, Laura

    2011-01-01

Language development in infants born very preterm is often compromised. Poor language skills have been described in preschoolers, and differences between preterms and full terms, relative to early vocabulary size and morphosyntactical complexity, have also been identified. However, very few data are available concerning early speech perception abilities and their predictive value for later language outcomes. An overview of the results obtained in a prospective study exploring the link between early speech perception abilities and lexical development in the second year of life in a population of very preterm infants (≤32 gestation weeks) is presented. Specifically, behavioral measures relative to (a) native-language recognition and discrimination from a rhythmically distant and a rhythmically close nonfamiliar language, and (b) monosyllabic word-form segmentation, were obtained and compared to data from full-term infants. Expressive vocabulary at two test ages (12 and 18 months, corrected age for gestation) was measured using the MacArthur Communicative Development Inventory. Behavioral results indicated that differences between preterm and control groups were present, but only evident when task demands were high in terms of language processing, selective attention to relevant information and memory load. When responses could be based on acquired knowledge from accumulated linguistic experience, between-group differences were no longer observed. Critically, while preterm infants responded satisfactorily to the native-language recognition and discrimination tasks, they clearly differed from full-term infants in the more challenging activity of extracting and retaining word-form units from fluent speech, a fundamental ability for starting to build a lexicon. Correlations between results from the language discrimination tasks and expressive vocabulary measures could not be systematically established. However, attention time to novel words in the word segmentation

  13. Cortical networks for vision and language in dyslexic and normal children of variable socio-economic status.

    Science.gov (United States)

    Monzalvo, Karla; Fluss, Joel; Billard, Catherine; Dehaene, Stanislas; Dehaene-Lambertz, Ghislaine

    2012-05-15

    In dyslexia, anomalous activations have been described in both left temporo-parietal language cortices and in left ventral visual occipito-temporal cortex. However, the reproducibility, task-dependency, and presence of these brain anomalies in childhood rather than adulthood remain debated. We probed the large-scale organization of ventral visual and spoken language areas in dyslexic children using minimal target-detection tasks that were performed equally well by all groups. In 23 normal and 23 dyslexic 10-year-old children from two different socio-economic status (SES) backgrounds, we compared fMRI activity to visually presented houses, faces, and written strings, and to spoken sentences in the native or in a foreign language. Our results confirm a disorganization of both ventral visual and spoken language areas in dyslexic children. Visually, dyslexic children showed a normal lateral-to-medial mosaic of preferences, as well as normal responses to houses and checkerboards, but a reduced activation to words in the visual word form area (VWFA) and to faces in the right fusiform face area (FFA). Auditorily, dyslexic children exhibited reduced responses to speech in posterior temporal cortex, left insula and supplementary motor area, as well as reduced responses to maternal language in subparts of the planum temporale, left basal language area and VWFA. By correlating these two findings, we identify spoken-language predictors of VWFA activation to written words, which differ for dyslexic and normal readers. Similarities in fMRI deficits in both SES groups emphasize the existence of a core set of brain activation anomalies in dyslexia, regardless of culture, language and SES, without however resolving whether these anomalies are a cause or a consequence of impaired reading. Copyright © 2012 Elsevier Inc. All rights reserved.

  14. Language choice in bimodal bilingual development

    Directory of Open Access Journals (Sweden)

    Diane eLillo-Martin

    2014-10-01

Full Text Available Bilingual children develop sensitivity to the language used by their interlocutors at an early age, reflected in differential use of each language by the child depending on their interlocutor. Factors such as discourse context and relative language dominance in the community may mediate the degree of language differentiation in preschool age children. Bimodal bilingual children, acquiring both a sign language and a spoken language, have an even more complex situation. Their Deaf parents vary considerably in access to the spoken language. Furthermore, in addition to code-mixing and code-switching, they use code-blending – expressions in both speech and sign simultaneously – an option uniquely available to bimodal bilinguals. Code-blending is analogous to code-switching sociolinguistically, but is also a way to communicate without suppressing one language. For adult bimodal bilinguals, complete suppression of the non-selected language is cognitively demanding. We expect that bimodal bilingual children also find suppression difficult, and use blending rather than suppression in some contexts. We also expect relative community language dominance to be a factor in children’s language choices. This study analyzes longitudinal spontaneous production data from four bimodal bilingual children and their Deaf and hearing interlocutors. Even at the earliest observations, the children produced more signed utterances with Deaf interlocutors and more speech with hearing interlocutors. However, while three of the four children produced >75% speech alone in speech target sessions, they produced <25% sign alone in sign target sessions. All four produced bimodal utterances in both, but more frequently in the sign sessions, potentially because they find suppression of the dominant language more difficult. Our results indicate that these children are sensitive to the language used by their interlocutors, while showing considerable influence from the dominant

  15. Measuring young children's language abilities.

    Science.gov (United States)

    Zink, I; Schaerlaekens, A

    2000-01-01

This article deals with the new challenges placed on language diagnosis and the growing need for good diagnostic instruments for young children. Particularly for Dutch, the original English Reynell Developmental Language Scales were adapted not only to the Dutch idiom; some general improvements and changes to the original scales also resulted in a new instrument named the RTOS. The new instrument was standardized on a large population and psychometrically evaluated. In communicating our experiences with such a language/cultural/psychometric adaptation, we hope that other language-minority groups will be encouraged to undertake similar adaptations.

  16. Coaching Parents to Use Naturalistic Language and Communication Strategies

    Science.gov (United States)

    Akamoglu, Yusuf; Dinnebeil, Laurie

    2017-01-01

    Naturalistic language and communication strategies (i.e., naturalistic teaching strategies) refer to practices that are used to promote the child's language and communication skills either through verbal (e.g., spoken words) or nonverbal (e.g., gestures, signs) interactions between an adult (e.g., parent, teacher) and a child. Use of naturalistic…

  17. Language Immersion in the Self-Study Mode E-Course

    Science.gov (United States)

    Sobolev, Olga

    2016-01-01

    This paper assesses the efficiency of the "Language Immersion e-Course" developed at the London School of Economics and Political Science (LSE) Language Centre. The new self-study revision e-course, promoting students' proficiency in spoken and aural Russian through autonomous learning, is based on the Michel Thomas method, and is…

  18. Towards a Sign Language Synthesizer: a Bridge to Communication Gap of the Hearing/Speech Impaired Community

    Science.gov (United States)

    Maarif, H. A.; Akmeliawati, R.; Gunawan, T. S.; Shafie, A. A.

    2013-12-01

A sign language synthesizer is a method of visualizing sign language movement from spoken language. Sign language (SL) is one of the means used by hearing/speech impaired (HSI) people to communicate with hearing people. Unfortunately, the number of people, including HSI people, who are familiar with sign language is very limited. This causes difficulties in communication between hearing people and HSI people. Sign language involves not only hand movement but also facial expression; the two elements complement each other. The hand movement conveys the meaning of each sign, while the facial expression conveys the emotion of the signer. Generally, a sign language synthesizer will recognize the spoken language by using speech recognition, the grammatical processing will involve a context-free grammar, and a 3D synthesizer will take part by animating a recorded avatar. This paper analyzes and compares the existing techniques for developing a sign language synthesizer, leading to the IIUM Sign Language Synthesizer.
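The middle stage of such a pipeline, spoken-language text in, sign glosses out, can be sketched in a few lines. The lexicon, the function-word list, and the fingerspelling fallback below are all invented for illustration and do not reflect the IIUM synthesizer's actual grammar or lexicon; a real system would parse with a context-free grammar and drive a 3D avatar from the resulting gloss stream:

```python
# Toy lexicon mapping spoken-language words to sign glosses
# (invented for illustration, not an actual SL lexicon).
LEXICON = {
    "i": "ME", "you": "YOU", "eat": "EAT", "rice": "RICE",
    "not": "NOT", "want": "WANT",
}
# Function words that many sign languages do not sign explicitly.
FUNCTION_WORDS = {"a", "an", "the", "do", "does", "am", "is", "are"}

def text_to_gloss(sentence: str) -> list[str]:
    """Map a spoken-language sentence to a gloss sequence.
    Function words are dropped; out-of-lexicon words fall back to
    fingerspelling, marked with an 'FS:' prefix."""
    glosses = []
    for tok in sentence.lower().strip(".?!").split():
        if tok in FUNCTION_WORDS:
            continue
        glosses.append(LEXICON.get(tok, f"FS:{tok.upper()}"))
    return glosses

print(text_to_gloss("I do not want the rice"))
# A downstream 3D synthesizer would play one avatar clip per gloss.
```

The sketch deliberately ignores reordering and non-manual markers (the facial-expression channel the abstract emphasizes), which is exactly where the grammar stage of a full synthesizer does its real work.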

  19. Bridging the Gap: The Development of Appropriate Educational Strategies for Minority Language Communities in the Philippines

    Science.gov (United States)

    Dekker, Diane; Young, Catherine

    2005-01-01

    There are more than 6000 languages spoken by the 6 billion people in the world today--however, those languages are not evenly divided among the world's population--over 90% of people globally speak only about 300 majority languages--the remaining 5700 languages being termed "minority languages". These languages represent the…

  20. The Relationship Between Second Language Anxiety and International Nursing Students Stress

    Science.gov (United States)

    Khawaja, Nigar G.; Chan, Sabrina; Stein, Georgia

    2017-01-01

    We examined the relationship between second language anxiety and international nursing student stress after taking into account the demographic, cognitive, and acculturative factors. International nursing students (N = 152) completed an online questionnaire battery. Hierarchical regression analysis revealed that spoken second language anxiety and…

  1. False-belief understanding and language ability mediate the relationship between emotion comprehension and prosocial orientation in preschoolers

    Directory of Open Access Journals (Sweden)

    Veronica Ornaghi

    2016-10-01

Full Text Available Emotion comprehension is known to be a key correlate and predictor of prosociality from early childhood. The present study looked at this relation within the broad theoretical construct of social understanding, which includes a number of socio-emotional skills as well as cognitive and linguistic abilities. Theory of mind, especially false-belief understanding, has been found to correlate positively with both emotion comprehension and prosocial orientation. Similarly, language ability is known to play a key role in children’s socio-emotional development. The combined contribution of false-belief understanding and language in explaining the relation between emotion comprehension and prosociality has yet to be investigated. Thus, in the current study, we conducted an in-depth exploration of how preschoolers’ false-belief understanding and language ability each contribute to modeling the relationship between their comprehension of emotion and their disposition to act prosocially towards others, after controlling for age and gender. Participants were 101 4- to 6-year-old children (54% boys), who were administered measures of language ability, false-belief understanding, emotion comprehension and prosocial orientation. Multiple mediation analysis of the data suggested that false-belief understanding and language ability jointly and fully mediated the effect of preschoolers’ emotion comprehension on their prosocial orientation. Analysis of covariates revealed that gender exerted no statistically significant effect, while age had a trivial positive effect. Theoretical and practical implications of the findings are discussed.

  2. Assessment of communication abilities in multilingual children: Language rights or human rights?

    Science.gov (United States)

    Cruz-Ferreira, Madalena

    2018-02-01

    Communication involves a sender, a receiver and a shared code operating through shared rules. Breach of communication results from disruption to any of these basic components of a communicative chain, although assessment of communication abilities typically focuses on senders/receivers, on two assumptions: first, that their command of features and rules of the language in question (the code), such as sounds, words or word order, as described in linguists' theorisations, represents the full scope of linguistic competence; and second, that languages are stable, homogeneous entities, unaffected by their users' communicative needs. Bypassing the role of the code in successful communication assigns decisive rights to abstract languages rather than to real-life language users, routinely leading to suspected or diagnosed speech-language disorder in academic and clinical assessment of multilingual children's communicative skills. This commentary reflects on whether code-driven assessment practices comply with the spirit of Article 19 of the Universal Declaration of Human Rights.

  3. The English-Language and Reading Achievement of a Cohort of Deaf Students Speaking and Signing Standard English: A Preliminary Study.

    Science.gov (United States)

    Nielsen, Diane Corcoran; Luetke, Barbara; McLean, Meigan; Stryker, Deborah

    2016-01-01

    Research suggests that English-language proficiency is critical if students who are deaf or hard of hearing (D/HH) are to read as their hearing peers. One explanation for the traditionally reported reading achievement plateau when students are D/HH is the inability to hear insalient English morphology. Signing Exact English can provide visual access to these features. The authors investigated the English morphological and syntactic abilities and reading achievement of elementary and middle school students at a school using simultaneously spoken and signed Standard American English facilitated by intentional listening, speech, and language strategies. A developmental trend (and no plateau) in language and reading achievement was detected; most participants demonstrated average or above-average English. Morphological awareness was prerequisite to high test scores; speech was not significantly correlated with achievement; language proficiency, measured by the Clinical Evaluation of Language Fundamentals-4 (Semel, Wiig, & Secord, 2003), predicted reading achievement.

  4. Working with the Bilingual Child Who Has a Language Delay. Meeting Learning Challenges

    Science.gov (United States)

    Greenspan, Stanley I.

    2005-01-01

    It is very important to determine if a bilingual child's language delay is simply in English or also in the child's native language. Understandably, many children have higher levels of language development in the language spoken at home. To discover if this is the case, observe the child talking with his parents. Sometimes, even without…

  5. Universal brain signature of proficient reading: Evidence from four contrasting languages.

    Science.gov (United States)

    Rueckl, Jay G; Paz-Alonso, Pedro M; Molfese, Peter J; Kuo, Wen-Jui; Bick, Atira; Frost, Stephen J; Hancock, Roeland; Wu, Denise H; Mencl, William Einar; Duñabeitia, Jon Andoni; Lee, Jun-Ren; Oliver, Myriam; Zevin, Jason D; Hoeft, Fumiko; Carreiras, Manuel; Tzeng, Ovid J L; Pugh, Kenneth R; Frost, Ram

    2015-12-15

    We propose and test a theoretical perspective in which a universal hallmark of successful literacy acquisition is the convergence of the speech and orthographic processing systems onto a common network of neural structures, regardless of how spoken words are represented orthographically in a writing system. During functional MRI, skilled adult readers of four distinct and highly contrasting languages, Spanish, English, Hebrew, and Chinese, performed an identical semantic categorization task to spoken and written words. Results from three complementary analytic approaches demonstrate limited language variation, with speech-print convergence emerging as a common brain signature of reading proficiency across the wide spectrum of selected languages, whether their writing system is alphabetic or logographic, whether it is opaque or transparent, and regardless of the phonological and morphological structure it represents.

  6. The language of football

    DEFF Research Database (Denmark)

    Rossing, Niels Nygaard; Skrubbeltrang, Lotte Stausgaard

    2014-01-01

    The language of football: A cultural analysis of selected World Cup nations. This essay describes how actions on the football field relate to the nations’ different cultural understanding of football and how these actions become spoken dialects within a language of football. Saussure reasoned language to have two components: a language system and language users (Danesi, 2003). Consequently, football can be characterized as a language containing a system with specific rules of the game and users with actual choices and actions within the game. All football players can be considered language users… levels (Schein, 2004) in which each player and his actions can be considered an artefact - a concrete symbol in motion embedded in espoused values and basic assumptions. Therefore, the actions of each dialect are strongly connected to the underlying understanding of football. By document and video…

  7. Examining Transcription, Autonomy and Reflective Practice in Language Development

    Science.gov (United States)

    Cooke, Simon D.

    2013-01-01

    This pilot study explores language development among a class of L2 students who were required to transcribe and reflect upon spoken performances. The class was given tasks for self and peer-evaluation and afforded the opportunity to assume more responsibility for assessing language development of both themselves and their peers. Several studies…

  8. Teaching English as a "Second Language" in Kenya and the United States: Convergences and Divergences

    Science.gov (United States)

    Roy-Campbell, Zaline M.

    2015-01-01

    English is spoken in five countries as the native language and in numerous other countries as an official language and the language of instruction. In countries where English is the native language, it is taught to speakers of other languages as an additional language to enable them to participate in all domains of life of that country. In many…

  9. Advances in natural language processing.

    Science.gov (United States)

    Hirschberg, Julia; Manning, Christopher D

    2015-07-17

    Natural language processing employs computational techniques for the purpose of learning, understanding, and producing human language content. Early computational approaches to language research focused on automating the analysis of the linguistic structure of language and developing basic technologies such as machine translation, speech recognition, and speech synthesis. Today's researchers refine and make use of such tools in real-world applications, creating spoken dialogue systems and speech-to-speech translation engines, mining social media for information about health or finance, and identifying sentiment and emotion toward products and services. We describe successes and challenges in this rapidly advancing area. Copyright © 2015, American Association for the Advancement of Science.
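    One of the applications mentioned above, mining text for sentiment, can be reduced to a minimal lexicon-lookup sketch. This is an illustrative toy, not any particular system's method; the word list and polarity scores are invented for the example:

    ```python
    # Toy lexicon-based sentiment scoring: sum per-word polarity values.
    # The lexicon entries and scores below are invented for illustration.
    LEXICON = {"good": 1, "great": 2, "love": 2, "bad": -1, "awful": -2, "hate": -2}

    def sentiment_score(text: str) -> int:
        """Sum the polarity of every known word; unknown words score 0."""
        return sum(LEXICON.get(word, 0) for word in text.lower().split())

    print(sentiment_score("I love this great product"))  # 2 + 2 = 4
    print(sentiment_score("awful and I hate it"))        # -2 + -2 = -4
    ```

    Real sentiment systems add negation handling, tokenization, and learned weights, but the core lookup-and-aggregate idea is the same.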

  10. Continuous-speech segmentation at the beginning of language acquisition: electrophysiological evidence

    NARCIS (Netherlands)

    Kooijman, V.M.

    2007-01-01

    Word segmentation, or detecting word boundaries in continuous speech, is not an easy task. Spoken language does not contain silences to indicate word boundaries, and words partly overlap due to coarticulation. Still, adults listening to their native language perceive speech as individual words. They

  11. Neural dynamics of morphological processing in spoken word comprehension: Laterality and automaticity

    Directory of Open Access Journals (Sweden)

    Caroline M. Whiting

    2013-11-01

    Full Text Available Rapid and automatic processing of grammatical complexity is argued to take place during speech comprehension, engaging a left-lateralised fronto-temporal language network. Here we address how neural activity in these regions is modulated by the grammatical properties of spoken words. We used combined magneto- and electroencephalography (MEG, EEG) to delineate the spatiotemporal patterns of activity that support the recognition of morphologically complex words in English with inflectional (-s) and derivational (-er) affixes (e.g. bakes, baker). The mismatch negativity (MMN), an index of linguistic memory traces elicited in a passive listening paradigm, was used to examine the neural dynamics elicited by morphologically complex words. Results revealed an initial peak 130-180 ms after the deviation point with a major source in left superior temporal cortex. The localisation of this early activation showed a sensitivity to two grammatical properties of the stimuli: (1) the presence of morphological complexity, with affixed words showing increased left-laterality compared to non-affixed words; and (2) the grammatical category, with affixed verbs showing greater left-lateralisation in inferior frontal gyrus compared to affixed nouns (bakes vs. beaks). This automatic brain response was additionally sensitive to semantic coherence (the meaning of the stem vs. the meaning of the whole form) in fronto-temporal regions. These results demonstrate that the spatiotemporal pattern of neural activity in spoken word processing is modulated by the presence of morphological structure, predominantly engaging the left-hemisphere’s fronto-temporal language network, and does not require focused attention on the linguistic input.

  12. Morphosyntactic constructs in the development of spoken and written Hebrew text production.

    Science.gov (United States)

    Ravid, Dorit; Zilberbuch, Shoshana

    2003-05-01

    This study examined the distribution of two Hebrew nominal structures-N-N compounds and denominal adjectives-in spoken and written texts of two genres produced by 90 native-speaking participants in three age groups: eleven/twelve-year-olds (6th graders), sixteen/seventeen-year-olds (11th graders), and adults. The two constructions are later linguistic acquisitions, part of the profound lexical and syntactic changes that occur in language development during the school years. They are investigated in the context of learning how modality (speech vs. writing) and genre (biographical vs. expository texts) affect the production of continuous discourse. Participants were asked to speak and write about two topics, one biographical, describing the life of a public figure or of a friend; and another, expository, discussing one of ten topics such as the cinema, cats, or higher academic studies. N-N compounding was found to be the main device of complex subcategorization in Hebrew discourse, unrelated to genre. Denominal adjectives are a secondary subcategorizing device emerging only during the late teen years, a linguistic resource untapped until very late, more restricted to specific text types than N-N compounding, and characteristic of expository writing. Written texts were found to be denser than spoken texts lexically and syntactically as measured by number of novel N-N compounds and denominal adjectives per clause, and in older age groups this difference was found to be more pronounced. The paper contributes to our understanding of how the syntax/lexicon interface changes with age, modality and genre in the context of later language acquisition.

  13. ORIGINAL ARTICLES How do doctors learn the spoken language of ...

    African Journals Online (AJOL)

    2009-07-01

    Jul 1, 2009 ... and cultural metaphors of illness as part of language learning. The theory of .... role.21 Even in a military setting, where soldiers learnt Korean or Spanish as part of ... own language – a cross-cultural survey. Brit J Gen Pract ...

  14. Listening in first and second language

    NARCIS (Netherlands)

    Farrell, J.; Cutler, A.; Liontas, J.I.

    2018-01-01

    Listeners' recognition of spoken language involves complex decoding processes: The continuous speech stream must be segmented into its component words, and words must be recognized despite great variability in their pronunciation (due to talker differences, or to influence of phonetic context, or to

  15. Competition dynamics of second-language listening

    NARCIS (Netherlands)

    Broersma, M.; Cutler, A.

    2011-01-01

    Spoken-word recognition in a nonnative language is particularly difficult where it depends on discrimination between confusable phonemes. Four experiments here examine whether this difficulty is in part due to phantom competition from "near-words" in speech. Dutch listeners confuse English /ae/ and

  16. Key Data on Teaching Languages at School in Europe. 2017 Edition. Eurydice Report

    Science.gov (United States)

    Baïdak, Nathalie; Balcon, Marie-Pascale; Motiejunaite, Akvile

    2017-01-01

    Linguistic diversity is part of Europe's DNA. It embraces not only the official languages of Member States, but also the regional and/or minority languages spoken for centuries on European territory, as well as the languages brought by the various waves of migrants. The coexistence of this variety of languages constitutes an asset, but it is also…

  17. Morphosyntactic correctness of written language production in adults with moderate to severe congenital hearing loss

    NARCIS (Netherlands)

    Huysmans, Elke; de Jong, Jan; Festen, Joost M.; Coene, Martine M.R.; Goverts, S. Theo

    2017-01-01

    Objective To examine whether moderate to severe congenital hearing loss (MSCHL) leads to persistent morphosyntactic problems in the written language production of adults, as it does in their spoken language production. Design Samples of written language in Dutch were analysed for morphosyntactic

  18. Alpha and theta brain oscillations index dissociable processes in spoken word recognition.

    Science.gov (United States)

    Strauß, Antje; Kotz, Sonja A; Scharinger, Mathias; Obleser, Jonas

    2014-08-15

    Slow neural oscillations (~1-15 Hz) are thought to orchestrate the neural processes of spoken language comprehension. However, functional subdivisions within this broad range of frequencies are disputed, with most studies hypothesizing only about single frequency bands. The present study utilizes an established paradigm of spoken word recognition (lexical decision) to test the hypothesis that within the slow neural oscillatory frequency range, distinct functional signatures and cortical networks can be identified at least for theta- (~3-7 Hz) and alpha-frequencies (~8-12 Hz). Listeners performed an auditory lexical decision task on a set of items that formed a word-pseudoword continuum: ranging from (1) real words over (2) ambiguous pseudowords (deviating from real words only in one vowel; comparable to natural mispronunciations in speech) to (3) pseudowords (clearly deviating from real words by randomized syllables). By means of time-frequency analysis and spatial filtering, we observed a dissociation into distinct but simultaneous patterns of alpha power suppression and theta power enhancement. Alpha exhibited a parametric suppression as items increasingly matched real words, in line with lowered functional inhibition in a left-dominant lexical processing network for more word-like input. Simultaneously, theta power in a bilateral fronto-temporal network was selectively enhanced for ambiguous pseudowords only. Thus, enhanced alpha power can neurally 'gate' lexical integration, while enhanced theta power might index functionally more specific ambiguity-resolution processes. To this end, a joint analysis of both frequency bands provides neural evidence for parallel processes in achieving spoken word recognition. Copyright © 2014 Elsevier Inc. All rights reserved.
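    The band definitions in the record above (theta ~3-7 Hz, alpha ~8-12 Hz) can be illustrated with a toy band-power computation on a synthetic signal. This is a plain-DFT sketch, not the time-frequency and spatial-filtering pipeline of the study; the sampling rate and signal components are invented:

    ```python
    import cmath, math

    def dft_power(signal):
        """Naive DFT; returns power at each frequency bin."""
        n = len(signal)
        return [abs(sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                        for t in range(n))) ** 2 / n
                for k in range(n)]

    def band_power(signal, fs, lo, hi):
        """Total power in the [lo, hi] Hz band (positive frequencies only)."""
        power = dft_power(signal)
        n = len(signal)
        return sum(p for k, p in enumerate(power[:n // 2]) if lo <= k * fs / n <= hi)

    fs = 64                                  # sampling rate in Hz (invented)
    t = [i / fs for i in range(fs)]          # one second of signal
    # synthetic EEG-like signal: a 5 Hz (theta) plus a weaker 10 Hz (alpha) component
    sig = [math.sin(2 * math.pi * 5 * ti) + 0.5 * math.sin(2 * math.pi * 10 * ti)
           for ti in t]

    theta = band_power(sig, fs, 3, 7)    # captures the 5 Hz component
    alpha = band_power(sig, fs, 8, 12)   # captures the weaker 10 Hz component
    print(theta > alpha)                 # True
    ```

    Production analyses use windowed FFTs or wavelets over many trials, but the band-limited power summation is the same underlying quantity.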

  19. Cognitive abilities underlying second-language vocabulary acquisition in an early second-language immersion education context: a longitudinal study.

    Science.gov (United States)

    Nicolay, Anne-Catherine; Poncelet, Martine

    2013-08-01

    First-language (L1) and second-language (L2) lexical development has been found to be strongly associated with phonological processing abilities such as phonological short-term memory (STM), phonological awareness, and speech perception. Lexical development also seems to be linked to attentional and executive skills such as auditory attention, flexibility, and response inhibition. The aim of this four-wave longitudinal study was to determine to what extent L2 vocabulary acquired through the particular school context of early L2 immersion education is linked to the same cognitive abilities. A total of 61 French-speaking 5-year-old kindergartners who had just been enrolled in English immersion classes were administered a battery of tasks assessing these three phonological processing abilities and three attentional/executive skills. Their English vocabulary knowledge was measured 1, 2, and 3 school years later. Multiple regression analyses showed that, among the assessed phonological processing abilities, phonological STM and speech perception, but not phonological awareness, appeared to underlie L2 vocabulary acquisition in this context of an early L2 immersion school program, at least during the first steps of acquisition. Similarly, among the assessed attentional/executive skills, auditory attention and flexibility, but not response inhibition, appeared to be involved during the first steps of L2 vocabulary acquisition in such an immersion school context. Copyright © 2013 Elsevier Inc. All rights reserved.

  20. Language Planning for the 21st Century: Revisiting Bilingual Language Policy for Deaf Children

    Science.gov (United States)

    Knoors, Harry; Marschark, Marc

    2012-01-01

    For over 25 years in some countries and more recently in others, bilingual education involving sign language and the written/spoken vernacular has been considered an essential educational intervention for deaf children. With the recent growth in universal newborn hearing screening and technological advances such as digital hearing aids and…

  1. Tone Language Speakers and Musicians Share Enhanced Perceptual and Cognitive Abilities for Musical Pitch: Evidence for Bidirectionality between the Domains of Language and Music

    Science.gov (United States)

    Bidelman, Gavin M.; Hutka, Stefanie; Moreno, Sylvain

    2013-01-01

    Psychophysiological evidence suggests that music and language are intimately coupled such that experience/training in one domain can influence processing required in the other domain. While the influence of music on language processing is now well-documented, evidence of language-to-music effects have yet to be firmly established. Here, using a cross-sectional design, we compared the performance of musicians to that of tone-language (Cantonese) speakers on tasks of auditory pitch acuity, music perception, and general cognitive ability (e.g., fluid intelligence, working memory). While musicians demonstrated superior performance on all auditory measures, comparable perceptual enhancements were observed for Cantonese participants, relative to English-speaking nonmusicians. These results provide evidence that tone-language background is associated with higher auditory perceptual performance for music listening. Musicians and Cantonese speakers also showed superior working memory capacity relative to nonmusician controls, suggesting that in addition to basic perceptual enhancements, tone-language background and music training might also be associated with enhanced general cognitive abilities. Our findings support the notion that tone language speakers and musically trained individuals have higher performance than English-speaking listeners for the perceptual-cognitive processing necessary for basic auditory as well as complex music perception. These results illustrate bidirectional influences between the domains of music and language. PMID:23565267

  2. Tone language speakers and musicians share enhanced perceptual and cognitive abilities for musical pitch: evidence for bidirectionality between the domains of language and music.

    Science.gov (United States)

    Bidelman, Gavin M; Hutka, Stefanie; Moreno, Sylvain

    2013-01-01

    Psychophysiological evidence suggests that music and language are intimately coupled such that experience/training in one domain can influence processing required in the other domain. While the influence of music on language processing is now well-documented, evidence of language-to-music effects have yet to be firmly established. Here, using a cross-sectional design, we compared the performance of musicians to that of tone-language (Cantonese) speakers on tasks of auditory pitch acuity, music perception, and general cognitive ability (e.g., fluid intelligence, working memory). While musicians demonstrated superior performance on all auditory measures, comparable perceptual enhancements were observed for Cantonese participants, relative to English-speaking nonmusicians. These results provide evidence that tone-language background is associated with higher auditory perceptual performance for music listening. Musicians and Cantonese speakers also showed superior working memory capacity relative to nonmusician controls, suggesting that in addition to basic perceptual enhancements, tone-language background and music training might also be associated with enhanced general cognitive abilities. Our findings support the notion that tone language speakers and musically trained individuals have higher performance than English-speaking listeners for the perceptual-cognitive processing necessary for basic auditory as well as complex music perception. These results illustrate bidirectional influences between the domains of music and language.

  3. Tone language speakers and musicians share enhanced perceptual and cognitive abilities for musical pitch: evidence for bidirectionality between the domains of language and music.

    Directory of Open Access Journals (Sweden)

    Gavin M Bidelman

    Full Text Available Psychophysiological evidence suggests that music and language are intimately coupled such that experience/training in one domain can influence processing required in the other domain. While the influence of music on language processing is now well-documented, evidence of language-to-music effects have yet to be firmly established. Here, using a cross-sectional design, we compared the performance of musicians to that of tone-language (Cantonese) speakers on tasks of auditory pitch acuity, music perception, and general cognitive ability (e.g., fluid intelligence, working memory). While musicians demonstrated superior performance on all auditory measures, comparable perceptual enhancements were observed for Cantonese participants, relative to English-speaking nonmusicians. These results provide evidence that tone-language background is associated with higher auditory perceptual performance for music listening. Musicians and Cantonese speakers also showed superior working memory capacity relative to nonmusician controls, suggesting that in addition to basic perceptual enhancements, tone-language background and music training might also be associated with enhanced general cognitive abilities. Our findings support the notion that tone language speakers and musically trained individuals have higher performance than English-speaking listeners for the perceptual-cognitive processing necessary for basic auditory as well as complex music perception. These results illustrate bidirectional influences between the domains of music and language.

  4. Language and the origin of numerical concepts.

    Science.gov (United States)

    Gelman, Rochel; Gallistel, C R

    2004-10-15

    Reports of research with the Pirahã and Mundurukú Amazonian Indians of Brazil lend themselves to discussions of the role of language in the origin of numerical concepts. The research findings indicate that, whether or not humans have an extensive counting list, they share with nonverbal animals a language-independent representation of number, with limited, scale-invariant precision. What causal role, then, does knowledge of the language of counting serve? We consider the strong Whorfian proposal, that of linguistic determinism; the weak Whorfian hypothesis, that language influences how we think; and that the "language of thought" maps to spoken language or symbol systems.

  5. Socio-Pragmatic Problems in Foreign Language Teaching

    Directory of Open Access Journals (Sweden)

    İsmail ÇAKIR

    2006-10-01

    Full Text Available It is a fact that language is a means of communication for human beings. People who need to have social interaction should share the same language, beliefs, values, etc., in a given society. It can be stated that when learning a foreign language, mastering only the linguistic features of the FL probably does not ensure true spoken and written communication. This study aims to deal with socio-pragmatic problems which the learners may be confronted with while learning and using the foreign language. In particular, the cultural elements and values of the target language, such as idioms, proverbs and metaphors, and their role in foreign language teaching are focused on.

  6. Pinpointing the classifiers of English language writing ability: A discriminant function analysis approach

    Directory of Open Access Journals (Sweden)

    Mohammad Ali Shams

    2013-02-01

    Full Text Available The major aim of this paper was to investigate the validity of language and intelligence factors for classifying Iranian English learners' writing performance. Iranian participants of the study took three tests for grammar, breadth, and depth of vocabulary, and two tests for verbal and narrative intelligence. They also produced a corpus of argumentative writings in answer to IELTS specimen. Several runs of discriminant function analyses were used to examine the classifying power of the five variables for discriminating between low and high ability L2 writers. The results revealed that among language factors, depth of vocabulary (collocational knowledge) produces the best discriminant function. In general, narrative intelligence was found to be the most reliable predictor for membership in low or high groups. It was also found that, among the five sub-abilities of narrative intelligence, emplotment carries the highest classifying value. Finally, the applications and implications of the results for second language researchers, cognitive scientists, and applied linguists were discussed.
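    Two-group discriminant function analysis of the kind applied in the record above can be sketched with Fisher's linear discriminant under a diagonal pooled-covariance assumption. The predictor names and all scores below are invented for illustration, not the study's data:

    ```python
    # Toy two-group linear discriminant (diagonal pooled covariance).
    # Feature tuples are invented (vocabulary depth, grammar) scores.

    def mean(xs):
        return sum(xs) / len(xs)

    def variance(xs):
        m = mean(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    def fit_lda(group_a, group_b):
        """Return (weights, threshold) separating two groups of feature tuples."""
        dims = range(len(group_a[0]))
        mu_a = [mean([s[d] for s in group_a]) for d in dims]
        mu_b = [mean([s[d] for s in group_b]) for d in dims]
        # pooled per-dimension variance (diagonal covariance assumption)
        pooled = [(variance([s[d] for s in group_a]) +
                   variance([s[d] for s in group_b])) / 2 for d in dims]
        w = [(mu_a[d] - mu_b[d]) / pooled[d] for d in dims]
        midpoint = [(mu_a[d] + mu_b[d]) / 2 for d in dims]
        threshold = sum(w[d] * midpoint[d] for d in dims)
        return w, threshold

    def classify(sample, w, threshold):
        score = sum(wd * xd for wd, xd in zip(w, sample))
        return "high" if score > threshold else "low"

    high = [(85, 30), (90, 34), (88, 31)]   # invented high-ability writers
    low  = [(60, 20), (65, 18), (58, 22)]   # invented low-ability writers
    w, thr = fit_lda(high, low)
    print(classify((87, 32), w, thr))       # high
    print(classify((59, 19), w, thr))       # low
    ```

    Full discriminant function analysis also estimates off-diagonal covariances and reports standardized coefficients; this sketch keeps only the classification core.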

  7. Channel-dependent GMM and multi-class logistic: Regression models for language recognition

    NARCIS (Netherlands)

    Leeuwen, D.A. van; Brümmer, Niko

    2006-01-01

    This paper describes two new approaches to spoken language recognition. These were both successfully applied in the NIST 2005 Language Recognition Evaluation. The first approach extends the Gaussian Mixture Model technique with channel dependency, which results in actual detection costs (CDET) of
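    The generative idea behind GMM language recognition (score an utterance's features under each language's model and pick the highest likelihood) can be sketched with one diagonal Gaussian per language. Real systems use many-component mixtures over cepstral features plus channel compensation; the languages, feature values, and model parameters here are all invented:

    ```python
    import math

    def log_gauss(x, mu, var):
        """Log density of a diagonal Gaussian."""
        return sum(-0.5 * (math.log(2 * math.pi * v) + (xi - m) ** 2 / v)
                   for xi, m, v in zip(x, mu, var))

    # per-language (mean, variance) over invented 2-D features
    MODELS = {
        "dutch":   ([0.0, 1.0], [1.0, 0.5]),
        "english": ([2.0, -1.0], [0.8, 0.7]),
    }

    def recognize(features):
        """Return the language whose model gives the features the highest likelihood."""
        return max(MODELS, key=lambda lang: log_gauss(features, *MODELS[lang]))

    print(recognize([0.1, 0.9]))     # dutch
    print(recognize([1.9, -1.2]))    # english
    ```

    The channel-dependent extension in the record conditions these models on the recording channel, which changes the parameters but not this scoring rule.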

  8. Play to Learn: Self-Directed Home Language Literacy Acquisition through Online Games

    Science.gov (United States)

    Eisenchlas, Susana A.; Schalley, Andrea C.; Moyes, Gordon

    2016-01-01

    Home language literacy education in Australia has been pursued predominantly through Community Language Schools. At present, some 1,000 of these, attended by over 100,000 school-age children, cater for 69 of the over 300 languages spoken in Australia. Despite good intentions, these schools face a number of challenges. For instance, children may…

  9. A grammar of Tadaksahak a northern Songhay language of Mali

    NARCIS (Netherlands)

    Christiansen-Bolli, Regula

    2010-01-01

    This dissertation is a descriptive grammar of the language Tadaksahak spoken by about 30,000 people living in the most eastern part of Mali. The four chapters of the book give 1. Information about the background of the group. 2. The phonological features of the language with the inventory of the

  10. State-of-the-art in the development of the Lokono language

    NARCIS (Netherlands)

    Rybka, K.

    2015-01-01

    Lokono is a critically endangered Northern Arawakan language spoken in the peri-coastal areas of the Guianas (Guyana, Suriname, French Guiana). Today, in every Lokono village there remains only a small number of elderly native speakers. However, in spite of the ongoing language loss, across the

  11. The Differences Between Men And Women Language Styles In Writing Twitter Updates

    OpenAIRE

    FATIN, MARSHELINA

    2014-01-01

    Fatin, Marshelina. 2013. The Differences between Men and Women Language Styles in Writing Twitter Updates. Study Program of English, Universitas Brawijaya. Supervisor: Isti Purwaningtyas; Co-supervisor: Muhammad Rozin. Keywords: Twitter, Twitter updates, Language style, Men language, Women language. The language used by people shows many differences. These differences are associated with men and women, that is, with gender. If there are differences in spoken language, written lang...

  12. The Arabic Natural Language Processing: Introduction and Challenges

    Directory of Open Access Journals (Sweden)

    Boukhatem Nadera

    2014-09-01

    Full Text Available Arabic is a Semitic language spoken by more than 330 million people as a native language, in an area extending from the Arabian/Persian Gulf in the East to the Atlantic Ocean in the West. Moreover, it is the language in which 1.4 billion Muslims around the world perform their daily prayers. Over the last few years, Arabic natural language processing (ANLP) has gained increasing importance, and several state-of-the-art systems have been developed for a wide range of applications.

  13. IMPACT ON THE INDIGENOUS LANGUAGES SPOKEN IN NIGERIA ...

    African Journals Online (AJOL)

    In the face of globalisation, the scale of communication is increasing from being merely .... capital goods and services across national frontiers involving too, political contexts of ... auditory and audiovisual entertainment, the use of English dominates. The language .... manners, entertainment, sports, the legal system, etc.

  14. Musical, language and reading abilities in early Portuguese readers

    Directory of Open Access Journals (Sweden)

    Jennifer eZuk

    2013-06-01

    Full Text Available Early language and reading abilities have been shown to correlate with a variety of musical skills and elements of music perception in children. It has also been shown that reading impaired children can show difficulties with music perception. However, it is still unclear to what extent different aspects of music perception are associated with language and reading abilities. Here we investigated the relationship between cognitive-linguistic abilities and a music discrimination task that preserves an ecologically valid musical experience. Forty-three Portuguese-speaking students from an elementary school in Brazil participated in this study. Children completed a comprehensive cognitive-linguistic battery of assessments. The music task was presented live in the music classroom, and children were asked to code sequences of four sounds on the guitar. Results show a strong relationship between performance on the music task and a number of linguistic variables. A Principal Component Analysis of the cognitive-linguistic battery revealed that the strongest component (Prin1) accounted for 33% of the variance and Prin1 was significantly related to the music task. Highest loadings on Prin1 were found for reading measures such as Reading Speed and Reading Accuracy. Interestingly, twenty-two children recorded responses for more than four sounds within a trial on the music task, which was classified as Superfluous Responses (SR). SR was negatively correlated with a variety of linguistic variables and showed a negative correlation with Prin1. When analyzing children with and without SR separately, only children with SR showed a significant correlation between Prin1 and the music task. Our results provide implications for the use of an ecologically valid music-based screening tool for the early identification of reading disabilities in a classroom setting.
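    The variance share a first principal component captures, as reported for Prin1 above, can be illustrated with a two-variable PCA, where the 2x2 covariance matrix has closed-form eigenvalues. The reading-speed and accuracy scores below are invented, not the study's data:

    ```python
    import math

    # Toy two-variable PCA via the closed-form eigenvalues of a 2x2
    # covariance matrix [[a, b], [b, c]]: lambda = (a+c)/2 +/- sqrt(((a-c)/2)^2 + b^2).

    def covariance(xs, ys):
        mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
        return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)

    def first_pc_share(xs, ys):
        """Fraction of total variance captured by the first principal component."""
        a, c = covariance(xs, xs), covariance(ys, ys)
        b = covariance(xs, ys)
        mid, half = (a + c) / 2, math.sqrt(((a - c) / 2) ** 2 + b ** 2)
        lam1, lam2 = mid + half, mid - half
        return lam1 / (lam1 + lam2)

    speed    = [10, 12, 14, 16, 18]   # invented reading-speed scores
    accuracy = [20, 23, 27, 32, 35]   # invented accuracy scores, strongly correlated

    share = first_pc_share(speed, accuracy)
    print(round(share, 3))
    ```

    Because the two invented measures are almost perfectly correlated, nearly all variance falls on the first component; the study's 33% for Prin1 reflects a much larger and less redundant battery.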

  15. Singing can facilitate foreign language learning.

    Science.gov (United States)

    Ludke, Karen M; Ferreira, Fernanda; Overy, Katie

    2014-01-01

    This study presents the first experimental evidence that singing can facilitate short-term paired-associate phrase learning in an unfamiliar language (Hungarian). Sixty adult participants were randomly assigned to one of three "listen-and-repeat" learning conditions: speaking, rhythmic speaking, or singing. Participants in the singing condition showed superior overall performance on a collection of Hungarian language tests after a 15-min learning period, as compared with participants in the speaking and rhythmic speaking conditions. This superior performance was statistically significant (p < .05), suggesting that a "listen-and-sing" learning method can facilitate verbatim memory for spoken foreign language phrases.

  16. Phonological memory in sign language relies on the visuomotor neural system outside the left hemisphere language network.

    Science.gov (United States)

    Kanazawa, Yuji; Nakamura, Kimihiro; Ishii, Toru; Aso, Toshihiko; Yamazaki, Hiroshi; Omori, Koichi

    2017-01-01

    Sign language is an essential medium for everyday social interaction for deaf people and plays a critical role in verbal learning. In particular, language development in those people should heavily rely on the verbal short-term memory (STM) via sign language. Most previous studies compared neural activations during signed language processing in deaf signers and those during spoken language processing in hearing speakers. For sign language users, it thus remains unclear how visuospatial inputs are converted into the verbal STM operating in the left-hemisphere language network. Using functional magnetic resonance imaging, the present study investigated neural activation while bilinguals of spoken and signed language were engaged in a sequence memory span task. On each trial, participants viewed a nonsense syllable sequence presented either as written letters or as fingerspelling (4-7 syllables in length) and then held the syllable sequence for 12 s. Behavioral analysis revealed that participants relied on phonological memory while holding verbal information regardless of the type of input modality. At the neural level, this maintenance stage broadly activated the left-hemisphere language network, including the inferior frontal gyrus, supplementary motor area, superior temporal gyrus and inferior parietal lobule, for both letter and fingerspelling conditions. Interestingly, while most participants reported that they relied on phonological memory during maintenance, direct comparisons between letters and fingers revealed strikingly different patterns of neural activation during the same period. Namely, the effortful maintenance of fingerspelling inputs relative to letter inputs activated the left superior parietal lobule and dorsal premotor area, i.e., brain regions known to play a role in visuomotor analysis of hand/arm movements. These findings suggest that the dorsal visuomotor neural system subserves verbal learning via sign language by relaying gestural inputs to…

  17. Practitioner review: the assessment of language pragmatics.

    Science.gov (United States)

    Adams, Catherine

    2002-11-01

    The assessment of pragmatics expressed in spoken language is a central issue in the evaluation of children with communication impairments and related disorders. A developmental approach to assessment has remained problematic due to the complex interaction of social, linguistic, cognitive and cultural influences on pragmatics. A selective review and critique of current formal and informal testing methods and pragmatic analytic procedures. Formal testing of pragmatics has limited potential to reveal the typical pragmatic abnormalities in interaction but has a significant role to play in the assessment of comprehension of pragmatic intent. Clinical assessment of pragmatics with the pre-school child should focus on elicitation of communicative intent via naturalistic methods as part of an overall assessment of social communication skills. Assessments for older children should include a comprehensive investigation of speech acts, conversational and narrative abilities, the understanding of implicature and intent as well as the child's ability to employ contextual cues to understanding. Practical recommendations are made regarding the choice of a core set of pragmatic assessments and elicitation techniques. The practitioner's attention is drawn to the lack of the usual safeguards of reliability and validity that have persisted in some language pragmatics assessments. A core set of pragmatic assessment tools can be identified from the proliferation of instruments in current use. Further research is required to establish clearer norms and ranges in the development of pragmatic ability, particularly with respect to the understanding of inference, topic management and coherence.

  18. On Spoken English Phoneme Evaluation Method Based on Sphinx-4 Computer System

    Directory of Open Access Journals (Sweden)

    Li Qin

    2017-12-01

    Full Text Available In oral English learning, HDPs (phonemes that are hard to distinguish) are areas where Chinese students frequently make mistakes in pronunciation. This paper studies a speech phoneme evaluation method for HDPs, hoping to improve the ability of individualized evaluation of HDPs and help provide a personalized learning platform for English learners. First of all, this paper briefly introduces relevant phonetic recognition technologies and pronunciation evaluation algorithms, and also describes the phonetic retrieving, phonetic decoding and phonetic knowledge base in the Sphinx-4 computer system, which constitute the technological foundation for phoneme evaluation. Then it proposes an HDP evaluation model, which integrates the reliability of the speech processing system and the individualization of spoken English learners into the evaluation system. After collecting HDPs of spoken English learners and sorting them into different sets, it uses the evaluation system to recognize these HDP sets, and at last analyzes the experimental results of HDP evaluation, which prove the effectiveness of the HDP evaluation model.
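The scoring step of such a phoneme-evaluation pipeline can be illustrated with a minimal sketch: once a recognizer such as Sphinx-4 has decoded a learner's utterance into a phoneme sequence, that sequence can be aligned against the target pronunciation by edit distance. This is a generic illustration, not the paper's actual HDP evaluation model; the phoneme labels and the scoring formula are assumptions.

```python
def phoneme_edit_distance(ref, hyp):
    """Levenshtein distance between two phoneme sequences."""
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)]

def pronunciation_score(ref, hyp):
    """Accuracy in [0, 1]: 1.0 means the recognized phonemes match the target."""
    if not ref:
        return 1.0 if not hyp else 0.0
    return max(0.0, 1.0 - phoneme_edit_distance(ref, hyp) / len(ref))

# A classic HDP case for Chinese learners: "think" /TH IH NG K/
# produced as "sink" /S IH NG K/ is one substitution out of four phonemes.
score = pronunciation_score(["TH", "IH", "NG", "K"], ["S", "IH", "NG", "K"])  # 0.75
```

A score of 0.75 here reflects one substituted phoneme out of four in the reference sequence; a real system would aggregate such scores per HDP set.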

  19. ASSESSING THE SO CALLED MARKED INFLECTIONAL FEATURES OF NIGERIAN ENGLISH: A SECOND LANGUAGE ACQUISITION THEORY ACCOUNT

    OpenAIRE

    Boluwaji Oshodi

    2014-01-01

    There are conflicting claims among scholars on whether the structural outputs of the types of English spoken in countries where English is used as a second language give such speech forms the status of varieties of English. This study examined those morphological features considered to be marked features of the variety spoken in Nigeria according to Kirkpatrick (2011) and the variety spoken in Malaysia by considering the claims of the Missing Surface Inflection Hypothesis (MSIH) a Second Lan...

  20. The influence of talker and foreign-accent variability on spoken word identification.

    Science.gov (United States)

    Bent, Tessa; Holt, Rachael Frush

    2013-03-01

    In spoken word identification and memory tasks, stimulus variability from numerous sources impairs performance. In the current study, the influence of foreign-accent variability on spoken word identification was evaluated in two experiments. Experiment 1 used a between-subjects design to test word identification in noise in single-talker and two multiple-talker conditions: multiple talkers with the same accent and multiple talkers with different accents. Identification performance was highest in the single-talker condition, but there was no difference between the single-accent and multiple-accent conditions. Experiment 2 further explored word recognition for multiple talkers in single-accent versus multiple-accent conditions using a mixed design. A detriment to word recognition was observed in the multiple-accent condition compared to the single-accent condition, but the effect differed across the language backgrounds tested. These results demonstrate that the processing of foreign-accent variation may influence word recognition in ways similar to other sources of variability (e.g., speaking rate or style) in that the inclusion of multiple foreign accents can result in a small but significant performance decrement beyond the multiple-talker effect.

  1. Language and Literacy Development of Deaf and Hard-of-Hearing Children: Successes and Challenges

    Science.gov (United States)

    Lederberg, Amy R.; Schick, Brenda; Spencer, Patricia E.

    2013-01-01

    Childhood hearing loss presents challenges to language development, especially spoken language. In this article, we review existing literature on deaf and hard-of-hearing (DHH) children's patterns and trajectories of language as well as development of theory of mind and literacy. Individual trajectories vary significantly, reflecting access to…

  2. The Link between Form and Meaning in American Sign Language: Lexical Processing Effects

    Science.gov (United States)

    Thompson, Robin L.; Vinson, David P.; Vigliocco, Gabriella

    2009-01-01

    Signed languages exploit iconicity (the transparent relationship between meaning and form) to a greater extent than spoken languages, where it is largely limited to onomatopoeia. In a picture-sign matching experiment measuring reaction times, the authors examined the potential advantage of iconicity both for 1st- and 2nd-language learners of…

  3. The Ecology of Language in Classrooms at a University in Eastern Ukraine

    Science.gov (United States)

    Tarnopolsky, Oleg B.; Goodman, Bridget A.

    2014-01-01

    Using an ecology of language framework, the purpose of this study was to examine the degree to which English as a medium of instruction (EMI) at a private university in eastern Ukraine allows for the use of Ukrainian, the state language, or Russian, the language predominantly spoken in large cities in eastern Ukraine. Uses of English and Russian…

  4. Mapudungun According to Its Speakers: Mapuche Intellectuals and the Influence of Standard Language Ideology

    Science.gov (United States)

    Lagos, Cristián; Espinoza, Marco; Rojas, Darío

    2013-01-01

    In this paper, we analyse the cultural models (or folk theory of language) that the Mapuche intellectual elite have about Mapudungun, the native language of the Mapuche people still spoken today in Chile as the major minority language. Our theoretical frame is folk linguistics and studies of language ideology, but we have also taken an applied…

  5. The role of foreign and indigenous languages in primary schools ...

    African Journals Online (AJOL)

    This article investigates the use of English and other African languages in Kenyan primary schools. English is a .... For a long time, the issue of the medium of instruction, in especially primary schools, has persisted in spite of .... mother tongue, they use this language for spoken classroom interaction in order to bring about.

  6. Language-Building Activities and Interaction Variations with Mixed-Ability ESL University Learners in a Content-Based Course

    Science.gov (United States)

    Serna Dimas, Héctor Manuel; Ruíz Castellanos, Erika

    2014-01-01

    The preparation of both language-building activities and a variety of teacher/student interaction patterns increases both oral language participation and content learning in a course of manual therapy with mixed-language-ability students. In this article, the researchers describe their collaboration in a content-based course in English with English…

  7. Internationally Adopted Children in the Early School Years: Relative Strengths and Weaknesses in Language Abilities

    Science.gov (United States)

    Glennen, Sharon

    2015-01-01

    Purpose: This study aimed to determine the relative strengths and weaknesses in language and verbal short-term memory abilities of school-age children who were adopted from Eastern Europe. Method: Children adopted between 1;0 and 4;11 (years;months) of age were assessed with the Clinical Evaluation of Language Fundamentals-Preschool, Second…

  8. Community health center provider and staff's Spanish language ability and cultural awareness.

    Science.gov (United States)

    Baig, Arshiya A; Benitez, Amanda; Locklin, Cara A; Campbell, Amanda; Schaefer, Cynthia T; Heuer, Loretta J; Lee, Sang Mee; Solomon, Marla C; Quinn, Michael T; Burnet, Deborah L; Chin, Marshall H

    2014-05-01

    Many community health center providers and staff care for Latinos with diabetes, but their Spanish language ability and awareness of Latino culture are unknown. We surveyed 512 Midwestern health center providers and staff who managed Latino patients with diabetes. Few respondents had high Spanish language (13%) or cultural awareness scores (22%). Of respondents who self-reported 76-100% of their patients were Latino, 48% had moderate/low Spanish language and 49% had moderate/low cultural competency scores. Among these respondents, 3% lacked access to interpreters and 27% had neither received cultural competency training nor had access to training. Among all respondents, Spanish skills and Latino cultural awareness were low. Respondents who saw a significant number of Latinos had good access to interpretation services but not cultural competency training. Improved Spanish-language skills and increased access to cultural competency training and Latino cultural knowledge are needed to provide linguistically and culturally tailored care to Latino patients.

  9. The Impact of Age, Background Noise, Semantic Ambiguity, and Hearing Loss on Recognition Memory for Spoken Sentences

    Science.gov (United States)

    Koeritzer, Margaret A.; Rogers, Chad S.; Van Engen, Kristin J.; Peelle, Jonathan E.

    2018-01-01

    Purpose: The goal of this study was to determine how background noise, linguistic properties of spoken sentences, and listener abilities (hearing sensitivity and verbal working memory) affect cognitive demand during auditory sentence comprehension. Method: We tested 30 young adults and 30 older adults. Participants heard lists of sentences in…

  10. A study of syllable codas in South African Sign Language

    African Journals Online (AJOL)

    Kate H

    A South African Sign Language Dictionary for Families with Young Deaf Children (SLED 2006) was used with permission ... Figure 1: Syllable structure of a CVC syllable in the word “bed”. In spoken languages .... often than not, there is a societal emphasis on 'fixing' a child's deafness and attempting to teach deaf children to ...

  11. Intonational Division of a Speech Flow in the Kazakh Language

    Science.gov (United States)

    Bazarbayeva, Zeynep M.; Zhalalova, Akshay M.; Ormakhanova, Yenlik N.; Ospangaziyeva, Nazgul B.; Karbozova, Bulbul D.

    2016-01-01

    The purpose of this research is to analyze the speech intonation of the French, Kazakh, English and Russian languages. The study considers the functions of intonation components (melodics, duration, and intensity) in poetry and spoken language. It is found that a set of prosodic means is used in order to convey the intonational specifics of sounding…

  12. Grammar of Kove: An Austronesian Language of the West New Britain Province, Papua New Guinea

    Science.gov (United States)

    Sato, Hiroko

    2013-01-01

    This dissertation is a descriptive grammar of Kove, an Austronesian language spoken in the West New Britain Province of Papua New Guinea. Kove is primarily spoken in 18 villages, including some on the small islands north of New Britain. There are about 9,000 people living in the area, but many are not fluent speakers of Kove. The dissertation…

  13. African languages — is the writing on the screen? | Bosch | Southern ...

    African Journals Online (AJOL)

    The trends emerging in the natural language processing (NLP) of African languages spoken in South Africa, are explored in order to determine whether research in and development of such NLP is keeping abreast of international developments. This is done by investigating the past, present and future of NLP of African ...

  14. Singing abilities in children with Specific Language Impairment (SLI

    Directory of Open Access Journals (Sweden)

    Sylvain Clément

    2015-04-01

    Full Text Available Specific Language Impairment (SLI) is a heritable neurodevelopmental disorder diagnosed when a child has difficulties learning to produce and/or understand speech for no apparent reason (Bishop et al., 2012). The verbal difficulties of children with SLI have been largely documented, and a growing number of studies suggest that these children may also have difficulties in processing non-verbal complex auditory stimuli (Brandt et al., 2012; Corriveau et al., 2007). In a recent study, we reported that a large proportion of children with SLI present deficits in music perception (Planchou et al., submitted). Little is known, however, about the singing abilities of children with SLI. In order to investigate whether or not the impairments in expressive language extend to the musical domain, we assessed singing abilities in 8 children with SLI and 15 children with Typical Language Development (TLD) matched for age and non-verbal intelligence. To this aim, we designed a game-like activity consisting of two singing tasks: a pitch-matching task and a melodic reproduction task. In the pitch-matching task, the children were asked to sing single notes. In the melodic reproduction task, children were asked to sing short melodies that were either familiar (FAM-SONG and FAM-TUNE conditions) or unfamiliar (UNFAM-TUNE condition). The analysis showed that children with SLI were impaired in the pitch-matching task, with a mean pitch error of 250 cents (mean pitch error for children with TLD: 154 cents). In the melodic reproduction task, we asked 30 healthy adults to rate the quality of the children's sung productions on a continuous rating scale. The results revealed that the singing of children with SLI received lower mean ratings than that of children with TLD. Our findings thus indicate that children with SLI show impairments in musical production; these are discussed in light of a general auditory-motor dysfunction in children with SLI.
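The pitch errors above are reported in cents, the standard logarithmic unit for pitch deviation (100 cents = one equal-tempered semitone, 1200 cents = one octave). A minimal sketch of the conversion, using illustrative frequencies not drawn from the study:

```python
import math

def pitch_error_cents(f_sung, f_target):
    """Absolute pitch error in cents between a sung and a target frequency (Hz).

    cents = 1200 * log2(f_sung / f_target); 100 cents is one semitone.
    """
    return abs(1200.0 * math.log2(f_sung / f_target))

# Illustrative values only: a sung note slightly sharp of an A4 (440 Hz) target.
error = pitch_error_cents(452.9, 440.0)  # ~50 cents, i.e. half a semitone off
```

On this scale, the SLI group's mean error of 250 cents corresponds to singing two and a half semitones away from the target note on average.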

  15. Morphological features of the neonatal brain support development of subsequent cognitive, language, and motor abilities.

    Science.gov (United States)

    Spann, Marisa N; Bansal, Ravi; Rosen, Tove S; Peterson, Bradley S

    2014-09-01

    Knowledge of the role of brain maturation in the development of cognitive abilities derives primarily from studies of school-age children to adults. Little is known about the morphological features of the neonatal brain that support the subsequent development of abilities in early childhood, when maturation of the brain and these abilities are the most dynamic. The goal of our study was to determine whether brain morphology during the neonatal period supports early cognitive development through 2 years of age. We correlated morphological features of the cerebral surface assessed using deformation-based measures (surface distances) of high-resolution MRI scans for 33 healthy neonates, scanned between the first and sixth weeks of postmenstrual life, with subsequent measures of their motor, language, and cognitive abilities at ages 6, 12, 18, and 24 months. We found that morphological features of the cerebral surface of the frontal, mesial prefrontal, temporal, and occipital regions correlated with subsequent motor scores, posterior parietal regions correlated with subsequent language scores, and temporal and occipital regions correlated with subsequent cognitive scores. Measures of the anterior and middle portions of the cingulate gyrus correlated with scores across all three domains of ability. Most of the significant findings were inverse correlations located bilaterally in the brain. The inverse correlations may suggest that either a more protracted morphological maturation or smaller local volumes of neonatal brain tissue support better performance on measures of subsequent motor, language, and cognitive abilities throughout the first 2 years of postnatal life. The correlations of morphological measures of the cingulate with measures of performance across all domains of ability suggest that the cingulate supports a broad range of skills in infancy and early childhood, similar to its functions in older children and adults.
Copyright © 2014 Wiley Periodicals, Inc.

  16. Bikol Dictionary. PALI Language Texts: Philippines.

    Science.gov (United States)

    Mintz, Malcolm W.

    The Bikol language of the Philippines, spoken in the southernmost peninsula of Luzon Island and extending into the island provinces of Catanduanes and Masbate, is presented in this bilingual dictionary. An introduction explains the Bikol alphabet, orthographic representation (including policies adopted in writing Spanish and English loan words),…

  17. The Peculiarities of the Adverbs Functioning of the Dialect Spoken in the v. Shevchenkove, Kiliya district, Odessa Region

    Directory of Open Access Journals (Sweden)

    Maryna Delyusto

    2013-08-01

    Full Text Available The article gives new evidence about the adverb as a part of the grammatical system of the Ukrainian steppe dialect spread in the area between the Danube and the Dniester rivers. The author proves that the grammatical system of the dialect spoken in the v. Shevchenkove, Kiliya district, Odessa region is determined by the historical development of the Ukrainian language rather than the influence of neighboring dialects.

  18. Phonological awareness: explicit instruction for young deaf and hard-of-hearing children.

    Science.gov (United States)

    Miller, Elizabeth M; Lederberg, Amy R; Easterbrooks, Susan R

    2013-04-01

    The goal of this study was to explore the development of spoken phonological awareness for deaf and hard-of-hearing children (DHH) with functional hearing (i.e., the ability to access spoken language through hearing). Teachers explicitly taught five preschoolers the phonological awareness skills of syllable segmentation, initial phoneme isolation, and rhyme discrimination in the context of a multifaceted emergent literacy intervention. Instruction occurred in settings where teachers used simultaneous communication or spoken language only. A multiple-baseline across skills design documented a functional relation between instruction and skill acquisition for those children who did not have the skills at baseline with one exception; one child did not meet criteria for syllable segmentation. These results were confirmed by changes on phonological awareness tests that were administered at the beginning and end of the school year. We found that DHH children who varied in primary communication mode, chronological age, and language ability all benefited from explicit instruction in phonological awareness.

  19. Language-driven anticipatory eye movements in virtual reality.

    Science.gov (United States)

    Eichert, Nicole; Peeters, David; Hagoort, Peter

    2018-06-01

    Predictive language processing is often studied by measuring eye movements as participants look at objects on a computer screen while they listen to spoken sentences. This variant of the visual-world paradigm has revealed that information encountered by a listener at a spoken verb can give rise to anticipatory eye movements to a target object, which is taken to indicate that people predict upcoming words. The ecological validity of such findings remains questionable, however, because these computer experiments used two-dimensional stimuli that were mere abstractions of real-world objects. Here we present a visual-world paradigm study in a three-dimensional (3-D) immersive virtual reality environment. Despite significant changes in the stimulus materials and the different mode of stimulus presentation, language-mediated anticipatory eye movements were still observed. These findings thus indicate that people do predict upcoming words during language comprehension in a more naturalistic setting where natural depth cues are preserved. Moreover, the results confirm the feasibility of using eyetracking in rich and multimodal 3-D virtual environments.

  20. Auditory Perception and Word Recognition in Cantonese-Chinese Speaking Children with and without Specific Language Impairment

    Science.gov (United States)

    Kidd, Joanna C.; Shum, Kathy K.; Wong, Anita M.-Y.; Ho, Connie S.-H.

    2017-01-01

    Auditory processing and spoken word recognition difficulties have been observed in Specific Language Impairment (SLI), raising the possibility that auditory perceptual deficits disrupt word recognition and, in turn, phonological processing and oral language. In this study, fifty-seven kindergarten children with SLI and fifty-three language-typical…

  1. Aphasia, an acquired language disorder

    African Journals Online (AJOL)

    2009-10-11

    Oct 11, 2009 ... In this article we will review the types of aphasia, an approach to its diagnosis, aphasia subtypes, rehabilitation and prognosis. ... language processing in both the written and spoken forms.6 ... The angular gyrus (Brodman area 39) is located at the .... of his or her quality of life, emotional state, sense of well-.

  2. From language to society: An analysis of interpreting quality and the ...

    African Journals Online (AJOL)

    Since Zimbabwe was a British colony, colonial policies ensured the entrenchment of English as the language of sports, education, records and law. English is spoken mainly as a second or even third language by the majority of Zimbabweans. Even for those who speak English fluently, or with near fluency, the technical ...

  3. Syntactic and Story Structure Complexity in the Narratives of High- and Low-Language Ability Children with Autism Spectrum Disorder.

    Science.gov (United States)

    Peristeri, Eleni; Andreou, Maria; Tsimpli, Ianthi M

    2017-01-01

    Although language impairment is commonly associated with the autism spectrum disorder (ASD), the Diagnostic Statistical Manual no longer includes language impairment as a necessary component of an ASD diagnosis (American Psychiatric Association, 2013). However, children with ASD and no comorbid intellectual disability struggle with some aspects of language whose precise nature is still outstanding. Narratives have been extensively used as a tool to examine lexical and syntactic abilities, as well as pragmatic skills in children with ASD. This study contributes to this literature by investigating the narrative skills of 30 Greek-speaking children with ASD and normal non-verbal IQ, 16 with language skills in the upper end of the normal range (ASD-HL), and 14 in the lower end of the normal range (ASD-LL). The control group consisted of 15 age-matched typically-developing (TD) children. Narrative performance was measured in terms of both microstructural and macrostructural properties. Microstructural properties included lexical and syntactic measures of complexity such as subordinate vs. coordinate clauses and types of subordinate clauses. Macrostructure was measured in terms of the diversity in the use of internal state terms (ISTs) and story structure complexity, i.e., children's ability to produce important units of information that involve the setting, characters, events, and outcomes of the story, as well as the characters' thoughts and feelings. The findings demonstrate that high language ability and syntactic complexity pattern together in ASD children's narrative performance and that language ability compensates for autistic children's pragmatic deficit associated with the production of Theory of Mind-related ISTs. Nevertheless, both groups of children with ASD (high and low language ability) scored lower than the TD controls in the production of Theory of Mind-unrelated ISTs, modifier clauses and story structure complexity.

  6. Lexical diversity and omission errors as predictors of language ability in the narratives of sequential Spanish-English bilinguals: a cross-language comparison.

    Science.gov (United States)

    Jacobson, Peggy F; Walden, Patrick R

    2013-08-01

    This study explored the utility of language sample analysis for evaluating language ability in school-age Spanish-English sequential bilingual children. Specifically, the relative potential of lexical diversity and word/morpheme omission as predictors of typical or atypical language status was evaluated. Narrative samples were obtained from 48 bilingual children in both of their languages using the suggested narrative retell protocol and coding conventions as per Systematic Analysis of Language Transcripts (SALT; Miller & Iglesias, 2008) software. An additional lexical diversity measure, VocD, was also calculated. A series of hierarchical logistic regressions explored the utility of the number of different words, the VocD statistic, and word and morpheme omissions in each language for predicting language status. Omission errors turned out to be the best predictors of bilingual language impairment at all ages, and this held true across languages. Although lexical diversity measures did not predict typical or atypical language status, the measures were significantly related to oral language proficiency in English and Spanish. The results underscore the significance of omission errors in bilingual language impairment while simultaneously revealing the limitations of lexical diversity measures as indicators of impairment. The relationship between lexical diversity and oral language proficiency highlights the importance of considering relative language proficiency in bilingual assessment.
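A logistic regression of the kind described predicts a binary language status (typical vs. impaired) from narrative measures such as omission counts. The sketch below fits a single-predictor logistic model by gradient descent on synthetic data; the omission counts, group sizes, and effect direction are illustrative assumptions, not the study's dataset.

```python
import math
import random

def fit_logistic(xs, ys, lr=0.05, epochs=2000):
    """Fit y ~ sigmoid(w*x + b) by gradient descent on the log-loss.

    xs: feature values (here, omission errors per narrative sample),
    ys: 0 = typical language status, 1 = language impairment.
    """
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        gw = gb = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))
            gw += (p - y) * x
            gb += (p - y)
        w -= lr * gw / n
        b -= lr * gb / n
    return w, b

def predict_prob(w, b, x):
    """Predicted probability of impaired status for omission count x."""
    return 1.0 / (1.0 + math.exp(-(w * x + b)))

# Synthetic illustration: impaired children tend to omit more words/morphemes.
random.seed(1)
xs = [random.randint(0, 3) for _ in range(24)] + \
     [random.randint(5, 12) for _ in range(24)]
ys = [0] * 24 + [1] * 24

w, b = fit_logistic(xs, ys)
p_low = predict_prob(w, b, 1)    # few omissions -> low predicted probability
p_high = predict_prob(w, b, 10)  # many omissions -> high predicted probability
```

A hierarchical version, as in the study, would add predictors (e.g. lexical diversity) in stages and compare model fit at each step.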

  7. Phase transition in a sexual age-structured model of learning foreign languages

    OpenAIRE

    Schwammle, Veit

    2005-01-01

    The understanding of language competition helps us to predict extinction and survival of languages spoken by minorities. A simple agent-based model of a sexual population, based on the Penna model, is built in order to find out under which circumstances one language dominates the others. This model considers that only young people learn foreign languages. The simulations show a first-order phase transition where the ratio between the numbers of speakers of different languages is the order para...

  8. CROSSROADS BETWEEN EDUCATION POLICIES AND INDIGENOUS LANGUAGES MAINTENANCE IN ARGENTINA

    Directory of Open Access Journals (Sweden)

    Ana Carolina Hecht

    2010-06-01

    Full Text Available Processes of language shift have been explained by many researchers from linguistic and anthropological perspectives. This area focuses on the correlations between social processes and changes in the patterns of use of a language. This article aims to address these issues. In particular, we analyze the links between educational-linguistic policy and the maintenance of the languages spoken in Argentina. In doing so, we explore this field taking into account the linguistic and educational policies implemented concerning indigenous languages in Argentina.

  9. Key cognitive preconditions for the evolution of language.

    Science.gov (United States)

    Donald, Merlin

    2017-02-01

    Languages are socially constructed systems of expression, generated interactively in social networks, which can be assimilated by the individual brain as it develops. Languages co-evolved with culture, reflecting the changing complexity of human culture as it acquired the properties of a distributed cognitive system. Two key preconditions set the stage for the evolution of such cultures: a very general ability to rehearse and refine skills (evident early in hominin evolution in toolmaking), and the emergence of material culture as an external (to the brain) memory record that could retain and accumulate knowledge across generations. The ability to practice and rehearse skill provided immediate survival-related benefits in that it expanded the physical powers of early hominins, but the same adaptation also provided the imaginative substrate for a system of "mimetic" expression, such as found in ritual and pantomime, and in proto-words, which performed an expressive function somewhat like the home signs of deaf non-signers. The hominid brain continued to adapt to the increasing importance and complexity of culture as human interactions with material culture became more complex; above all, this entailed a gradual expansion in the integrative systems of the brain, especially those involved in the metacognitive supervision of self-performances. This supported a style of embodied mimetic imagination that improved the coordination of shared activities such as fire tending, but also in rituals and reciprocal mimetic games. The time-depth of this mimetic adaptation, and its role in both the construction and acquisition of languages, explains the importance of mimetic expression in the media, religion, and politics. Spoken language evolved out of voco-mimesis, and emerged long after the more basic abilities needed to refine skill and share intentions, probably coinciding with the common ancestor of sapient humans. Self-monitoring and self-supervised practice were necessary…

  10. Instructional Benefits of Spoken Words: A Review of Cognitive Load Factors

    Science.gov (United States)

    Kalyuga, Slava

    2012-01-01

    Spoken words have always been an important component of traditional instruction. With the development of modern educational technology tools, spoken text more often replaces or supplements written or on-screen textual representations. However, there could be a cognitive load cost involved in this trend, as spoken words can have both benefits and…

  11. The Influence of Teacher Power on English Language Learners' Self-Perceptions of Learner Empowerment

    Science.gov (United States)

    Diaz, Abel; Cochran, Kathryn; Karlin, Nancy

    2016-01-01

    English language learners (ELL) are students with a primary language spoken other than English enrolled in U.S. educational settings. As ELL students take on the challenges of learning English and U.S. culture, they must also learn academic content. The expectation to succeed academically in a foreign culture and language, while learning to speak…

  12. When words fail us: insights into language processing from developmental and acquired disorders.

    Science.gov (United States)

    Bishop, Dorothy V M; Nation, Kate; Patterson, Karalyn

    2014-01-01

    Acquired disorders of language represent loss of previously acquired skills, usually with relatively specific impairments. In children with developmental disorders of language, we may also see selective impairment in some skills; but in this case, the acquisition of language or literacy is affected from the outset. Because systems for processing spoken and written language change as they develop, we should beware of drawing too close a parallel between developmental and acquired disorders. Nevertheless, comparisons between the two may yield new insights. A key feature of connectionist models simulating acquired disorders is the interaction of components of language processing with each other and with other cognitive domains. This kind of model might help make sense of patterns of comorbidity in developmental disorders. Meanwhile, the study of developmental disorders emphasizes learning and change in underlying representations, allowing us to study how heterogeneity in cognitive profile may relate not just to neurobiology but also to experience. Children with persistent language difficulties pose challenges both to our efforts at intervention and to theories of learning of written and spoken language. Future attention to learning in individuals with developmental and acquired disorders could be of both theoretical and applied value.

  13. Perceptual Training of Second-Language Vowels: Does Musical Ability Play a Role?

    Science.gov (United States)

    Ghaffarvand Mokari, Payam; Werner, Stefan

    2018-01-01

    The present study attempts to extend the research on the effects of phonetic training on the production and perception of second-language (L2) vowels. We also examined whether success in learning L2 vowels through high-variability intensive phonetic training is related to the learners' general musical abilities. Forty Azerbaijani learners of…

  14. METONYMY BASED ON CULTURAL BACKGROUND KNOWLEDGE AND PRAGMATIC INFERENCING: EVIDENCE FROM SPOKEN DISCOURSE

    Directory of Open Access Journals (Sweden)

    Arijana Krišković

    2009-01-01

    The characterization of metonymy as a conceptual tool for guiding inferencing in language has opened a new field of study in cognitive linguistics and pragmatics. To appreciate the value of metonymy for pragmatic inferencing, metonymy should not be viewed as performing only its prototypical referential function. Metonymic mappings are operative in speech acts at the level of reference, predication, proposition, and illocution. The aim of this paper is to study the role of metonymy in pragmatic inferencing in spoken discourse in television interviews. Case analyses of authentic utterances classified as illocutionary metonymies, following the pragmatic typology of metonymic functions, are presented. The inferencing processes are facilitated by metonymic connections existing between domains or subdomains in the same functional domain. It has been widely accepted by cognitive linguists that universal human knowledge and embodiment are essential for the interpretation of metonymy. This analysis points to the role of cultural background knowledge in understanding target meanings. All these aspects of metonymic connections are exploited in complex inferential processes in spoken discourse. In most cases, metaphoric mappings are also a part of utterance interpretation.

  15. Word order in Russian Sign Language

    NARCIS (Netherlands)

    Kimmelman, V.

    2012-01-01

    The article discusses word order, the syntactic arrangement of words in a sentence, clause, or phrase, as one of the most crucial aspects of the grammar of any spoken language. It aims to investigate the order of the primary constituents, which can be either subject, object, or verb, of a simple…

  16. Language Control Abilities of Late Bilinguals

    Science.gov (United States)

    Festman, Julia

    2012-01-01

    Although all bilinguals encounter cross-language interference (CLI), some bilinguals are more susceptible to interference than others. Here, we report on language performance of late bilinguals (Russian/German) on two bilingual tasks (interview, verbal fluency), their language use and switching habits. The only between-group difference was CLI:…

  17. Real-Time Processing of ASL Signs: Delayed First Language Acquisition Affects Organization of the Mental Lexicon

    Science.gov (United States)

    Lieberman, Amy M.; Borovsky, Arielle; Hatrak, Marla; Mayberry, Rachel I.

    2015-01-01

    Sign language comprehension requires visual attention to the linguistic signal and visual attention to referents in the surrounding world, whereas these processes are divided between the auditory and visual modalities for spoken language comprehension. Additionally, the age-onset of first language acquisition and the quality and quantity of…

  18. Web-based mini-games for language learning that support spoken interaction

    CSIR Research Space (South Africa)

    Strik, H

    2015-09-01

    The European ‘Lifelong Learning Programme’ (LLP) project ‘Games Online for Basic Language learning’ (GOBL) aimed to provide youths and adults wishing to improve their basic language skills access to materials for the development of communicative...

  19. Attention to spoken word planning: Chronometric and neuroimaging evidence

    NARCIS (Netherlands)

    Roelofs, A.P.A.

    2008-01-01

    This article reviews chronometric and neuroimaging evidence on attention to spoken word planning, using the WEAVER++ model as theoretical framework. First, chronometric studies on the time to initiate vocal responding and gaze shifting suggest that spoken word planning may require some attention…

  20. State-of-the-Art in the Development of the Lokono Language

    Science.gov (United States)

    Rybka, Konrad

    2015-01-01

    Lokono is a critically endangered Northern Arawakan language spoken in the pericoastal areas of the Guianas (Guyana, Suriname, French Guiana). Today, in every Lokono village there remains only a small number of elderly native speakers. However, in spite of the ongoing language loss, across the three Guianas as well as in the Netherlands, where a…

  1. Development of a spoken language identification system for South African languages

    CSIR Research Space (South Africa)

    Peché, M

    2009-12-01

    ... and complicates the design of the system as a whole. Current benchmark results are established by the National Institute of Standards and Technology (NIST) Language Recognition Evaluation (LRE) [12]. Initially started in 1996, the next evaluation was in 2003...

  2. Phonological reduplication in sign language: rules rule

    Directory of Open Access Journals (Sweden)

    Iris Berent

    2014-06-01

    Productivity—the hallmark of linguistic competence—is typically attributed to algebraic rules that support broad generalizations. Past research on spoken language has documented such generalizations in both adults and infants. But whether algebraic rules form part of the linguistic competence of signers remains unknown. To address this question, here we gauge the generalization afforded by American Sign Language (ASL). As a case study, we examine reduplication (X→XX), a rule that, inter alia, generates ASL nouns from verbs. If signers encode this rule, then they should freely extend it to novel syllables, including ones with features that are unattested in ASL. And since reduplicated disyllables are preferred in ASL, such a rule should favor novel reduplicated signs. Novel reduplicated signs should thus be preferred to nonreduplicative controls (in rating), and consequently, such stimuli should also be harder to classify as nonsigns (in the lexical decision task). The results of four experiments support this prediction. These findings suggest that the phonological knowledge of signers includes powerful algebraic rules. The convergence between these conclusions and previous evidence for phonological rules in spoken language suggests that the architecture of the phonological mind is partly amodal.
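    The X→XX rule above is variable-based, which is what lets it generalize to syllables never seen before. As a hedged illustration (the string tokens and helper names are invented for this sketch; real ASL syllables are bundles of handshape, location, and movement features, not strings), the rule can be modeled as a function over abstract syllable tokens:

```python
def reduplicate(syllable):
    """Apply the algebraic rule X -> XX to any syllable token.

    Because the rule copies the variable X itself, it extends freely
    to novel syllables, including ones with unattested features.
    """
    return [syllable, syllable]

def is_reduplicated(form):
    """Check whether a disyllabic form is an instance of X -> XX."""
    return len(form) == 2 and form[0] == form[1]

# A novel, never-seen token still falls under the rule:
form = reduplicate("blik")
print(form)                   # ['blik', 'blik']
print(is_reduplicated(form))  # True
```

    The point of the sketch is only the algebraic structure: the rule is stated over a variable, not over a memorized list of attested forms.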

  3. language choice, code-switching and code- mixing in biase

    African Journals Online (AJOL)

    Ada

    Finance and Economic Planning, Cross River and Akwa ... See Table 1. Table 1: Indigenous Languages Spoken in Biase ... used in education, in business, in religion, in the media ... far back as the seventeenth (17th) century (King, 1844).

  4. Brain-to-text: Decoding spoken phrases from phone representations in the brain

    Directory of Open Access Journals (Sweden)

    Christian Herff

    2015-06-01

    It has long been speculated whether communication between humans and machines based on natural speech-related cortical activity is possible. Over the past decade, studies have suggested that it is feasible to recognize isolated aspects of speech from neural signals, such as auditory features, phones, or one of a few isolated words. However, until now it remained an unsolved challenge to decode continuously spoken speech from the neural substrate associated with speech and language processing. Here, we show for the first time that continuously spoken speech can be decoded into the expressed words from intracranial electrocorticographic (ECoG) recordings. Specifically, we implemented a system, which we call Brain-To-Text, that models single phones, employs techniques from automatic speech recognition (ASR), and thereby transforms brain activity while speaking into the corresponding textual representation. Our results demonstrate that our system achieved word error rates as low as 25% and phone error rates below 50%. Additionally, our approach contributes to the current understanding of the neural basis of continuous speech production by identifying those cortical regions that hold substantial information about individual phones. In conclusion, the Brain-To-Text system described in this paper represents an important step towards human-machine communication based on imagined speech.
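    The word error rate reported in the abstract is the standard ASR metric: the word-level edit distance (substitutions + insertions + deletions) between hypothesis and reference, divided by the reference length. A minimal sketch of that metric, not of the authors' Brain-To-Text implementation:

```python
def word_error_rate(reference, hypothesis):
    """WER = (substitutions + insertions + deletions) / reference length,
    computed by Levenshtein distance over word sequences."""
    r, h = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between r[:i] and h[:j]
    dp = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        dp[i][0] = i
    for j in range(len(h) + 1):
        dp[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            cost = 0 if r[i - 1] == h[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # substitution/match
    return dp[len(r)][len(h)] / len(r)

# One substituted word out of four gives WER = 0.25, i.e. the 25% figure.
print(word_error_rate("the brain decodes speech", "the brain decoded speech"))  # 0.25
```

    Note that WER can exceed 1.0 when the hypothesis contains many insertions, which is why "as low as 25%" is a strong result for decoding from neural signals.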

  5. Early Social, Imitation, Play, and Language Abilities of Young Non-Autistic Siblings of Children with Autism

    OpenAIRE

    Toth, Karen; Dawson, Geraldine; Meltzoff, Andrew N.; Greenson, Jessica; Fein, Deborah

    2007-01-01

    Studies are needed to better understand the broad autism phenotype in young siblings of children with autism. Cognitive, adaptive, social, imitation, play, and language abilities were examined in 42 non-autistic siblings and 20 toddlers with no family history of autism, ages 18–27 months. Siblings, as a group, were below average in expressive language and composite IQ, had lower mean receptive language, adaptive behavior, and social communication skills, and used fewer words, distal gestures,...

  6. Theory of Mind and Language in Children with Cochlear Implants

    Science.gov (United States)

    Remmel, Ethan; Peters, Kimberly

    2009-01-01

    Thirty children with cochlear implants (CI children), age range 3-12 years, and 30 children with normal hearing (NH children), age range 4-6 years, were tested on theory of mind and language measures. The CI children showed little to no delay on either theory of mind, relative to the NH children, or spoken language, relative to hearing norms. The…

  7. Language-mediated visual orienting behavior in low and high literates

    Directory of Open Access Journals (Sweden)

    Falk Huettig

    2011-10-01

    The influence of formal literacy on spoken language-mediated visual orienting was investigated by using a simple look-and-listen task (cf. Huettig & Altmann, 2005) which resembles everyday behavior. In Experiment 1, high and low literates listened to spoken sentences containing a target word (e.g., 'magar', crocodile) while at the same time looking at a visual display of four objects (a phonological competitor of the target word, e.g., 'matar', peas; a semantic competitor, e.g., 'kachuwa', turtle; and two unrelated distractors). In Experiment 2 the semantic competitor was replaced with another unrelated distractor. Both groups of participants shifted their eye gaze to the semantic competitors (Experiment 1). In both experiments high literates shifted their eye gaze towards phonological competitors as soon as phonological information became available and moved their eyes away as soon as the acoustic information mismatched. Low literates, in contrast, only used phonological information when semantic matches between spoken word and visual referent were impossible (Experiment 2), but in contrast to high literates these phonologically mediated shifts in eye gaze were not closely time-locked to the speech input. We conclude that in high literates language-mediated shifts in overt attention are co-determined by the type of information in the visual environment, the timing of cascaded processing in the word- and object-recognition systems, and the temporal unfolding of the spoken language. Our findings indicate that low literates exhibit a similar cognitive behavior but instead of participating in a tug-of-war among multiple types of cognitive representations, word-object mapping is achieved primarily at the semantic level. If forced, for instance by a situation in which semantic matches are not present (Experiment 2), low literates may on occasion have to rely on phonological information but do so in a much less proficient manner than their highly literate…

  8. Developing a tagset and tagger for the African languages of South ...

    African Journals Online (AJOL)

    ... annotations in the form of linguistic tags. That is, the annotations are used to direct the searches to specific grammatical and lexical phenomena in a corpus. In this article, we propose a corpus-based approach and a tagset to be used on a corpus of spoken language for the African languages of South Africa.

  9. Relationship between the linguistic environments and early bilingual language development of hearing children in deaf-parented families.

    Science.gov (United States)

    Kanto, Laura; Huttunen, Kerttu; Laakso, Marja-Leena

    2013-04-01

    We explored variation in the linguistic environments of hearing children of Deaf parents and how it was associated with their early bilingual language development. For that purpose we followed up the children's productive vocabulary (measured with the MCDI; MacArthur Communicative Development Inventory) and syntactic complexity (measured with the MLU10; mean length of the 10 longest utterances the child produced during videorecorded play sessions) in both Finnish Sign Language and spoken Finnish between the ages of 12 and 30 months. Additionally, we developed new methodology for describing the linguistic environments of the children (N = 10). Large variation was uncovered in both the amount and type of language input and language acquisition among the children. Language exposure and increases in productive vocabulary and syntactic complexity were interconnected. Language acquisition was found to be more dependent on the amount of exposure in sign language than in spoken language. This was judged to be related to the status of sign language as a minority language. The results are discussed in terms of parents' language choices, family dynamics in Deaf-parented families and optimal conditions for bilingual development.
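    The MLU10 measure defined in the abstract averages the lengths of the ten longest utterances in a sample. A rough sketch, with the caveat that it counts whitespace-separated words, whereas the original measure counts morphemes (for spoken Finnish) or signs (for Finnish Sign Language) in transcribed utterances:

```python
def mlu10(utterances):
    """Mean length of the 10 longest utterances in a transcript.

    Length here is whitespace-separated tokens; the clinical measure
    is computed over morphemes or signs in video-transcribed speech.
    """
    lengths = sorted((len(u.split()) for u in utterances), reverse=True)
    top = lengths[:10]  # if fewer than 10 utterances, average what exists
    return sum(top) / len(top)

# Twelve toy utterances of 1..12 words: the ten longest have 3..12 words.
utterances = [" ".join(["word"] * n) for n in range(1, 13)]
print(mlu10(utterances))  # 7.5
```

    Restricting the mean to the longest utterances makes the measure track a child's maximal syntactic complexity rather than conversational habits such as one-word replies.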

  10. Resourcing speech-language pathologists to work with multilingual children.

    Science.gov (United States)

    McLeod, Sharynne

    2014-06-01

    Speech-language pathologists play important roles in supporting people to be competent communicators in the languages of their communities. However, with over 7000 languages spoken throughout the world and the majority of the global population being multilingual, there is often a mismatch between the languages spoken by children and families and their speech-language pathologists. This paper provides insights into service provision for multilingual children within an English-dominant country by viewing Australia's multilingual population as a microcosm of ethnolinguistic minorities. Recent population studies of Australian pre-school children show that their most common languages other than English are: Arabic, Cantonese, Vietnamese, Italian, Mandarin, Spanish, and Greek. Although 20.2% of services by Speech Pathology Australia members are offered in languages other than English, there is a mismatch between the language of the services and the languages of children within similar geographical communities. Australian speech-language pathologists typically use informal or English-based assessments and intervention tools with multilingual children. Thus, there is a need for accessible, culturally and linguistically appropriate resources for working with multilingual children. Recent international collaborations have resulted in practical strategies to support speech-language pathologists during assessment, intervention, and collaboration with families, communities, and other professionals. The International Expert Panel on Multilingual Children's Speech was assembled to prepare a position paper to address issues faced by speech-language pathologists when working with multilingual populations. The Multilingual Children's Speech website (http://www.csu.edu.au/research/multilingual-speech) addresses one of the aims of the position paper by providing free resources and information for speech-language pathologists about more than 45 languages. These international…

  11. A geographical analysis of speech-language pathology services to support multilingual children.

    Science.gov (United States)

    Verdon, Sarah; McLeod, Sharynne; McDonald, Simon

    2014-06-01

    The speech-language pathology workforce strives to provide equitable, quality services to multilingual people. However, the extent to which this is being achieved is unknown. Participants in this study were 2849 members of Speech Pathology Australia and 4386 children in the Birth cohort of the Longitudinal Study of Australian Children (LSAC). Statistical and geospatial analyses were undertaken to identify the linguistic diversity and geographical distribution of Australian speech-language pathology services and Australian children. One fifth of services offered by Speech Pathology Australia members (20.2%) were available in a language other than English. Services were most commonly offered in Australian Sign Language (Auslan) (4.3%), French (3.1%), Italian (2.2%), Greek (1.6%), and Cantonese (1.5%). Among 4-5-year-old children in the nationally representative LSAC, 15.3% regularly spoke and/or understood a language other than English. The most common languages spoken by the children were Arabic (1.5%), Italian (1.2%), Greek (0.9%), Spanish (0.9%), and Vietnamese (0.9%). There was a mismatch between the location of and languages in which multilingual services were offered, and the location of and languages spoken by children. These findings highlight the need for SLPs to be culturally competent in providing equitable services to all clients, regardless of the languages they speak.

  12. Assessment of Dyslexia in the Urdu Language

    NARCIS (Netherlands)

    Haidry, Sana

    2017-01-01

    Urdu is spoken by more than 500 million people around the world but still is an under-researched language. The studies presented in this thesis focus on typical and poor literacy development in Urdu-speaking children during early reading acquisition. In the first study, we developed and validated a

  13. The socially-weighted encoding of spoken words: A dual-route approach to speech perception

    Directory of Open Access Journals (Sweden)

    Meghan Sumner

    2014-01-01

    Spoken words are highly variable. A single word may never be uttered the same way twice. As listeners, we regularly encounter speakers of different ages, genders, and accents, increasing the amount of variation we face. How listeners understand spoken words as quickly and adeptly as they do despite this variation remains an issue central to linguistic theory. We propose that learned acoustic patterns are mapped simultaneously to linguistic representations and to social representations. In doing so, we illuminate a paradox that results in the literature from, we argue, the focus on representations and the peripheral treatment of word-level phonetic variation. We consider phonetic variation more fully and highlight a growing body of work that is problematic for current theory: Words with different pronunciation variants are recognized equally well in immediate processing tasks, while an atypical, infrequent, but socially-idealized form is remembered better in the long-term. We suggest that the perception of spoken words is socially-weighted, resulting in sparse, but high-resolution clusters of socially-idealized episodes that are robust in immediate processing and are more strongly encoded, predicting memory inequality. Our proposal includes a dual-route approach to speech perception in which listeners map acoustic patterns in speech to linguistic and social representations in tandem. This approach makes novel predictions about the extraction of information from the speech signal, and provides a framework with which we can ask new questions. We propose that language comprehension, broadly, results from the integration of both linguistic and social information.

  14. The socially weighted encoding of spoken words: a dual-route approach to speech perception.

    Science.gov (United States)

    Sumner, Meghan; Kim, Seung Kyung; King, Ed; McGowan, Kevin B

    2013-01-01

    Spoken words are highly variable. A single word may never be uttered the same way twice. As listeners, we regularly encounter speakers of different ages, genders, and accents, increasing the amount of variation we face. How listeners understand spoken words as quickly and adeptly as they do despite this variation remains an issue central to linguistic theory. We propose that learned acoustic patterns are mapped simultaneously to linguistic representations and to social representations. In doing so, we illuminate a paradox that results in the literature from, we argue, the focus on representations and the peripheral treatment of word-level phonetic variation. We consider phonetic variation more fully and highlight a growing body of work that is problematic for current theory: words with different pronunciation variants are recognized equally well in immediate processing tasks, while an atypical, infrequent, but socially idealized form is remembered better in the long-term. We suggest that the perception of spoken words is socially weighted, resulting in sparse, but high-resolution clusters of socially idealized episodes that are robust in immediate processing and are more strongly encoded, predicting memory inequality. Our proposal includes a dual-route approach to speech perception in which listeners map acoustic patterns in speech to linguistic and social representations in tandem. This approach makes novel predictions about the extraction of information from the speech signal, and provides a framework with which we can ask new questions. We propose that language comprehension, broadly, results from the integration of both linguistic and social information.

  15. Community Health Center Provider and Staff’s Spanish Language Ability and Cultural Awareness

    Science.gov (United States)

    Baig, Arshiya A.; Benitez, Amanda; Locklin, Cara A.; Campbell, Amanda; Schaefer, Cynthia T.; Heuer, Loretta J.; Mee Lee, Sang; Solomon, Marla C.; Quinn, Michael T.; Burnet, Deborah L.; Chin, Marshall H.

    2014-01-01

    Many community health center providers and staff care for Latinos with diabetes, but their Spanish language ability and awareness of Latino culture are unknown. We surveyed 512 Midwestern health center providers and staff who managed Latino patients with diabetes. Few respondents had high Spanish language (13%) or cultural awareness scores (22%). Of respondents who self-reported 76–100% of their patients were Latino, 48% had moderate/low Spanish language and 49% had moderate/low cultural competency scores. Among these respondents, 3% lacked access to interpreters and 27% had neither received cultural competency training nor had access to training. Among all respondents, Spanish skills and Latino cultural awareness were low. Respondents who saw a significant number of Latinos had good access to interpretation services but not cultural competency training. Improved Spanish-language skills and increased access to cultural competency training and Latino cultural knowledge are needed to provide linguistically and culturally tailored care to Latino patients. PMID:24858866

  16. The contribution of phonological knowledge, memory, and language background to reading comprehension in deaf populations

    Directory of Open Access Journals (Sweden)

    Elizabeth Ann Hirshorn

    2015-08-01

    While reading is challenging for many deaf individuals, some become proficient readers. Yet we do not know the component processes that support reading comprehension in these individuals. Speech-based phonological knowledge is one of the strongest predictors of reading comprehension in hearing individuals, yet its role in deaf readers is controversial. This could reflect the highly varied language backgrounds among deaf readers as well as the difficulty of disentangling the relative contribution of phonological versus orthographic knowledge of spoken language, in our case ‘English’, in this population. Here we assessed the impact of language experience on reading comprehension in deaf readers by recruiting oral deaf individuals, who use spoken English as their primary mode of communication, and deaf native signers of American Sign Language. First, to address the contribution of spoken English phonological knowledge in deaf readers, we present novel tasks that evaluate phonological versus orthographic knowledge. Second, the impact of this knowledge, as well as verbal short-term memory and long-term memory skills, on reading comprehension was evaluated. The best predictor of reading comprehension differed as a function of language experience, with long-term memory, as measured by free recall, being a better predictor in deaf native signers than in oral deaf. In contrast, the measures of English phonological knowledge, independent of orthographic knowledge, best predicted reading comprehension in oral deaf individuals. These results suggest successful reading strategies differ across deaf readers as a function of their language experience, and highlight a possible alternative route to literacy in deaf native signers.

  17. The Effects of Captions on EFL Learners' Comprehension of English-Language Television Programs

    Science.gov (United States)

    Rodgers, Michael P. H.; Webb, Stuart

    2017-01-01

    The Multimedia Principle (Fletcher & Tobias, 2005) states that people learn better and comprehend more when words and pictures are presented together. The potential for English language learners to increase their comprehension of video through the use of captions, which graphically display the same language as the spoken dialogue, has been…

  18. Attainment of students’ conception in magnetic fields by using of direct observation and symbolic language ability

    Science.gov (United States)

    Desy Fatmaryanti, Siska; Suparmi; Sarwanto; Ashadi

    2017-11-01

    This study focuses on describing the attainment of students’ conceptions of the magnetic field, based on their use of direct observation and symbolic language ability. The method used is descriptive quantitative research. The subjects were 86 students from 3 senior high schools at Purworejo. The learning process followed a guided inquiry model. During the learning, students were required to actively investigate the concept of the magnetic field around a straight current-carrying wire. Data were collected using a reasoned multiple-choice test and observation during the learning process. Four indicators of direct observation ability and four indicators of symbolic language ability were used to group categories of students’ conceptions. The average scores showed that, in terms of symbolic language, students’ conception of the magnitude of the magnetic field was better than their conception of its direction. From the observation, we found that students could draw magnetic field lines not from a textbook but from their own direct observation results, and they used various ways to achieve good accuracy in their observations. Explicit recommendations are presented in the discussion section at the end of this paper.

  19. Comparison of Word Intelligibility in Spoken and Sung Phrases

    Directory of Open Access Journals (Sweden)

    Lauren B. Collister

    2008-09-01

    Twenty listeners were exposed to spoken and sung passages in English produced by three trained vocalists. Passages included representative words extracted from a large database of vocal lyrics, including both popular and classical repertoires. Target words were set within spoken or sung carrier phrases. Sung carrier phrases were selected from classical vocal melodies. Roughly a quarter of all words sung by an unaccompanied soloist were misheard. Sung passages showed a seven-fold decrease in intelligibility compared with their spoken counterparts. The perceptual mistakes occurring with vowels replicate previous studies showing the centralization of vowels. Significant confusions are also evident for consonants, especially voiced stops and nasals.

  20. Youth Citizenship at the End of Primary School: The Role of Language Ability

    Science.gov (United States)

    Eidhof, Bram B. F.; ten Dam, Geert T. M.; Dijkstra, A. B.; van de Werfhorst, H. G.

    2017-01-01

    Schools are expected to fulfil different types of goals, including citizenship development. An important question is to what extent schools can simultaneously promote different learning outcomes. In this paper, we investigate the relationship between language ability and youth citizenship. Using a representative sample of 2429 grade 6 pupils (age…

  1. Exchange students' motivations and language learning success

    DEFF Research Database (Denmark)

    Caudery, Tim; Petersen, Margrethe; Shaw, Philip

    One point investigated in our research project on the linguistic experiences of exchange students in Denmark and Sweden is the reasons students have for coming on exchange. Traditionally, an important goal of student exchange was to acquire improved language skills, usually in the language spoken...... in the host country. To what extent is this true when students plan to study in English in a non-English-speaking country? Do they hope and expect to improve their English skills, their knowledge of the local language, both, or neither? To what extent are these expectations fulfilled? Results from the project...

  2. Parental mode of communication is essential for speech and language outcomes in cochlear implanted children

    DEFF Research Database (Denmark)

    Percy-Smith, Lone; Cayé-Thomasen, Per; Breinegaard, Nina

    2010-01-01

    The present study demonstrates a very strong effect of the parental communication mode on the auditory capabilities and speech/language outcome for cochlear implanted children. The children exposed to spoken language had higher odds of scoring high in all tests applied and the findings suggest...

  3. Attention demands of spoken word planning: A review

    NARCIS (Netherlands)

    Roelofs, A.P.A.; Piai, V.

    2011-01-01

    Attention and language are among the most intensively researched abilities in the cognitive neurosciences, but the relation between these abilities has largely been neglected. There is increasing evidence, however, that linguistic processes, such as those underlying the planning of words, cannot

  4. LANGUAGE PLANNING IN DIASPORA: THE CASE OF KURDISH KURMANJI DIALECT

    Directory of Open Access Journals (Sweden)

    Salih Akin

    2011-01-01

    In this paper, we study a particular case of language planning in diaspora through the activities of the Committee for Standardization of the Kurdish Kurmanji dialect, spoken by the majority of Kurds living in Turkey and in Syria, and by part of the Kurds living in Iran and in Iraq. Despite its sizeable speaker community, Kurmanji is not officially recognized, and public education is not provided in this dialect in the countries where it is spoken. The absence of official recognition and structural variation within Kurmanji led Kurdish intellectuals and researchers living in exile to form the Committee in 1987. Holding two meetings per year in a European city, the Committee tries to standardize and to revitalize the Kurmanji dialect without relying on government support. We examine the activities of the Committee in the light of research in the field of language policy and planning. The activities are assessed against three typologies of language planning: (1) Haugen's classical model of language planning (1991 [1983]); (2) Hornberger's integrative framework of language planning (1988); (3) Nahir's language planning goals (2000). Our contribution focuses on two aspects of the activities: corpus planning and dissemination of results in exile. We study the practices of vocabulary collection and neology in different scientific domains, as well as the influence of these activities on the development of Kurmanji.

  5. Language modeling for automatic speech recognition of inflective languages an applications-oriented approach using lexical data

    CERN Document Server

    Donaj, Gregor

    2017-01-01

    This book covers language modeling and automatic speech recognition for inflective languages (e.g. Slavic languages), which represent roughly half of the languages spoken in Europe. These languages do not perform as well as English in speech recognition systems and it is therefore harder to develop an application with sufficient quality for the end user. The authors describe the most important language features for the development of a speech recognition system. This is then presented through the analysis of errors in the system and the development of language models and their inclusion in speech recognition systems, which specifically address the errors that are relevant for targeted applications. The error analysis is done with regard to morphological characteristics of the word in the recognized sentences. The book is oriented towards speech recognition with large vocabularies and continuous and even spontaneous speech. Today such applications work with a rather small number of languages compared to the nu...

  6. Modality differences between written and spoken story retelling in healthy older adults

    Directory of Open Access Journals (Sweden)

    Jessica Ann Obermeyer

    2015-04-01

    Methods: Ten native English-speaking healthy elderly participants between the ages of 50 and 80 were recruited. Exclusionary criteria included neurological disease/injury, history of learning disability, uncorrected hearing or vision impairment, history of drug/alcohol abuse and presence of cognitive decline (based on the Cognitive Linguistic Quick Test). Spoken and written discourse was analyzed for microlinguistic measures including total words, percent correct information units (CIUs; Nicholas & Brookshire, 1993) and percent complete utterances (CUs; Edmonds et al., 2009). CIUs measure relevant and informative words, while CUs operate at the sentence level and measure whether a relevant subject, verb and object (if appropriate) are present. Results: Analysis was completed using the Wilcoxon rank sum test due to the small sample size. Preliminary results revealed that healthy elderly people produced significantly more words in spoken retellings than in written retellings (p=.000); however, this measure contrasted with %CIUs and %CUs, with participants producing significantly higher %CIUs (p=.000) and %CUs (p=.000) in written story retellings than in spoken story retellings. Conclusion: These findings indicate that written retellings, while shorter, were more accurate at both the word (CIU) and sentence (CU) level. This observation could be related to the ability to revise written text and therefore make it more concise, whereas the nature of speech results in more embellishment and "thinking out loud," such as comments about the task, associated observations about the story, etc. We plan to run more participants and conduct a main concepts analysis (before conference time) to gain more insight into modality differences and implications.
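
    The Wilcoxon rank sum (Mann-Whitney) test mentioned in the abstract above compares two small independent samples via their pooled ranks. As a hedged illustration (not the authors' code), the U statistic at the heart of that test can be sketched in a few lines of Python:

    ```python
    def rank_sum_u(sample_a, sample_b):
        """Mann-Whitney / Wilcoxon rank-sum U statistic for two
        independent samples; tied values receive averaged ranks."""
        combined = sorted(sample_a + sample_b)
        # assign each distinct value the mean of the 1-based ranks it spans
        ranks = {}
        i = 0
        while i < len(combined):
            j = i
            while j < len(combined) and combined[j] == combined[i]:
                j += 1
            ranks[combined[i]] = (i + 1 + j) / 2  # mean of ranks i+1 .. j
            i = j
        r_a = sum(ranks[v] for v in sample_a)  # rank sum of sample A
        n_a = len(sample_a)
        return r_a - n_a * (n_a + 1) / 2  # U statistic for sample A

    # complete separation in either direction gives the extreme values 0 and n_a*n_b
    print(rank_sum_u([1, 2, 3], [4, 5, 6]))  # → 0.0
    print(rank_sum_u([4, 5, 6], [1, 2, 3]))  # → 9.0
    ```

    In practice one would use a library routine (e.g. `scipy.stats.mannwhitneyu`), which also supplies the p-value; the sketch only shows where the statistic comes from.
    
    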

  7. Language and reading development in the brain today: neuromarkers and the case for prediction.

    Science.gov (United States)

    Buchweitz, Augusto

    2016-01-01

    The goal of this article is to provide an account of language development in the brain using the new information about brain function gleaned from cognitive neuroscience. This account goes beyond describing the association between language and specific brain areas to advocate the possibility of predicting language outcomes using brain-imaging data. The goal is to address the current evidence about language development in the brain and prediction of language outcomes. Recent studies will be discussed in the light of the evidence generated for predicting language outcomes and using new methods of analysis of brain data. The present account of brain behavior will address: (1) the development of a hardwired brain circuit for spoken language; (2) the neural adaptation that follows reading instruction and fosters the "grafting" of visual processing areas of the brain onto the hardwired circuit of spoken language; and (3) the prediction of language development and the possibility of translational neuroscience. Brain imaging has allowed for the identification of neural indices (neuromarkers) that reflect typical and atypical language development; the possibility of predicting risk for language disorders has emerged. A mandate to develop a bridge between neuroscience and health and cognition-related outcomes may pave the way for translational neuroscience. Copyright © 2016 Sociedade Brasileira de Pediatria. Published by Elsevier Editora Ltda. All rights reserved.

  8. Impact of family language and testing language on reading performance in a bilingual educational context.

    Science.gov (United States)

    Elosua Oliden, Paula; Mujika Lizaso, Josu

    2014-01-01

    When different languages co-exist in one area, or when one person speaks more than one language, the impact of language on psychological and educational assessment processes can be considerable. The aim of this work was to study the impact of testing language in a community with two official languages: Spanish and Basque. By taking the PISA 2009 Reading Comprehension Test as a basis for analysis, four linguistic groups were defined according to the language spoken at home and the test language. Psychometric equivalence between test forms and differences in results among the four language groups were analyzed. The comparison of competence means took into account the effects of the index of socioeconomic and cultural status (ISEC) and gender. One reading unit with differential item functioning was detected. The reading competence means were considerably higher in the monolingual Spanish-Spanish group. No differences were found between the language groups based on family language when the test was conducted in Basque. The study illustrates the importance of taking into account psychometric, linguistic and sociolinguistic factors in linguistically diverse assessment contexts.

  9. Enhancing Language Material Availability Using Computers.

    Science.gov (United States)

    Miyashita, Mizuki; Moll, Laura A.

    This paper describes the use of computer technology to produce an updated online Tohono O'odham dictionary. Spoken in southern Arizona and northern Mexico, Tohono O'odham (formerly Papago) and its close relative Akimel O'odham (Pima) had a total of about 25,000 speakers in 1988. Although the language is taught to school children through community…

  10. RAPPORT-BUILDING THROUGH CALL IN TEACHING CHINESE AS A FOREIGN LANGUAGE: AN EXPLORATORY STUDY

    Directory of Open Access Journals (Sweden)

    Wenying Jiang

    2005-05-01

    Technological advances have brought about the ever-increasing utilisation of computer-assisted language learning (CALL) media in the learning of a second language (L2). Computer-mediated communication, for example, provides a practical means for extending the learning of spoken language, a challenging process in tonal languages such as Chinese, beyond the realms of the classroom. In order to effectively improve spoken language competency, however, CALL applications must also reproduce the social interaction that lies at the heart of language learning and language use. This study draws on data obtained from the utilisation of CALL in the learning of L2 Chinese to explore whether this medium can be used to extend opportunities for rapport-building in language teaching beyond the face-to-face interaction of the classroom. Rapport's importance lies in its potential to enhance learning, motivate learners, and reduce learner anxiety. To date, CALL's potential in relation to this facet of social interaction remains a neglected area of research. The results of this exploratory study suggest that CALL may help foster learner-teacher rapport and that scaffolding, such as strategically composing rapport-fostering questions in sound-files, is conducive to this outcome. The study provides an instruction model for this application of CALL.

  11. Loops of Spoken Language in Danish Broadcasting Corporation News

    DEFF Research Database (Denmark)

    le Fevre Jakobsen, Bjarne

    2012-01-01

    The tempo of Danish television news broadcasts has changed markedly over the past 40 years, while the language has essentially always been conservative, and remains so today. The development in the tempo of the broadcasts has gone through a number of phases from a newsreader in a rigid structure...

  12. On language acquisition in speech and sign: development drives combinatorial structure in both modalities

    Directory of Open Access Journals (Sweden)

    Gary Morgan

    2014-11-01

    Languages are composed of a conventionalized system of parts which allows speakers and signers to compose an infinite number of form-meaning mappings through phonological and morphological combinations. This level of linguistic organization distinguishes language from other communicative acts such as gestures. In contrast to signs, gestures are made up of meaning units that are mostly holistic. Children exposed to signed and spoken languages from early in life develop grammatical structure at similar rates and in similar patterns. This is interesting, because signed languages are perceived and articulated in very different ways from their spoken counterparts, with many signs displaying surface resemblances to gestures. The acquisition of forms and meanings in child signers and talkers might thus have been a different process. Yet in one sense both groups face a similar problem: 'How do I make a language with combinatorial structure?' In this paper I argue that first language development itself enables this to happen, and by broadly similar mechanisms across modalities. Combinatorial structure is the outcome of phonological simplifications and of productivity in using verb morphology by children in sign and speech.

  13. Reliability of the Dutch-language version of the Communication Function Classification System and its association with language comprehension and method of communication.

    Science.gov (United States)

    Vander Zwart, Karlijn E; Geytenbeek, Joke J; de Kleijn, Maaike; Oostrom, Kim J; Gorter, Jan Willem; Hidecker, Mary Jo Cooley; Vermeulen, R Jeroen

    2016-02-01

    The aims of this study were to determine the intra- and interrater reliability of the Dutch-language version of the Communication Function Classification System (CFCS-NL) and to investigate the association between the CFCS level and (1) spoken language comprehension and (2) preferred method of communication in children with cerebral palsy (CP). Participants were 93 children with CP (50 males, 43 females; mean age 7y, SD 2y 6mo, range 2y 9mo-12y 10mo; unilateral spastic [n=22], bilateral spastic [n=51], dyskinetic [n=15], ataxic [n=3], not specified [n=2]; Gross Motor Function Classification System level I [n=16], II [n=14], III, [n=7], IV [n=24], V [n=31], unknown [n=1]), recruited from rehabilitation centres throughout the Netherlands. Because some centres only contributed to part of the study, different numbers of participants are presented for different aspects of the study. Parents and speech and language therapists (SLTs) classified the communication level using the CFCS. Kappa was used to determine the intra- and interrater reliability. Spearman's correlation coefficient was used to determine the association between CFCS level and spoken language comprehension, and Fisher's exact test was used to examine the association between the CFCS level and method of communication. Interrater reliability of the CFCS-NL between parents and SLTs was fair (r=0.54), between SLTs good (r=0.78), and the intrarater (SLT) reliability very good (r=0.85). The association between the CFCS and spoken language comprehension was strong for SLTs (r=0.63) and moderate for parents (r=0.51). There was a statistically significant difference between the CFCS level and the preferred method of communication of the child (pcommunication in children with CP. Preferably, professionals should classify the child's CFCS level in collaboration with the parents to acquire the most comprehensive information about the everyday communication of the child in various situations both with familiar and
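
    The reliability analysis described above rests on kappa, a chance-corrected agreement statistic for two raters (here, parents and speech and language therapists classifying CFCS levels). A minimal, stdlib-only sketch of Cohen's kappa, for illustration rather than as the study's own code, looks like this:

    ```python
    from collections import Counter

    def cohens_kappa(rater_a, rater_b):
        """Cohen's kappa: chance-corrected agreement between two raters
        who each assign one categorical label per subject."""
        assert len(rater_a) == len(rater_b) and rater_a
        n = len(rater_a)
        # observed proportion of subjects on which the raters agree
        p_observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
        # agreement expected by chance from each rater's label frequencies
        counts_a, counts_b = Counter(rater_a), Counter(rater_b)
        p_expected = sum(counts_a[label] * counts_b[label] for label in counts_a) / (n * n)
        return (p_observed - p_expected) / (1 - p_expected)

    # e.g. two raters classifying four children's communication levels (hypothetical data)
    print(cohens_kappa(["I", "I", "II", "II"], ["I", "I", "II", "I"]))  # → 0.5
    ```

    A kappa of 1 indicates perfect agreement and 0 indicates agreement no better than chance; the conventional fair/good/very good labels in the abstract correspond to bands of this value.
    
    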

  14. American or British? L2 Speakers' Recognition and Evaluations of Accent Features in English

    Science.gov (United States)

    Carrie, Erin; McKenzie, Robert M.

    2018-01-01

    Recent language attitude research has attended to the processes involved in identifying and evaluating spoken language varieties. This article investigates the ability of second-language learners of English in Spain (N = 71) to identify Received Pronunciation (RP) and General American (GenAm) speech and their perceptions of linguistic variation…

  15. Thoughts about Central Andean Formative Languages and Societies

    OpenAIRE

    Kaulicke, Peter

    2012-01-01

    This paper deals with the general problem of the Formative Period and presents a proposal for subdivision based upon characterizations of material cultures and their distributions as interaction spheres and traditions. These reflect significant changes that may be related to changes in the mechanisms of language dispersal. It hypothesizes that a pre-protomochica was spoken in northern Perú; that multilingualism prevailed at the Chavín site; and that different languages existed in the ...

  16. Language Impairments in the Development of Sign: Do They Reside in a Specific Modality or Are They Modality-Independent Deficits?

    Science.gov (United States)

    Woll, Bencie; Morgan, Gary

    2012-01-01

    Various theories of developmental language impairments have sought to explain these impairments in modality-specific ways--for example, that the language deficits in SLI or Down syndrome arise from impairments in auditory processing. Studies of signers with language impairments, especially those who are bilingual in a spoken language as well as a…

  17. Finding Relevant Data in a Sea of Languages

    Science.gov (United States)

    2016-04-26

    and information retrieval to automate language processing tasks so that the limited number of linguists available for analyzing text and spoken...HLT literature that a research team would do a study and stop short,” says Williams. “Each study was trying to solve a very specific problem. No

  18. Interaction and common ground in dementia: Communication across linguistic and cultural diversity in a residential dementia care setting.

    Science.gov (United States)

    Strandroos, Lisa; Antelius, Eleonor

    2017-09-01

    Previous research concerning bilingual people with dementia has mainly focused on the importance of sharing a spoken language with caregivers. While acknowledging this, this article addresses the multidimensional character of communication and interaction. As the dementia disease makes the use of spoken language difficult, this multidimensionality becomes particularly important. The article is based on a qualitative analysis of ethnographic fieldwork at a dementia care facility. It presents ethnographic examples of different communicative forms, with particular focus on bilingual interactions. Interaction is understood as a collective and collaborative activity. The analysis finds that a shared spoken language is advantageous, but is not the only source of, nor a guarantee for, creating common ground and understanding. Communicative resources other than spoken language include, for example, body language, embodiment, artefacts and time. Furthermore, forms of communication are not static but develop, change and are created over time. The ability to communicate is thus not something that one has or does not have, but is situationally and collaboratively created. To facilitate this, time and familiarity are central resources, and the results indicate the importance of continuity in interpersonal relations.

  19. From spoken narratives to domain knowledge: mining linguistic data for medical image understanding.

    Science.gov (United States)

    Guo, Xuan; Yu, Qi; Alm, Cecilia Ovesdotter; Calvelli, Cara; Pelz, Jeff B; Shi, Pengcheng; Haake, Anne R

    2014-10-01

    Extracting useful visual clues from medical images allowing accurate diagnoses requires physicians' domain knowledge acquired through years of systematic study and clinical training. This is especially true in the dermatology domain, a medical specialty that requires physicians to have image inspection experience. Automating or at least aiding such efforts requires understanding physicians' reasoning processes and their use of domain knowledge. Mining physicians' references to medical concepts in narratives during image-based diagnosis of a disease is an interesting research topic that can help reveal experts' reasoning processes. It can also be a useful resource to assist with design of information technologies for image use and for image case-based medical education systems. We collected data for analyzing physicians' diagnostic reasoning processes by conducting an experiment that recorded their spoken descriptions during inspection of dermatology images. In this paper we focus on the benefit of physicians' spoken descriptions and provide a general workflow for mining medical domain knowledge based on linguistic data from these narratives. The challenge of a medical image case can influence the accuracy of the diagnosis as well as how physicians pursue the diagnostic process. Accordingly, we define two lexical metrics for physicians' narratives--lexical consensus score and top N relatedness score--and evaluate their usefulness by assessing the diagnostic challenge levels of corresponding medical images. We also report on clustering medical images based on anchor concepts obtained from physicians' medical term usage. These analyses are based on physicians' spoken narratives that have been preprocessed by incorporating the Unified Medical Language System for detecting medical concepts. The image rankings based on lexical consensus score and on top 1 relatedness score are well correlated with those based on challenge levels (Spearman correlation>0.5 and Kendall
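
    The evaluation above correlates image rankings derived from the lexical metrics with rankings by challenge level, using rank correlations. As a hedged sketch (assumed toy data, not the authors' pipeline), Spearman's rho for two paired rankings without ties can be computed as:

    ```python
    def spearman_rho(xs, ys):
        """Spearman rank correlation for paired samples without ties:
        rho = 1 - 6 * sum(d^2) / (n * (n^2 - 1))."""
        def ranks(values):
            order = sorted(range(len(values)), key=lambda i: values[i])
            r = [0] * len(values)
            for rank, idx in enumerate(order, start=1):
                r[idx] = rank
            return r
        rx, ry = ranks(xs), ranks(ys)
        n = len(xs)
        d_squared = sum((a - b) ** 2 for a, b in zip(rx, ry))
        return 1 - 6 * d_squared / (n * (n * n - 1))

    # e.g. lexical-metric scores vs. challenge-level scores for three images (hypothetical)
    print(spearman_rho([0.9, 0.5, 0.2], [0.9, 0.5, 0.2]))  # → 1.0 (identical rankings)
    ```

    Because only the ranks enter the formula, the statistic is insensitive to the scale of the underlying scores, which is why it suits comparing a lexical score against an ordinal challenge level; library routines such as `scipy.stats.spearmanr` additionally handle ties and p-values.
    
    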

  20. Delay or deficit? Spelling processes in children with specific language impairment.

    Science.gov (United States)

    Larkin, Rebecca F; Williams, Gareth J; Blaggan, Samarita

    2013-01-01

    Few studies have explored the phonological, morphological and orthographic spellings skills of children with specific language impairment (SLI) simultaneously. Fifteen children with SLI (mean age=113.07 months, SD=8.61) completed language and spelling tasks alongside chronological-age controls and spelling-age controls. While the children with SLI showed a deficit in phonological spelling, they performed comparably to spelling-age controls on morphological spelling skills, and there were no differences between the three groups in producing orthographically legal spellings. The results also highlighted the potential importance of adequate non-word repetition skills in relation to effective spelling skills, and demonstrated that not all children with spoken language impairments show marked spelling difficulties. Findings are discussed in relation to theory, educational assessment and practice. As a result of this activity, readers will describe components of spoken language that predict children's morphological and phonological spelling performance. As a result of this activity, readers will describe how the spelling skills of children with SLI compare to age-matched and spelling age-matched control children. Readers will be able to interpret the variability in spelling performance seen in children with SLI. Copyright © 2013 Elsevier Inc. All rights reserved.

  1. Language comprehension in ape and child.

    Science.gov (United States)

    Savage-Rumbaugh, E S; Murphy, J; Sevcik, R A; Brakke, K E; Williams, S L; Rumbaugh, D M

    1993-01-01

    Previous investigations of the linguistic capacities of apes have focused on the ape's ability to produce words, and there has been little concern for comprehension. By contrast, it is increasingly recognized that comprehension precedes production in the language development of normal human children, and it may indeed guide production. It has been demonstrated that some species can process speech sounds categorically in a manner similar to that observed in humans. Consequently, it should be possible for such species to comprehend language if they have the cognitive capacity to understand word-referent relations and syntactic structure. Popular theories of human language acquisition suggest that the ability to process syntactic information is unique to humans and reflects a novel biological adaptation not seen in other animals. The current report addresses this issue through systematic experimental comparisons of the language comprehension skills of a 2-year-old child and an 8-year-old bonobo (Pan paniscus) who was raised in a language environment similar to that in which children are raised but specifically modified to be appropriate for an ape. Both subjects (child and bonobo) were exposed to spoken English and lexigrams from infancy, and neither was trained to comprehend speech. A common caretaker participated in the rearing of both subjects. All language acquisition was through observational learning. Without prior training, subjects were asked to respond to the same 660 novel sentences. All responses were videotaped and scored for accuracy of comprehension of the English language. The results indicated that both subjects comprehended novel requests and simple syntactic devices. The bonobo decoded the syntactic device of word recursion with higher accuracy than the child; however, the child tended to do better than the bonobo on the conjunctive, a structure that places a greater burden on short-term memory. Both subjects performed as well on sentences that

  2. Semantic abilities in children with pragmatic language impairment: the case of picture naming skills

    NARCIS (Netherlands)

    Ketelaars, M.P.; Hermans, S.I.A.; Cuperus, J.; Jansonius, K.; Verhoeven, L.

    2011-01-01

    Purpose: The semantic abilities of children with pragmatic language impairment (PLI) are subject to debate. The authors investigated picture naming and definition skills in 5-year-olds with PLI in comparison to typically developing children. Method: 84 children with PLI and 80 age-matched typically

  3. Growth of reading skills in children with a history of specific language impairment: the role of autistic symptomatology and language-related abilities.

    Science.gov (United States)

    St Clair, Michelle C; Durkin, Kevin; Conti-Ramsden, Gina; Pickles, Andrew

    2010-03-01

    Individuals with a history of specific language impairment (SLI) often have subsequent problems with reading skills, but there have been some discrepant findings as to the developmental time course of these skills. This study investigates the developmental trajectories of reading skills over a 9-year time-span (from 7 to 16 years of age) in a large sample of individuals with a history of SLI. Relationships among reading skills, autistic symptomatology, and language-related abilities were also investigated. The results indicate that both reading accuracy and comprehension are deficient but that the development of these skills progresses in a consistently parallel fashion to what would be expected from a normative sample of same age peers. Language-related abilities were strongly associated with reading skills. Unlike individuals with SLI only, those with SLI and additional autistic symptomatology had adequate reading accuracy but did not differ from the individuals with SLI only in reading comprehension. They exhibited a significant gap between what they could read and what they could understand when reading. These findings provide strong evidence that individuals with SLI experience continued, long-term deficits in reading skills from childhood to adolescence.

  4. A descriptive analysis of language and speech skills in 4- to 5-yr-old children with hearing loss.

    Science.gov (United States)

    Fitzpatrick, Elizabeth M; Crawford, Leah; Ni, Andy; Durieux-Smith, Andrée

    2011-01-01

    scores were quite variable for children with severe and profound hearing loss. Factors influencing performance in children with hearing loss included degree of hearing loss (pure-tone average) and parent education. Age at diagnosis of hearing loss was not a significant predictor of speech-language outcomes in this study. Results indicated that overall, children with all degrees of hearing loss who were fit with hearing technology and who received auditory-based rehabilitation services during the preschool years demonstrated the potential to develop spoken language communication skills. As a group, children with CIs and children with HAs did not differ significantly on language abilities although there were differences in articulation skills. Their performance at age 4 to 5 yrs was delayed compared with a group of hearing peers. The findings reinforce the need for research to identify factors that are likely to lead to age-appropriate communication skills for preschool-age children with hearing loss.

  5. Functional Near-Infrared Spectroscopy Brain Imaging Investigation of Phonological Awareness and Passage Comprehension Abilities in Adult Recipients of Cochlear Implants

    Science.gov (United States)

    Bisconti, Silvia; Shulkin, Masha; Hu, Xiaosu; Basura, Gregory J.; Kileny, Paul R.; Kovelman, Ioulia

    2016-01-01

    Purpose: The aim of this study was to examine how the brains of individuals with cochlear implants (CIs) respond to spoken language tasks that underlie successful language acquisition and processing. Method: During functional near-infrared spectroscopy imaging, CI recipients with hearing impairment (n = 10, mean age: 52.7 ± 17.3 years) and…

  6. A Transcription Scheme for Languages Employing the Arabic Script Motivated by Speech Processing Application

    National Research Council Canada - National Science Library

    Ganjavi, Shadi; Georgiou, Panayiotis G; Narayanan, Shrikanth

    2004-01-01

    ... (The DARPA Babylon Program; Narayanan, 2003). In this paper, we discuss transcription systems needed for automated spoken language processing applications in Persian, which uses the Arabic script for writing...

  7. Enhancing Foreign Language Learning through Listening Strategies Delivered in L1: An Experimental Study

    Science.gov (United States)

    Bozorgian, Hossein; Pillay, Hitendra

    2013-01-01

    Listening used in language teaching refers to a complex process that allows us to understand spoken language. The current study, conducted in Iran with an experimental design, investigated the effectiveness of teaching listening strategies delivered in L1 (Persian) and its effect on listening comprehension in L2. Five listening strategies:…

  8. One Language for the United States? (Un Idioma para Los Estados Unidos?) CSG Backgrounder.

    Science.gov (United States)

    Ford, Mark L.

    The United States has become increasingly multilingual in recent decades, and while English is the most commonly spoken language, almost 11 percent of Americans prefer to speak another language at home. Bilingualism is promoted by governmental units at the federal, state, and local levels through a variety of programs, particularly in education…

  9. Young children's family history of stuttering and their articulation, language and attentional abilities: An exploratory study.

    Science.gov (United States)

    Choi, Dahye; Conture, Edward G; Tumanova, Victoria; Clark, Chagit E; Walden, Tedra A; Jones, Robin M

    The purpose of this study was to determine whether young children who do (CWS) and do not stutter (CWNS) with a positive versus negative family history of stuttering differ in articulation, language and attentional abilities and family histories of articulation, language and attention related disorders. Participants were 25 young CWS and 50 young CWNS. All 75 participants' caregivers consistently reported a positive or negative family history of stuttering across three consecutive time points that were about 8 months apart for a total of approximately 16 months. Each participant's family history focused on the same, relatively limited number of generations (i.e., participants' parents & siblings). Children's family history of stuttering as well as articulation, language, and attention related disorders was obtained from one or two caregivers during an extensive interview. Children's speech and language abilities were measured using four standardized articulation and language tests and their attentional abilities were measured using caregiver reports of temperament. Findings indicated that (1) most caregivers (81.5%, or 75 out of 92) were consistent in their reporting of a positive or negative history of stuttering; (2) CWNS with a positive family history of stuttering, compared to those with a negative family history of stuttering, were more likely to have a reported positive family history of attention deficit/hyperactivity disorder (ADHD); and (3) CWNS with a positive family history of stuttering had lower language scores than those with a negative family history of stuttering. However, there were no such significant differences in family histories of ADHD and language scores for CWS with a positive versus negative family history of stuttering.
In addition, although 24% of CWS's versus 12% of CWNS's caregivers reported a positive family history of stuttering, inferential analyses indicated no significant differences between CWS and CWNS in relative proportions of family history of stuttering.

  10. Language, Space, Power: Reflections on Linguistic and Spatial Turns in Urban Research

    DEFF Research Database (Denmark)

    Vuolteenaho, Jani; Ameel, Lieven; Newby, Andrew

    2012-01-01

…to conceptualise the power-embeddedness of urban spaces, processes and identities. More recently, however, the ramifications of the linguistic turn across urban research have proliferated as a result of approaches in which specific place-bound language practices and language-based representations about cities have…) and thematic interests (from place naming to interactional uses of spoken language) that have been significant channels in re-directing urban scholars' attention to the concrete workings of language. As regards the spatial turn, we highlight the relevance of the connectivity-, territoriality-, attachment…

  11. Language development in deaf children’s interactions with deaf and hearing adults. A Dutch longitudinal study

    NARCIS (Netherlands)

    Klatter-Folmer, H.A.K.; Hout, R.W.N.M. van; Kolen, E.; Verhoeven, L.T.W.

    2006-01-01

    The language development of two deaf girls and four deaf boys in Sign Language of the Netherlands (SLN) and spoken Dutch was investigated longitudinally. At the start, the mean age of the children was 3;5. All data were collected in video-recorded semistructured conversations between individual

  12. A Bibliography of English as a Second Language Materials: Grades K-3.

    Science.gov (United States)

    National Clearinghouse for Bilingual Education, Arlington, VA.

    This annotated bibliography of English as a second language (ESL) materials for grades K-3 is divided into four parts. The first part, ESL texts, lists a number of series or single texts that are designed to teach the spoken language and reading to the elementary school child. The second part is a list of readers that, although were mostly…

  13. From Monologue to Dialogue: Natural Language Generation in OVIS

    NARCIS (Netherlands)

    Theune, Mariet; Freedman, R.; Callaway, C.

    This paper describes how a language generation system that was originally designed for monologue generation, has been adapted for use in the OVIS spoken dialogue system. To meet the requirement that in a dialogue, the system’s utterances should make up a single, coherent dialogue turn, several

  14. Perceived Conventionality in Co-speech Gestures Involves the Fronto-Temporal Language Network

    Directory of Open Access Journals (Sweden)

    Dhana Wolf

    2017-11-01

Full Text Available Face-to-face communication is multimodal; it encompasses spoken words, facial expressions, gaze, and co-speech gestures. In contrast to linguistic symbols (e.g., spoken words or signs in sign language), which rely on mostly explicit conventions, gestures vary in their degree of conventionality. Bodily signs may have a generally accepted or conventionalized meaning (e.g., a head shake) or less so (e.g., self-grooming). We hypothesized that subjective perception of conventionality in co-speech gestures relies on the classical language network, i.e., the left hemispheric inferior frontal gyrus (IFG, Broca's area) and the posterior superior temporal gyrus (pSTG, Wernicke's area), and studied 36 subjects watching video-recorded story retellings during a behavioral and a functional magnetic resonance imaging (fMRI) experiment. It is well documented that neural correlates of such naturalistic videos emerge as intersubject covariance (ISC) in fMRI even without involving a stimulus model (model-free analysis). The subjects attended either to perceived conventionality or to a control condition (any hand movements or gesture-speech relations). Such tasks modulate ISC in contributing neural structures, and thus we studied ISC changes under task demands in language networks. Indeed, the conventionality task significantly increased covariance of the button-press time series and neuronal synchronization in the left IFG over the comparison with other tasks. In the left IFG, synchronous activity was observed during the conventionality task only. In contrast, the left pSTG exhibited correlated activation patterns during all conditions, with an increase in the conventionality task at the trend level only. Conceivably, the left IFG can be considered a core region for the processing of perceived conventionality in co-speech gestures, similar to spoken language. In general, the interpretation of conventionalized signs may rely on neural mechanisms that engage during language comprehension.
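The intersubject-covariance analysis described in this abstract can be illustrated with a minimal leave-one-out sketch: each subject's time series is correlated with the average of all other subjects' time series, so stimulus-driven synchronization shows up without any stimulus model. The data, sample sizes, and noise levels below are illustrative assumptions, not values from the study.

```python
import numpy as np

# Toy leave-one-out intersubject correlation (ISC): correlate each
# subject's regional time series with the mean of the remaining subjects.
# All numbers here are made up for illustration.

rng = np.random.default_rng(0)
n_subjects, n_timepoints = 6, 200
shared = rng.standard_normal(n_timepoints)            # stimulus-driven signal
data = np.stack([shared + 0.8 * rng.standard_normal(n_timepoints)
                 for _ in range(n_subjects)])         # shape (subjects, time)

def isc(data):
    """Leave-one-out ISC: r between each subject and the mean of the rest."""
    rs = []
    for s in range(len(data)):
        others = np.delete(data, s, axis=0).mean(axis=0)
        rs.append(np.corrcoef(data[s], others)[0, 1])
    return np.array(rs)

print(isc(data).round(2))  # one correlation value per subject
```

With a common signal present, every subject correlates positively with the group; a task that strengthens shared processing (as the conventionality task reportedly did in the left IFG) would raise these values.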

  15. INDIVIDUAL ACCOUNTABILITY IN COOPERATIVE LEARNING: MORE OPPORTUNITIES TO PRODUCE SPOKEN ENGLISH

    Directory of Open Access Journals (Sweden)

    Puji Astuti

    2017-05-01

The contribution of cooperative learning (CL) in promoting second and foreign language learning has been widely acknowledged. Little scholarly attention, however, has been given to revealing how this teaching method works and promotes learners' improved communicative competence. This qualitative case study explores the important role that individual accountability in CL plays in giving English as a Foreign Language (EFL) learners in Indonesia the opportunity to use the target language of English. While individual accountability is a principle of, and one of the activities in, CL, it is currently understudied; thus little is known about how it enhances EFL learning. This study aims to address this gap by conducting a constructivist grounded theory analysis of participant observation, in-depth interview, and document analysis data drawn from two secondary school EFL teachers, 77 students in the observed classrooms, and four focal students. The analysis shows that through individual accountability in CL, the EFL learners had opportunities to use the target language, which may have contributed to the attainment of communicative competence, the goal of the EFL instruction. More specifically, compared to the use of conventional group work in the observed classrooms, through the activities of individual accountability in CL, i.e., performances and peer interaction, the EFL learners had more opportunities to use spoken English. The present study recommends that teachers, especially those new to CL, follow the preset procedure of selected CL instructional strategies or structures in order to recognize the activities within individual accountability in CL and understand how these activities benefit students.

  16. Early preschool processing abilities predict subsequent reading outcomes in bilingual Spanish-Catalan children with Specific Language Impairment (SLI).

    Science.gov (United States)

    Aguilar-Mediavilla, Eva; Buil-Legaz, Lucía; Pérez-Castelló, Josep A; Rigo-Carratalà, Eduard; Adrover-Roig, Daniel

    2014-01-01

Children with Specific Language Impairment (SLI) have severe language difficulties without showing hearing impairments, cognitive deficits, neurological damage or socio-emotional deprivation. However, previous studies have shown that children with SLI exhibit some cognitive and literacy problems. Our study analyses the relationship between preschool cognitive and linguistic abilities and the later development of reading abilities in Spanish-Catalan bilingual children with SLI. The sample consisted of 17 bilingual Spanish-Catalan children with SLI and 17 age-matched controls. We tested eight distinct processes related to phonological, attentional, and language processing at the age of 6 years, and reading at 8 years of age. Results show that bilingual Spanish-Catalan children with SLI obtain significantly lower scores, as compared to typically developing peers, in phonological awareness, phonological memory, and rapid automatized naming (RAN), together with a lower outcome in tasks measuring sentence repetition and verbal fluency. Regarding attentional processes, bilingual Spanish-Catalan children with SLI obtained lower scores in auditory attention, but not in visual attention. At the age of 8 years, Spanish-Catalan children with SLI had lower scores than their age-matched controls in total reading score, letter identification (decoding), and a semantic task (comprehension). Regression analyses identified both phonological awareness and verbal fluency at the age of 6 years as the best predictors of subsequent reading performance at the age of 8 years. Our data suggest that language acquisition problems and difficulties in reading acquisition in bilingual children with SLI might be related to the close interdependence between a limitation in cognitive processing and a deficit at the linguistic level. 
After reading this article, readers will be able to: identify their understanding of the relation between language difficulties and reading outcomes; explain how processing
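The predictive step reported in this abstract, regressing an age-8 reading score on age-6 measures, can be sketched with ordinary least squares. The scores, weights, and sample below are hypothetical stand-ins (only the total n of 34 mirrors the study's 17 + 17 design), not the study's data.

```python
import numpy as np

# Illustrative OLS regression: predict a later reading score from two
# earlier predictors (phonological awareness, verbal fluency), mirroring
# the kind of analysis the abstract describes. Data are simulated.

rng = np.random.default_rng(1)
n = 34                                 # 17 SLI + 17 controls in the study
phon = rng.normal(100, 15, n)          # hypothetical standard scores at age 6
fluency = rng.normal(100, 15, n)
reading = 0.5 * phon + 0.3 * fluency + rng.normal(0, 5, n)  # age-8 outcome

# Design matrix with an intercept column; solve by least squares.
X = np.column_stack([np.ones(n), phon, fluency])
coef, *_ = np.linalg.lstsq(X, reading, rcond=None)
print(coef.round(2))  # [intercept, phonological-awareness weight, fluency weight]
```

The recovered weights approximate the simulated ones; in the study itself, the analogous coefficients identified phonological awareness and verbal fluency as the strongest predictors.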

  17. La langue aymara: Des cimes andines a l'intelligence des ordinateurs (The Aymara Language: Andean Peaks for the Intelligence of Computers).

    Science.gov (United States)

    Barbin, Christina

    1987-01-01

    Research suggests that Aymara, an ancient language still spoken in parts of South America, may be well suited for use as a "bridge" language in translation because of its extremely regular and coherent grammar. A machine translation program using the language has already been developed. (MSE)

  18. Second Language Learners' Attitudes towards English Varieties

    Science.gov (United States)

    Zhang, Weimin; Hu, Guiling

    2008-01-01

    This pilot project investigates second language (L2) learners' attitudes towards three varieties of English: American (AmE), British (BrE) and Australian (AuE). A 69-word passage spoken by a female speaker of each variety was used. Participants were 30 Chinese students pursuing Masters or Doctoral degrees in the United States, who listened to each…

  19. The English Language of the Nigeria Police

    Science.gov (United States)

    Chinwe, Udo Victoria

    2015-01-01

In present-day Nigeria, the quality of the English language spoken by Nigerians is perceived to have been deteriorating and needs urgent attention. The proliferation of books and articles in recent years can be seen as a natural outgrowth of the attention and recognition the subject has received as a matter of discourse. Evidently, every profession,…

  20. A cross-language study of the speech sounds in Yorùbá and Malay: Implications for Second Language Acquisition

    Directory of Open Access Journals (Sweden)

    Boluwaji Oshodi

    2013-07-01

Acquiring a language begins with knowledge of its sound system, which falls under the branch of linguistics known as phonetics. Knowledge of the sound system is very important to prospective learners, particularly L2 learners whose L1 exhibits different sounds and features from the target L2, because this knowledge is vital in order to internalise the correct pronunciation of words. This study examined and contrasted the sound system of Yorùbá, a Niger-Congo language spoken in Nigeria, with that of Malay (Peninsular variety), an Austronesian language spoken in Malaysia, with emphasis on the areas of difference. The data for this study were collected from ten participants: five female native Malay speakers who are married to Yorùbá native speakers but live in Malaysia, and five Yorùbá native speakers who reside in Nigeria. The findings revealed that speakers on both sides have difficulties with sounds and features in the L2 that are not attested in their L1, and they tended to replace them with similar ones from their L1 through transfer. This confirms that asymmetry between the sound systems of the L1 and L2 is a major source of error in L2 acquisition.