WorldWideScience

Sample records for spoken language experience

  1. Effects of early auditory experience on the spoken language of deaf children at 3 years of age.

    Science.gov (United States)

    Nicholas, Johanna Grant; Geers, Ann E

    2006-06-01

    By age 3, typically developing children have achieved extensive vocabulary and syntax skills that facilitate both cognitive and social development. Substantial delays in spoken language acquisition have been documented for children with severe to profound deafness, even those with auditory oral training and early hearing aid use. This study documents the spoken language skills achieved by orally educated 3-yr-olds whose profound hearing loss was identified and hearing aids fitted between 1 and 30 mo of age and who received a cochlear implant between 12 and 38 mo of age. The purpose of the analysis was to examine the effects of age, duration, and type of early auditory experience on spoken language competence at age 3.5 yr. The spoken language skills of 76 children who had used a cochlear implant for at least 7 mo were evaluated via standardized 30-minute language sample analysis, a parent-completed vocabulary checklist, and a teacher language-rating scale. The children were recruited from and enrolled in oral education programs or therapy practices across the United States. Inclusion criteria included presumed deaf since birth, English the primary language of the home, no other known conditions that interfere with speech/language development, enrolled in programs using oral education methods, and no known problems with the cochlear implant lasting more than 30 days. Strong correlations were obtained among all language measures. Therefore, principal components analysis was used to derive a single Language Factor score for each child. A number of possible predictors of language outcome were examined, including age at identification and intervention with a hearing aid, duration of use of a hearing aid, pre-implant pure-tone average (PTA) threshold with a hearing aid, PTA threshold with a cochlear implant, and duration of use of a cochlear implant/age at implantation (the last two variables were practically identical because all children were tested between 40 and 44
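    The "single Language Factor score" mentioned above is the kind of composite that can be obtained by standardizing the strongly correlated language measures and keeping their first principal component. The sketch below illustrates that step with scikit-learn on invented scores for three measures loosely modeled on the instruments named in the abstract; the data and variable names are hypothetical, not the authors' dataset.

        # Hedged sketch: derive a composite "Language Factor" score as the first
        # principal component of standardized, inter-correlated language measures.
        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.preprocessing import StandardScaler

        # Hypothetical scores for six children on three correlated measures
        # (language sample, parent vocabulary checklist, teacher rating scale).
        scores = np.array([
            [42., 310., 3.1],
            [55., 420., 4.0],
            [38., 250., 2.7],
            [61., 480., 4.4],
            [47., 360., 3.5],
            [52., 400., 3.9],
        ])

        z = StandardScaler().fit_transform(scores)   # put measures on a common scale
        pca = PCA(n_components=1).fit(z)
        language_factor = pca.transform(z).ravel()   # one composite score per child

        print("variance explained:", round(pca.explained_variance_ratio_[0], 3))
        print("Language Factor scores:", np.round(language_factor, 2))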

  2. Professionals' Guidance about Spoken Language Multilingualism and Spoken Language Choice for Children with Hearing Loss

    Science.gov (United States)

    Crowe, Kathryn; McLeod, Sharynne

    2016-01-01

    The purpose of this research was to investigate factors that influence professionals' guidance of parents of children with hearing loss regarding spoken language multilingualism and spoken language choice. Sixteen professionals who provide services to children and young people with hearing loss completed an online survey, rating the importance of…

  3. Spoken Grammar and Its Role in the English Language Classroom

    Science.gov (United States)

    Hilliard, Amanda

    2014-01-01

    This article addresses key issues and considerations for teachers wanting to incorporate spoken grammar activities into their own teaching and also focuses on six common features of spoken grammar, with practical activities and suggestions for teaching them in the language classroom. The hope is that this discussion of spoken grammar and its place…

  4. Speech-Language Pathologists: Vital Listening and Spoken Language Professionals

    Science.gov (United States)

    Houston, K. Todd; Perigoe, Christina B.

    2010-01-01

    Determining the most effective methods and techniques to facilitate the spoken language development of individuals with hearing loss has been a focus of practitioners for centuries. Due to modern advances in hearing technology, earlier identification of hearing loss, and immediate enrollment in early intervention, children with hearing loss are…

  5. Deep bottleneck features for spoken language identification.

    Directory of Open Access Journals (Sweden)

    Bing Jiang

    A key problem in spoken language identification (LID) is to design effective representations which are specific to language information. For example, in recent years, representations based on both phonotactic and acoustic features have proven their effectiveness for LID. Although advances in machine learning have led to significant improvements, LID performance is still lacking, especially for short duration speech utterances. With the hypothesis that language information is weak and represented only latently in speech, and is largely dependent on the statistical properties of the speech content, existing representations may be insufficient. Furthermore, they may be susceptible to the variations caused by different speakers, specific content of the speech segments, and background noise. To address this, we propose using Deep Bottleneck Features (DBF) for spoken LID, motivated by the success of Deep Neural Networks (DNN) in speech recognition. We show that DBFs can form a low-dimensional compact representation of the original inputs with a powerful descriptive and discriminative capability. To evaluate the effectiveness of this, we design two acoustic models, termed DBF-TV and parallel DBF-TV (PDBF-TV), using a DBF based i-vector representation for each speech utterance. Results on the NIST language recognition evaluation 2009 (LRE09) show significant improvements over state-of-the-art systems. By fusing the output of phonotactic and acoustic approaches, we achieve an EER of 1.08%, 1.89% and 7.01% for 30 s, 10 s and 3 s test utterances, respectively. Furthermore, various DBF configurations have been extensively evaluated, and an optimal system proposed.
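    For readers unfamiliar with bottleneck features, the sketch below shows the core idea in PyTorch: a deep network trained on some frame-level classification target contains one deliberately narrow hidden layer, and the activations of that layer are kept as a compact representation of each input frame. The layer sizes, 39-dimensional input, and random frames are illustrative assumptions; the paper's actual DBF extractor, i-vector modelling, and system fusion are not reproduced here.

        # Hedged sketch: a deep network with one narrow hidden layer whose
        # activations are kept as Deep Bottleneck Features (DBF) for each frame.
        import torch
        import torch.nn as nn

        class BottleneckNet(nn.Module):
            def __init__(self, n_in=39, n_hidden=1024, n_bottleneck=40, n_targets=120):
                super().__init__()
                self.front = nn.Sequential(
                    nn.Linear(n_in, n_hidden), nn.ReLU(),
                    nn.Linear(n_hidden, n_hidden), nn.ReLU(),
                    nn.Linear(n_hidden, n_bottleneck),      # the narrow bottleneck layer
                )
                self.back = nn.Sequential(
                    nn.ReLU(),
                    nn.Linear(n_bottleneck, n_hidden), nn.ReLU(),
                    nn.Linear(n_hidden, n_targets),         # e.g. phone-state targets
                )

            def forward(self, x):                           # used during training
                return self.back(self.front(x))

            def bottleneck_features(self, x):               # used at feature-extraction time
                with torch.no_grad():
                    return self.front(x)

        net = BottleneckNet()
        frames = torch.randn(10, 39)                        # ten hypothetical acoustic frames
        dbf = net.bottleneck_features(frames)
        print(dbf.shape)                                    # torch.Size([10, 40])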

  6. Spoken Indian language identification: a review of features and ...

    Indian Academy of Sciences (India)

    BAKSHI AARTI

    2018-04-12

    Apr 12, 2018 ... languages and can be used for the purposes of spoken language identification. Keywords: SLID ... branch of linguistics to study the sound structure of human language ... countries, work in the area of Indian language identification has not ... English and speech database has been collected over tele-.

  7. The employment of a spoken language computer applied to an air traffic control task.

    Science.gov (United States)

    Laveson, J. I.; Silver, C. A.

    1972-01-01

    This study assessed the merits of a limited spoken-language (56-word) computer in a simulated air traffic control (ATC) task. An airport zone approximately 60 miles in diameter, with a traffic-flow simulation ranging from single-engine to commercial jet aircraft, provided the workload for the controllers. This research determined that, under the circumstances of the experiments carried out, the use of a spoken-language computer would not improve controller performance.

  8. ELSIE: The Quick Reaction Spoken Language Translation (QRSLT)

    National Research Council Canada - National Science Library

    Montgomery, Christine

    2000-01-01

    The objective of this effort was to develop a prototype, hand-held or body-mounted spoken language translator to assist military and law enforcement personnel in interacting with non-English-speaking people...

  9. Using Spoken Language to Facilitate Military Transportation Planning

    National Research Council Canada - National Science Library

    Bates, Madeleine; Ellard, Dan; Peterson, Pat; Shaked, Varda

    1991-01-01

    .... In an effort to demonstrate the relevance of SIS technology to real-world military applications, BBN has undertaken the task of providing a spoken language interface to DART, a system for military...

  10. Attentional Capture of Objects Referred to by Spoken Language

    Science.gov (United States)

    Salverda, Anne Pier; Altmann, Gerry T. M.

    2011-01-01

    Participants saw a small number of objects in a visual display and performed a visual detection or visual-discrimination task in the context of task-irrelevant spoken distractors. In each experiment, a visual cue was presented 400 ms after the onset of a spoken word. In experiments 1 and 2, the cue was an isoluminant color change and participants…

  11. Tonal Language Background and Detecting Pitch Contour in Spoken and Musical Items

    Science.gov (United States)

    Stevens, Catherine J.; Keller, Peter E.; Tyler, Michael D.

    2013-01-01

    An experiment investigated the effect of tonal language background on discrimination of pitch contour in short spoken and musical items. It was hypothesized that extensive exposure to a tonal language attunes perception of pitch contour. Accuracy and reaction times of adult participants from tonal (Thai) and non-tonal (Australian English) language…

  12. CROATIAN ADULT SPOKEN LANGUAGE CORPUS (HrAL)

    Directory of Open Access Journals (Sweden)

    Jelena Kuvač Kraljević

    2016-01-01

    Interest in spoken-language corpora has increased over the past two decades, leading to the development of new corpora and the discovery of new facets of spoken language. These types of corpora represent the most comprehensive data source about the language of ordinary speakers. Such corpora are based on spontaneous, unscripted speech defined by a variety of styles, registers and dialects. The aim of this paper is to present the Croatian Adult Spoken Language Corpus (HrAL), its structure and its possible applications in different linguistic subfields. HrAL was built by sampling spontaneous conversations among 617 speakers from all Croatian counties, and it comprises more than 250,000 tokens and more than 100,000 types. Data were collected during three time slots: from 2010 to 2012, from 2014 to 2015 and during 2016. HrAL is today available within TalkBank, a large database of spoken-language corpora covering different languages (https://talkbank.org), in the Conversational Analyses corpora within the subsection titled Conversational Banks. Data were transcribed, coded and segmented using the transcription format Codes for Human Analysis of Transcripts (CHAT) and the Computerised Language Analysis (CLAN) suite of programmes within the TalkBank toolkit. Speech streams were segmented into communication units (C-units) based on syntactic criteria. Most transcripts were linked to their source audios. TalkBank is publicly and freely available, i.e. all data stored in it can be shared by the wider community in accordance with the basic rules of TalkBank. HrAL provides information about spoken grammar and lexicon, discourse skills, error production and productivity in general. It may be useful for sociolinguistic research and studies of synchronic language changes in Croatian.
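    For orientation, the sketch below shows one simple way to read the speaker tiers of a CHAT (.cha) transcript such as those distributed through TalkBank and to compute rough token and type counts in plain Python; the file name is hypothetical and the tokenization is far cruder than what the CLAN tools provide.

        # Hedged sketch: read the speaker tiers of a CHAT (.cha) transcript and
        # compute simple token/type counts.
        import re
        from collections import Counter

        def read_utterances(path):
            utterances = []
            with open(path, encoding="utf-8") as fh:
                for line in fh:
                    if line.startswith("*"):                 # main speaker tiers, e.g. "*PAR:"
                        speaker, _, text = line.partition(":")
                        utterances.append((speaker.lstrip("*"), text.strip()))
            return utterances

        def token_type_counts(utterances):
            tokens = []
            for _, text in utterances:
                text = re.sub(r"\[[^\]]*\]", " ", text)      # drop bracketed CHAT codes
                tokens.extend(re.findall(r"[^\W\d_]+", text.lower()))
            return len(tokens), len(Counter(tokens))

        utts = read_utterances("example_transcript.cha")     # hypothetical file name
        n_tokens, n_types = token_type_counts(utts)
        print(n_tokens, "tokens,", n_types, "types")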

  13. Iconicity as a general property of language: evidence from spoken and signed languages

    Directory of Open Access Journals (Sweden)

    Pamela Perniss

    2010-12-01

    Current views about language are dominated by the idea of arbitrary connections between linguistic form and meaning. However, if we look beyond the more familiar Indo-European languages and also include both spoken and signed language modalities, we find that motivated, iconic form-meaning mappings are, in fact, pervasive in language. In this paper, we review the different types of iconic mappings that characterize languages in both modalities, including the predominantly visually iconic mappings in signed languages. Having shown that iconic mappings are present across languages, we then proceed to review evidence showing that language users (signers and speakers) exploit iconicity in language processing and language acquisition. While not discounting the presence and importance of arbitrariness in language, we put forward the idea that iconicity also needs to be recognized as a general property of language, which may serve the function of reducing the gap between linguistic form and conceptual representation to allow the language system to hook up to motor and perceptual experience.

  14. Delayed Anticipatory Spoken Language Processing in Adults with Dyslexia—Evidence from Eye-tracking.

    Science.gov (United States)

    Huettig, Falk; Brouwer, Susanne

    2015-05-01

    It is now well established that anticipation of upcoming input is a key characteristic of spoken language comprehension. It has also frequently been observed that literacy influences spoken language processing. Here, we investigated whether anticipatory spoken language processing is related to individuals' word reading abilities. Dutch adults with dyslexia and a control group participated in two eye-tracking experiments. Experiment 1 was conducted to assess whether adults with dyslexia show the typical language-mediated eye gaze patterns. Eye movements of both adults with and without dyslexia closely replicated earlier research: spoken language is used to direct attention to relevant objects in the environment in a closely time-locked manner. In Experiment 2, participants received instructions (e.g., 'Kijk naar de(COM) afgebeelde piano(COM)', look at the displayed piano) while viewing four objects. Articles (Dutch 'het' or 'de') were gender marked such that the article agreed in gender only with the target, and thus, participants could use gender information from the article to predict the target object. The adults with dyslexia anticipated the target objects but much later than the controls. Moreover, participants' word reading scores correlated positively with their anticipatory eye movements. We conclude by discussing the mechanisms by which reading abilities may influence predictive language processing. Copyright © 2015 John Wiley & Sons, Ltd.

  15. Spoken language outcomes after hemispherectomy: factoring in etiology.

    Science.gov (United States)

    Curtiss, S; de Bode, S; Mathern, G W

    2001-12-01

    We analyzed postsurgery linguistic outcomes of 43 hemispherectomy patients operated on at UCLA. We rated spoken language (Spoken Language Rank, SLR) on a scale from 0 (no language) to 6 (mature grammar) and examined the effects of side of resection/damage, age at surgery/seizure onset, seizure control postsurgery, and etiology on language development. Etiology was defined as developmental (cortical dysplasia and prenatal stroke) and acquired pathology (Rasmussen's encephalitis and postnatal stroke). We found that clinical variables were predictive of language outcomes only when they were considered within distinct etiology groups. Specifically, children with developmental etiologies had lower SLRs than those with acquired pathologies (p =.0006); age factors correlated positively with higher SLRs only for children with acquired etiologies (p =.0006); right-sided resections led to higher SLRs only for the acquired group (p =.0008); and postsurgery seizure control correlated positively with SLR only for those with developmental etiologies (p =.0047). We argue that the variables considered are not independent predictors of spoken language outcome posthemispherectomy but should be viewed instead as characteristics of etiology. Copyright 2001 Elsevier Science.

  16. Prosodic Parallelism – comparing spoken and written language

    Directory of Open Access Journals (Sweden)

    Richard Wiese

    2016-10-01

    The Prosodic Parallelism hypothesis claims that adjacent prosodic categories prefer identical branching of their internal adjacent constituents. According to Wiese and Speyer (2015), this preference implies that feet contained in the same phonological phrase display either binary or unary branching, but not different types of branching. The seemingly free schwa-zero alternations at the end of some words in German make it possible to test this hypothesis. The hypothesis was successfully tested in a corpus study which used large-scale bodies of written German. As some open questions remain, and as it is unclear whether Prosodic Parallelism is valid for the spoken modality as well, the present study extends this inquiry to spoken German. As in the previous study, the results of a corpus analysis recruiting a variety of linguistic constructions are presented. The Prosodic Parallelism hypothesis can be demonstrated to be valid for spoken German as well as for written German. The paper thus contributes to the question of whether prosodic preferences are similar between the spoken and written modes of a language. Some consequences of the results for the production of language are discussed.

  17. Inferring Speaker Affect in Spoken Natural Language Communication

    OpenAIRE

    Pon-Barry, Heather Roberta

    2012-01-01

    The field of spoken language processing is concerned with creating computer programs that can understand human speech and produce human-like speech. Regarding the problem of understanding human speech, there is currently growing interest in moving beyond speech recognition (the task of transcribing the words in an audio stream) and towards machine listening—interpreting the full spectrum of information in an audio stream. One part of machine listening, the problem that this thesis focuses on, ...

  18. Spoken Language Understanding Systems for Extracting Semantic Information from Speech

    CERN Document Server

    Tur, Gokhan

    2011-01-01

    Spoken language understanding (SLU) is an emerging field in between speech and language processing, investigating human/machine and human/human communication by leveraging technologies from signal processing, pattern recognition, machine learning and artificial intelligence. SLU systems are designed to extract the meaning from speech utterances and their applications are vast, from voice search in mobile devices to meeting summarization, attracting interest from both commercial and academic sectors. Both human/machine and human/human communications can benefit from the application of SLU, usin

  19. Spoken Spanish Language Development at the High School Level: A Mixed-Methods Study

    Science.gov (United States)

    Moeller, Aleidine J.; Theiler, Janine

    2014-01-01

    Communicative approaches to teaching language have emphasized the centrality of oral proficiency in the language acquisition process, but research investigating oral proficiency has been surprisingly limited, yielding an incomplete understanding of spoken language development. This study investigated the development of spoken language at the high…

  20. Asian/Pacific Islander Languages Spoken by English Learners (ELs). Fast Facts

    Science.gov (United States)

    Office of English Language Acquisition, US Department of Education, 2015

    2015-01-01

    The Office of English Language Acquisition (OELA) has synthesized key data on English learners (ELs) into two-page PDF sheets, by topic, with graphics, plus key contacts. The topics for this report on Asian/Pacific Islander languages spoken by English Learners (ELs) include: (1) Top 10 Most Common Asian/Pacific Islander Languages Spoken Among ELs:…

  1. Native Language Spoken as a Risk Marker for Tooth Decay.

    Science.gov (United States)

    Carson, J; Walker, L A; Sanders, B J; Jones, J E; Weddell, J A; Tomlin, A M

    2015-01-01

    The purpose of this study was to assess dmft, the number of decayed, missing (due to caries), and/or filled primary teeth, of English-speaking and non-English-speaking patients of a hospital-based pediatric dental clinic under the age of 72 months to determine if native language is a risk marker for tooth decay. Records from an outpatient dental clinic which met the inclusion criteria were reviewed. Patient demographics and dmft score were recorded, and the patients were separated into three groups by the native language spoken by their parents: English, Spanish and all other languages. A total of 419 charts were assessed: 253 English-speaking, 126 Spanish-speaking, and 40 other native languages. After accounting for patient characteristics, dmft was significantly higher for the other-language group than for the English-speaking group (p < 0.05). Those patients under 72 months of age whose parents' native language is not English or Spanish have the highest risk for increased dmft when compared to English- and Spanish-speaking patients. Providers should consider taking additional time to educate patients and their parents, in their native language, on the importance of routine dental care and oral hygiene.

  2. Pointing and Reference in Sign Language and Spoken Language: Anchoring vs. Identifying

    Science.gov (United States)

    Barberà, Gemma; Zwets, Martine

    2013-01-01

    In both signed and spoken languages, pointing serves to direct an addressee's attention to a particular entity. This entity may be either present or absent in the physical context of the conversation. In this article we focus on pointing directed to nonspeaker/nonaddressee referents in Sign Language of the Netherlands (Nederlandse Gebarentaal,…

  3. Students who are deaf and hard of hearing and use sign language: considerations and strategies for developing spoken language and literacy skills.

    Science.gov (United States)

    Nussbaum, Debra; Waddy-Smith, Bettie; Doyle, Jane

    2012-11-01

    There is a core body of knowledge, experience, and skills integral to facilitating auditory, speech, and spoken language development when working with the general population of students who are deaf and hard of hearing. There are additional issues, strategies, and challenges inherent in speech habilitation/rehabilitation practices essential to the population of deaf and hard of hearing students who also use sign language. This article will highlight philosophical and practical considerations related to practices used to facilitate spoken language development and associated literacy skills for children and adolescents who sign. It will discuss considerations for planning and implementing practices that acknowledge and utilize a student's abilities in sign language, and address how to link these skills to developing and using spoken language. Included will be considerations for children from early childhood through high school with a broad range of auditory access, language, and communication characteristics.

  4. The Listening and Spoken Language Data Repository: Design and Project Overview

    Science.gov (United States)

    Bradham, Tamala S.; Fonnesbeck, Christopher; Toll, Alice; Hecht, Barbara F.

    2018-01-01

    Purpose: The purpose of the Listening and Spoken Language Data Repository (LSL-DR) was to address a critical need for a systemwide outcome data-monitoring program for the development of listening and spoken language skills in highly specialized educational programs for children with hearing loss highlighted in Goal 3b of the 2007 Joint Committee…

  5. Development of Mandarin spoken language after pediatric cochlear implantation.

    Science.gov (United States)

    Li, Bei; Soli, Sigfrid D; Zheng, Yun; Li, Gang; Meng, Zhaoli

    2014-07-01

    The purpose of this study was to evaluate early spoken language development in young Mandarin-speaking children during the first 24 months after cochlear implantation, as measured by receptive and expressive vocabulary growth rates. Growth rates were compared with those of normally hearing children and with growth rates for English-speaking children with cochlear implants. Receptive and expressive vocabularies were measured with the simplified short form (SSF) version of the Mandarin Communicative Development Inventory (MCDI) in a sample of 112 pediatric implant recipients at baseline, 3, 6, 12, and 24 months after implantation. Implant ages ranged from 1 to 5 years. Scores were expressed in terms of normal equivalent ages, allowing normalized vocabulary growth rates to be determined. Scores for English-speaking children were re-expressed in these terms, allowing direct comparisons of Mandarin and English early spoken language development. Vocabulary growth rates during the first 12 months after implantation were similar to those for normally hearing children less than 16 months of age. Comparisons with growth rates for normally hearing children 16-30 months of age showed that the youngest implant age group (1-2 years) had an average growth rate of 0.68 that of normally hearing children; while the middle implant age group (2-3 years) had an average growth rate of 0.65; and the oldest implant age group (>3 years) had an average growth rate of 0.56, significantly less than the other two rates. Growth rates for English-speaking children with cochlear implants were 0.68 in the youngest group, 0.54 in the middle group, and 0.57 in the oldest group. Growth rates in the middle implant age groups for the two languages differed significantly. The SSF version of the MCDI is suitable for assessment of Mandarin language development during the first 24 months after cochlear implantation. Effects of implant age and duration of implantation can be compared directly across
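    The normalized growth rates quoted above can be thought of as the slope of normal-equivalent vocabulary age against time since implantation, where a slope of 1.0 would match normally hearing peers. A minimal sketch of that calculation, on invented scores, is shown below.

        # Hedged sketch: normalized vocabulary growth rate = slope of normal-equivalent
        # age (in months) regressed on time since implantation (in months).
        # A slope of 1.0 would match normally hearing peers; the data are invented.
        import numpy as np

        months_post_implant = np.array([0, 3, 6, 12, 24], dtype=float)
        normal_equiv_age = np.array([10, 12, 14, 18, 26], dtype=float)   # hypothetical SSF-MCDI equivalents

        slope, intercept = np.polyfit(months_post_implant, normal_equiv_age, 1)
        print(f"normalized growth rate: {slope:.2f} (1.00 = normally hearing rate)")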

  6. Language and Culture in the Multiethnic Community: Spoken Language Assessment.

    Science.gov (United States)

    Matluck, Joseph H.; Mace-Matluck, Betty J.

    This paper discusses the sociolinguistic problems inherent in multilingual testing, and the accompanying dangers of cultural bias in either the visuals or the language used in a given test. The first section discusses English-speaking Americans' perception of foreign speakers in terms of: (1) physical features; (2) speech, specifically vocabulary,…

  7. Foreign Language Tutoring in Oral Conversations Using Spoken Dialog Systems

    Science.gov (United States)

    Lee, Sungjin; Noh, Hyungjong; Lee, Jonghoon; Lee, Kyusong; Lee, Gary Geunbae

    Although there have been enormous investments into English education all around the world, not many differences have been made to change the English instruction style. Considering the shortcomings for the current teaching-learning methodology, we have been investigating advanced computer-assisted language learning (CALL) systems. This paper aims at summarizing a set of POSTECH approaches including theories, technologies, systems, and field studies and providing relevant pointers. On top of the state-of-the-art technologies of spoken dialog system, a variety of adaptations have been applied to overcome some problems caused by numerous errors and variations naturally produced by non-native speakers. Furthermore, a number of methods have been developed for generating educational feedback that help learners develop to be proficient. Integrating these efforts resulted in intelligent educational robots — Mero and Engkey — and virtual 3D language learning games, Pomy. To verify the effects of our approaches on students' communicative abilities, we have conducted a field study at an elementary school in Korea. The results showed that our CALL approaches can be enjoyable and fruitful activities for students. Although the results of this study bring us a step closer to understanding computer-based education, more studies are needed to consolidate the findings.

  8. Auditory and verbal memory predictors of spoken language skills in children with cochlear implants

    NARCIS (Netherlands)

    Hoog, B.E. de; Langereis, M.C.; Weerdenburg, M. van; Keuning, J.; Knoors, H.; Verhoeven, L.

    2016-01-01

    BACKGROUND: Large variability in individual spoken language outcomes remains a persistent finding in the group of children with cochlear implants (CIs), particularly in their grammatical development. AIMS: In the present study, we examined the extent of delay in lexical and morphosyntactic spoken

  9. Auditory and verbal memory predictors of spoken language skills in children with cochlear implants

    NARCIS (Netherlands)

    Hoog, B.E. de; Langereis, M.C.; Weerdenburg, M.W.C. van; Keuning, J.; Knoors, H.E.T.; Verhoeven, L.T.W.

    2016-01-01

    Background: Large variability in individual spoken language outcomes remains a persistent finding in the group of children with cochlear implants (CIs), particularly in their grammatical development. Aims: In the present study, we examined the extent of delay in lexical and morphosyntactic spoken

  10. Sign Language and Spoken Language for Children With Hearing Loss: A Systematic Review.

    Science.gov (United States)

    Fitzpatrick, Elizabeth M; Hamel, Candyce; Stevens, Adrienne; Pratt, Misty; Moher, David; Doucet, Suzanne P; Neuss, Deirdre; Bernstein, Anita; Na, Eunjung

    2016-01-01

    Permanent hearing loss affects 1 to 3 per 1000 children and interferes with typical communication development. Early detection through newborn hearing screening and hearing technology provide most children with the option of spoken language acquisition. However, no consensus exists on optimal interventions for spoken language development. To conduct a systematic review of the effectiveness of early sign and oral language intervention compared with oral language intervention only for children with permanent hearing loss. An a priori protocol was developed. Electronic databases (eg, Medline, Embase, CINAHL) from 1995 to June 2013 and gray literature sources were searched. Studies in English and French were included. Two reviewers screened potentially relevant articles. Outcomes of interest were measures of auditory, vocabulary, language, and speech production skills. All data collection and risk of bias assessments were completed and then verified by a second person. Grades of Recommendation, Assessment, Development, and Evaluation (GRADE) was used to judge the strength of evidence. Eleven cohort studies met inclusion criteria, of which 8 included only children with severe to profound hearing loss with cochlear implants. Language development was the most frequently reported outcome. Other reported outcomes included speech and speech perception. Several measures and metrics were reported across studies, and descriptions of interventions were sometimes unclear. Very limited, and hence insufficient, high-quality evidence exists to determine whether sign language in combination with oral language is more effective than oral language therapy alone. More research is needed to supplement the evidence base. Copyright © 2016 by the American Academy of Pediatrics.

  11. Spoken Word Recognition in Adolescents with Autism Spectrum Disorders and Specific Language Impairment

    Science.gov (United States)

    Loucas, Tom; Riches, Nick; Baird, Gillian; Pickles, Andrew; Simonoff, Emily; Chandler, Susie; Charman, Tony

    2013-01-01

    Spoken word recognition, during gating, appears intact in specific language impairment (SLI). This study used gating to investigate the process in adolescents with autism spectrum disorders plus language impairment (ALI). Adolescents with ALI, SLI, and typical language development (TLD), matched on nonverbal IQ listened to gated words that varied…

  12. Semantic Fluency in Deaf Children Who Use Spoken and Signed Language in Comparison with Hearing Peers

    Science.gov (United States)

    Marshall, C. R.; Jones, A.; Fastelli, A.; Atkinson, J.; Botting, N.; Morgan, G.

    2018-01-01

    Background: Deafness has an adverse impact on children's ability to acquire spoken languages. Signed languages offer a more accessible input for deaf children, but because the vast majority are born to hearing parents who do not sign, their early exposure to sign language is limited. Deaf children as a whole are therefore at high risk of language…

  13. Improving Spoken Language Outcomes for Children With Hearing Loss: Data-driven Instruction.

    Science.gov (United States)

    Douglas, Michael

    2016-02-01

    To assess the effects of data-driven instruction (DDI) on spoken language outcomes of children with cochlear implants and hearing aids. Retrospective, matched-pairs comparison of post-treatment speech/language data of children who did and did not receive DDI. Private, spoken-language preschool for children with hearing loss. Eleven matched pairs of children with cochlear implants who attended the same spoken language preschool. Groups were matched for age of hearing device fitting, time in the program, degree of predevice fitting hearing loss, sex, and age at testing. Daily informal language samples were collected and analyzed over a 2-year period, per preschool protocol. Annual informal and formal spoken language assessments in articulation, vocabulary, and omnibus language were administered at the end of three time intervals: baseline, end of year one, and end of year two. The primary outcome measures were total raw score performance of spontaneous utterance sentence types and syntax element use as measured by the Teacher Assessment of Spoken Language (TASL). In addition, standardized assessments (the Clinical Evaluation of Language Fundamentals–Preschool Version 2 (CELF-P2), the Expressive One-Word Picture Vocabulary Test (EOWPVT), the Receptive One-Word Picture Vocabulary Test (ROWPVT), and the Goldman-Fristoe Test of Articulation 2 (GFTA2)) were also administered and compared with the control group. The DDI group demonstrated significantly higher raw scores on the TASL each year of the study. The DDI group also achieved significantly higher scores for total language on the CELF-P and expressive vocabulary on the EOWPVT, but not for articulation or receptive vocabulary. Post-hoc assessment revealed that 78% of the students in the DDI group achieved scores in the average range compared with 59% in the control group. The preliminary results of this study support further investigation regarding DDI to investigate whether this method can consistently

  14. Auditory and verbal memory predictors of spoken language skills in children with cochlear implants.

    Science.gov (United States)

    de Hoog, Brigitte E; Langereis, Margreet C; van Weerdenburg, Marjolijn; Keuning, Jos; Knoors, Harry; Verhoeven, Ludo

    2016-10-01

    Large variability in individual spoken language outcomes remains a persistent finding in the group of children with cochlear implants (CIs), particularly in their grammatical development. In the present study, we examined the extent of delay in lexical and morphosyntactic spoken language levels of children with CIs as compared to those of a normative sample of age-matched children with normal hearing. Furthermore, the predictive value of auditory and verbal memory factors in the spoken language performance of implanted children was analyzed. Thirty-nine profoundly deaf children with CIs were assessed using a test battery including measures of lexical, grammatical, auditory and verbal memory tests. Furthermore, child-related demographic characteristics were taken into account. The majority of the children with CIs did not reach age-equivalent lexical and morphosyntactic language skills. Multiple linear regression analyses revealed that lexical spoken language performance in children with CIs was best predicted by age at testing, phoneme perception, and auditory word closure. The morphosyntactic language outcomes of the CI group were best predicted by lexicon, auditory word closure, and auditory memory for words. Qualitatively good speech perception skills appear to be crucial for lexical and grammatical development in children with CIs. Furthermore, strongly developed vocabulary skills and verbal memory abilities predict morphosyntactic language skills. Copyright © 2016 Elsevier Ltd. All rights reserved.
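    A multiple linear regression of the kind reported above can be set up as in the following sketch, which uses statsmodels on invented data; the predictor names mirror those in the abstract (age at testing, phoneme perception, auditory word closure), but the numbers, effect sizes, and sample are placeholders rather than the study's data.

        # Hedged sketch: multiple linear regression predicting a lexical spoken-language
        # score from age at testing, phoneme perception, and auditory word closure.
        import numpy as np
        import pandas as pd
        import statsmodels.api as sm

        rng = np.random.default_rng(0)
        n = 39                                               # same group size as the study, data invented
        df = pd.DataFrame({
            "age_at_testing": rng.uniform(5, 12, n),         # years (hypothetical)
            "phoneme_perception": rng.uniform(40, 100, n),   # percent correct (hypothetical)
            "auditory_word_closure": rng.uniform(1, 15, n),  # raw score (hypothetical)
        })
        df["lexical_score"] = (2.0 * df["age_at_testing"]
                               + 0.5 * df["phoneme_perception"]
                               + 1.5 * df["auditory_word_closure"]
                               + rng.normal(0, 5, n))        # invented outcome with noise

        X = sm.add_constant(df[["age_at_testing", "phoneme_perception", "auditory_word_closure"]])
        model = sm.OLS(df["lexical_score"], X).fit()
        print(model.summary())                               # coefficients, R-squared, p-values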

  15. A real-time spoken-language system for interactive problem-solving, combining linguistic and statistical technology for improved spoken language understanding

    Science.gov (United States)

    Moore, Robert C.; Cohen, Michael H.

    1993-09-01

    Under this effort, SRI has developed spoken-language technology for interactive problem solving, featuring real-time performance for up to several thousand word vocabularies, high semantic accuracy, habitability within the domain, and robustness to many sources of variability. Although the technology is suitable for many applications, efforts to date have focused on developing an Air Travel Information System (ATIS) prototype application. SRI's ATIS system has been evaluated in four ARPA benchmark evaluations, and has consistently been at or near the top in performance. These achievements are the result of SRI's technical progress in speech recognition, natural-language processing, and speech and natural-language integration.

  16. SPOKEN-LANGUAGE FEATURES IN CASUAL CONVERSATION: A Case of EFL Learners' Casual Conversation

    Directory of Open Access Journals (Sweden)

    Aris Novi

    2017-12-01

    Spoken text differs from written text in its context dependency, turn-taking organization, and dynamic structure. EFL learners, however, sometimes find it difficult to produce the typical characteristics of spoken language, particularly in casual talk. When they are asked to conduct a conversation, some of them tend to be script-based, which is considered unnatural. Using the theory of Thornburry (2005), this paper aims to analyze the characteristics of spoken language in casual conversation, which cover spontaneity, interactivity, interpersonality, and coherence. The study used discourse analysis to reveal these four features in the turns and moves of three casual conversations. The findings indicate that not all sub-features were used in the conversations: the spontaneity features were used 132 times, the interactivity features 1081 times, the interpersonality features 257 times, and the coherence (negotiation) features 526 times. The results also show that some participants dominantly produced certain sub-features naturally, while others did not. These findings are expected to provide a model of how spoken interaction can be carried out. More importantly, they could raise English teachers' and lecturers' awareness of teaching the features of spoken language, so that students can develop their communicative competence as native speakers of English do.

  17. Intervention Effects on Spoken-Language Outcomes for Children with Autism: A Systematic Review and Meta-Analysis

    Science.gov (United States)

    Hampton, L. H.; Kaiser, A. P.

    2016-01-01

    Background: Although spoken-language deficits are not core to an autism spectrum disorder (ASD) diagnosis, many children with ASD do present with delays in this area. Previous meta-analyses have assessed the effects of intervention on reducing autism symptomatology, but have not determined if intervention improves spoken language. This analysis…

  18. Word reading skill predicts anticipation of upcoming spoken language input: a study of children developing proficiency in reading.

    Science.gov (United States)

    Mani, Nivedita; Huettig, Falk

    2014-10-01

    Despite the efficiency with which language users typically process spoken language, a growing body of research finds substantial individual differences in both the speed and accuracy of spoken language processing potentially attributable to participants' literacy skills. Against this background, the current study examined the role of word reading skill in listeners' anticipation of upcoming spoken language input in children at the cusp of learning to read; if reading skills affect predictive language processing, then children at this stage of literacy acquisition should be most susceptible to the effects of reading skills on spoken language processing. We tested 8-year-olds on their prediction of upcoming spoken language input in an eye-tracking task. Although children, as in previous studies to date, were successfully able to anticipate upcoming spoken language input, there was a strong positive correlation between children's word reading skills (but not their pseudo-word reading and meta-phonological awareness or their spoken word recognition skills) and their prediction skills. We suggest that these findings are most compatible with the notion that the process of learning orthographic representations during reading acquisition sharpens pre-existing lexical representations, which in turn also supports anticipation of upcoming spoken words. Copyright © 2014 Elsevier Inc. All rights reserved.

  19. Grammatical Deviations in the Spoken and Written Language of Hebrew-Speaking Children With Hearing Impairments.

    Science.gov (United States)

    Tur-Kaspa, Hana; Dromi, Esther

    2001-04-01

    The present study reports a detailed analysis of written and spoken language samples of Hebrew-speaking children aged 11-13 years who are deaf. It focuses on the description of various grammatical deviations in the two modalities. Participants were 13 students with hearing impairments (HI) attending special classrooms integrated into two elementary schools in Tel Aviv, Israel, and 9 students with normal hearing (NH) in regular classes in these same schools. Spoken and written language samples were collected from all participants using the same five preplanned elicitation probes. Students with HI were found to display significantly more grammatical deviations than their NH peers in both their spoken and written language samples. Most importantly, between-modality differences were noted. The participants with HI exhibited significantly more grammatical deviations in their written language samples than in their spoken samples. However, the distribution of grammatical deviations across categories was similar in the two modalities. The most common grammatical deviations in order of their frequency were failure to supply obligatory morphological markers, failure to mark grammatical agreement, and the omission of a major syntactic constituent in a sentence. Word order violations were rarely recorded in the Hebrew samples. Performance differences in the two modalities encourage clinicians and teachers to facilitate target linguistic forms in diverse communication contexts. Furthermore, the identification of linguistic targets for intervention must be based on the unique grammatical structure of the target language.

  20. What Comes First, What Comes Next: Information Packaging in Written and Spoken Language

    Directory of Open Access Journals (Sweden)

    Vladislav Smolka

    2017-07-01

    The paper explores similarities and differences in the strategies of structuring information at sentence level in spoken and written language, respectively. In particular, it is concerned with the position of the rheme in the sentence in the two different modalities of language, and with the application and correlation of the end-focus and the end-weight principles. The assumption is that while there is a general tendency in both written and spoken language to place the focus in or close to the final position, owing to the limitations imposed by short-term memory capacity (and possibly by other factors), for the sake of easy processability it may occasionally be more felicitous in spoken language to place the rhematic element in the initial position, or at least close to the beginning of the sentence. The paper aims to identify differences in the function of selected grammatical structures in written and spoken language, respectively, and to point out circumstances under which initial focus is a convenient alternative to the usual end-focus principle.

  1. Cognitive Predictors of Spoken Word Recognition in Children With and Without Developmental Language Disorders.

    Science.gov (United States)

    Evans, Julia L; Gillam, Ronald B; Montgomery, James W

    2018-05-10

    This study examined the influence of cognitive factors on spoken word recognition in children with developmental language disorder (DLD) and typically developing (TD) children. Participants included 234 children (aged 7;0-11;11 years;months), 117 with DLD and 117 TD children, propensity matched for age, gender, socioeconomic status, and maternal education. Children completed a series of standardized assessment measures, a forward gating task, a rapid automatic naming task, and a series of tasks designed to examine cognitive factors hypothesized to influence spoken word recognition, including phonological working memory, updating, attention shifting, and interference inhibition. Spoken word recognition for both initial and final accept gate points did not differ for children with DLD and TD controls after controlling for target word knowledge in both groups. The 2 groups also did not differ on measures of updating, attention switching, and interference inhibition. Despite the lack of difference on these measures, for children with DLD, attention shifting and interference inhibition were significant predictors of spoken word recognition, whereas updating and receptive vocabulary were significant predictors of speed of spoken word recognition for the children in the TD group. Contrary to expectations, after controlling for target word knowledge, spoken word recognition did not differ for children with DLD and TD controls; however, the cognitive processing factors that influenced children's ability to recognize the target word in a stream of speech differed qualitatively for children with and without DLDs.
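    Propensity matching of the kind described above is commonly implemented by fitting a logistic regression of group membership on the matching covariates and pairing each child with the nearest propensity score in the other group. The sketch below shows a greedy 1:1 version of that procedure on invented data; the column names and matching rule are illustrative assumptions, not the authors' exact method.

        # Hedged sketch: greedy 1:1 propensity-score matching on age, gender, SES,
        # and maternal education, using a logistic-regression propensity model.
        import numpy as np
        import pandas as pd
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(1)
        n = 200
        df = pd.DataFrame({
            "dld": rng.integers(0, 2, n),                    # 1 = developmental language disorder (invented)
            "age_months": rng.uniform(84, 143, n),
            "gender": rng.integers(0, 2, n),
            "ses": rng.normal(0, 1, n),
            "maternal_edu": rng.integers(10, 19, n),
        })

        covars = ["age_months", "gender", "ses", "maternal_edu"]
        df["ps"] = LogisticRegression(max_iter=1000).fit(df[covars], df["dld"]).predict_proba(df[covars])[:, 1]

        treated = df[df["dld"] == 1]
        controls = df[df["dld"] == 0].copy()
        pairs = []
        for i, row in treated.iterrows():
            if controls.empty:
                break
            j = (controls["ps"] - row["ps"]).abs().idxmin()  # nearest available control
            pairs.append((i, j))
            controls = controls.drop(index=j)                # match without replacement
        print(len(pairs), "matched pairs")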

  2. Self-Ratings of Spoken Language Dominance: A Multilingual Naming Test (MINT) and Preliminary Norms for Young and Aging Spanish-English Bilinguals

    Science.gov (United States)

    Gollan, Tamar H.; Weissberger, Gali H.; Runnqvist, Elin; Montoya, Rosa I.; Cera, Cynthia M.

    2012-01-01

    This study investigated correspondence between different measures of bilingual language proficiency contrasting self-report, proficiency interview, and picture naming skills. Fifty-two young (Experiment 1) and 20 aging (Experiment 2) Spanish-English bilinguals provided self-ratings of proficiency level, were interviewed for spoken proficiency, and…

  3. THE IMPLEMENTATION OF COMMUNICATIVE LANGUAGE TEACHING (CLT) TO TEACH SPOKEN RECOUNTS IN SENIOR HIGH SCHOOL

    Directory of Open Access Journals (Sweden)

    Eri Rusnawati

    2016-10-01

    The aim of this study was to describe the implementation of the Communicative Language Teaching (CLT) method for teaching spoken recounts. The study examined qualitative data and describes phenomena occurring in the classroom. The data were the students' behaviour and responses during spoken recount lessons taught with the CLT method. The subjects were the 34 students of class X at SMA Negeri 1 Kuaro. Observations and interviews were conducted to collect data on the teaching of spoken recounts through three activities (presentation, role-play, and carrying out procedures). The study found, among other things, that CLT improved the students' speaking ability in recount lessons. Based on the improvement charts, it is concluded that the students' grammar, vocabulary, pronunciation, fluency, and overall performance improved, meaning that their spoken recount performance increased. Had the presentation been placed at the end of the sequence of activities, the students' spoken recount performance would have been even better. The conclusion is that the implementation of the CLT method and its three practices contributed to the improvement of the students' speaking ability in recount lessons, and that CLT moreover led them to have the courage to construct meaningful communication with confidence. Keywords: Communicative Language Teaching (CLT), recount, speaking, student responses

  4. On-Line Syntax: Thoughts on the Temporality of Spoken Language

    Science.gov (United States)

    Auer, Peter

    2009-01-01

    One fundamental difference between spoken and written language has to do with the "linearity" of speaking in time, in that the temporal structure of speaking is inherently the outcome of an interactive process between speaker and listener. But despite the status of "linearity" as one of Saussure's fundamental principles, in practice little more…

  5. Acquisition of graphic communication by a young girl without comprehension of spoken language.

    Science.gov (United States)

    von Tetzchner, S; Øvreeide, K D; Jørgensen, K K; Ormhaug, B M; Oxholm, B; Warme, R

    To describe a graphic-mode communication intervention involving a girl with intellectual impairment and autism who did not develop comprehension of spoken language. The aim was to teach graphic-mode vocabulary that reflected her interests, preferences, and the activities and routines of her daily life, by providing sufficient cues to the meanings of the graphic representations so that she would not need to comprehend spoken instructions. An individual case study design was selected, including the use of written records, participant observation, and registration of the girl's graphic vocabulary and use of graphic signs and other communicative expressions. While the girl's comprehension (and hence use) of spoken language remained lacking over a 3-year period, she acquired an active use of over 80 photographs and pictograms. The girl was able to cope better with the cognitive and attentional requirements of graphic communication than those of spoken language and manual signs, which had been focused in earlier interventions. Her achievements demonstrate that it is possible for communication-impaired children to learn to use an augmentative and alternative communication system without speech comprehension, provided the intervention utilizes functional strategies and non-language cues to the meaning of the graphic representations that are taught.

  6. Expected Test Scores for Preschoolers with a Cochlear Implant Who Use Spoken Language

    Science.gov (United States)

    Nicholas, Johanna G.; Geers, Ann E.

    2008-01-01

    Purpose: The major purpose of this study was to provide information about expected spoken language skills of preschool-age children who are deaf and who use a cochlear implant. A goal was to provide "benchmarks" against which those skills could be compared, for a given age at implantation. We also examined whether parent-completed…

  7. Personality Structure in the Trait Lexicon of Hindi, a Major Language Spoken in India

    NARCIS (Netherlands)

    Singh, Jitendra K.; Misra, Girishwar; De Raad, Boele

    2013-01-01

    The psycho-lexical approach is extended to Hindi, a major language spoken in India. From both the dictionary and from Hindi novels, a huge set of personality descriptors was put together, ultimately reduced to a manageable set of 295 trait terms. Both self and peer ratings were collected on those

  8. Developing and Testing EVALOE: A Tool for Assessing Spoken Language Teaching and Learning in the Classroom

    Science.gov (United States)

    Gràcia, Marta; Vega, Fàtima; Galván-Bovaira, Maria José

    2015-01-01

    Broadly speaking, the teaching of spoken language in Spanish schools has not been approached in a systematic way. Changes in school practices are needed in order to allow all children to become competent speakers and to understand and construct oral texts that are appropriate in different contexts and for different audiences both inside and…

  9. A Spoken-Language Intervention for School-Aged Boys with Fragile X Syndrome

    Science.gov (United States)

    McDuffie, Andrea; Machalicek, Wendy; Bullard, Lauren; Nelson, Sarah; Mello, Melissa; Tempero-Feigles, Robyn; Castignetti, Nancy; Abbeduto, Leonard

    2016-01-01

    Using a single case design, a parent-mediated spoken-language intervention was delivered to three mothers and their school-aged sons with fragile X syndrome, the leading inherited cause of intellectual disability. The intervention was embedded in the context of shared storytelling using wordless picture books and targeted three empirically derived…

  10. Phonotactic spoken language identification with limited training data

    CSIR Research Space (South Africa)

    Peche, M

    2007-08-01

    The authors investigate the addition of a new language, for which limited resources are available, to a phonotactic language identification system. Two classes of approaches are studied: in the first class, only existing phonetic recognizers...
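    To make the phonotactic approach concrete, the sketch below represents each utterance by phone n-gram counts (as would be produced by an upstream phone recognizer) and trains a simple classifier over those counts. The toy phone strings and the choice of classifier are illustrative assumptions and do not reproduce the system investigated in the report.

        # Hedged sketch of phonotactic language identification: each utterance is the
        # phone string produced by an upstream phone recognizer, represented here by
        # phone n-gram counts and classified with a simple multinomial model.
        from sklearn.feature_extraction.text import CountVectorizer
        from sklearn.naive_bayes import MultinomialNB
        from sklearn.pipeline import make_pipeline

        # Toy, invented phone-string decodings with language labels.
        train_phones = [
            "s p o k a n a t e",                             # "language A"
            "t a k o n a s e p a",                           # "language A"
            "sh ih p t ax n k",                              # "language B"
            "k ih t ax n sh p",                              # "language B"
        ]
        train_labels = ["A", "A", "B", "B"]

        model = make_pipeline(
            CountVectorizer(analyzer="word", token_pattern=r"\S+", ngram_range=(1, 3)),
            MultinomialNB(),
        )
        model.fit(train_phones, train_labels)
        print(model.predict(["p o k a n a s e"]))            # expected to favour "A"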

  11. Factors Influencing Verbal Intelligence and Spoken Language in Children with Phenylketonuria.

    Science.gov (United States)

    Soleymani, Zahra; Keramati, Nasrin; Rohani, Farzaneh; Jalaei, Shohre

    2015-05-01

    To determine verbal intelligence and spoken language of children with phenylketonuria and to study the effect of age at diagnosis and phenylalanine plasma level on these abilities. Cross-sectional. Children with phenylketonuria were recruited from pediatric hospitals in 2012. Normal control subjects were recruited from kindergartens in Tehran. 30 phenylketonuria and 42 control subjects aged 4-6.5 years. Skills were compared between 3 phenylketonuria groups categorized by age at diagnosis/treatment, and between the phenylketonuria and control groups. Scores on Wechsler Preschool and Primary Scale of Intelligence for verbal and total intelligence, and Test of Language Development-Primary, third edition for spoken language, listening, speaking, semantics, syntax, and organization. The performance of control subjects was significantly better than that of early-treated subjects for all composite quotients from Test of Language Development and verbal intelligence (P < … phenylketonuria subjects.

  12. "Now We Have Spoken."

    Science.gov (United States)

    Zimmer, Patricia Moore

    2001-01-01

    Describes the author's experiences directing a play translated and acted in Korean. Notes that she had to get familiar with the sound of the language spoken fluently, to see how an actor's thought is discerned when the verbal language is not understood. Concludes that so much of understanding and communication unfolds in ways other than with…

  13. Cochlear implants and spoken language processing abilities: Review and assessment of the literature

    OpenAIRE

    Peterson, Nathaniel R.; Pisoni, David B.; Miyamoto, Richard T.

    2010-01-01

    Cochlear implants (CIs) process sounds electronically and then transmit electric stimulation to the cochlea of individuals with sensorineural deafness, restoring some sensation of auditory perception. Many congenitally deaf CI recipients achieve a high degree of accuracy in speech perception and develop near-normal language skills. Post-lingually deafened implant recipients often regain the ability to understand and use spoken language with or without the aid of visual input (i.e. lip reading...

  14. The effects of sign language on spoken language acquisition in children with hearing loss: a systematic review protocol.

    Science.gov (United States)

    Fitzpatrick, Elizabeth M; Stevens, Adrienne; Garritty, Chantelle; Moher, David

    2013-12-06

    Permanent childhood hearing loss affects 1 to 3 per 1000 children and frequently disrupts typical spoken language acquisition. Early identification of hearing loss through universal newborn hearing screening and the use of new hearing technologies including cochlear implants make spoken language an option for most children. However, there is no consensus on what constitutes optimal interventions for children when spoken language is the desired outcome. Intervention and educational approaches ranging from oral language only to oral language combined with various forms of sign language have evolved. Parents are therefore faced with important decisions in the first months of their child's life. This article presents the protocol for a systematic review of the effects of using sign language in combination with oral language intervention on spoken language acquisition. Studies addressing early intervention will be selected in which therapy involving oral language intervention and any form of sign language or sign support is used. Comparison groups will include children in early oral language intervention programs without sign support. The primary outcomes of interest to be examined include all measures of auditory, vocabulary, language, speech production, and speech intelligibility skills. We will include randomized controlled trials, controlled clinical trials, and other quasi-experimental designs that include comparator groups as well as prospective and retrospective cohort studies. Case-control, cross-sectional, case series, and case studies will be excluded. Several electronic databases will be searched (for example, MEDLINE, EMBASE, CINAHL, PsycINFO) as well as grey literature and key websites. We anticipate that a narrative synthesis of the evidence will be required. We will carry out meta-analysis for outcomes if clinical similarity, quantity and quality permit quantitative pooling of data. We will conduct subgroup analyses if possible according to severity

  15. Retinoic acid signaling: a new piece in the spoken language puzzle

    Directory of Open Access Journals (Sweden)

    Jon-Ruben van Rhijn

    2015-11-01

    Speech requires precise motor control and rapid sequencing of highly complex vocal musculature. Despite its complexity, most people produce spoken language effortlessly. This is due to activity in distributed neuronal circuitry, including cortico-striato-thalamic loops that control speech-motor output. Understanding the neurogenetic mechanisms that encode these pathways will shed light on how humans can effortlessly and innately use spoken language and could elucidate what goes wrong in speech-language disorders. FOXP2 was the first single gene identified to cause a speech and language disorder. Individuals with FOXP2 mutations display a severe speech deficit that also includes receptive and expressive language impairments. The underlying neuro-molecular mechanisms controlled by FOXP2, which will give insight into our capacity for speech-motor control, are only beginning to be unraveled. Recently FOXP2 was found to regulate genes involved in retinoic acid signaling and to modify the cellular response to retinoic acid, a key regulator of brain development. Herein we explore the evidence that FOXP2 and retinoic acid signaling function in the same pathways. We present evidence at molecular, cellular and behavioral levels that suggests an interplay between FOXP2 and retinoic acid that may be important for fine motor control and speech-motor output. We propose that retinoic acid signaling is an exciting new angle from which to investigate how neurogenetic mechanisms can contribute to the (spoken) language-ready brain.

  16. Predictors of spoken language development following pediatric cochlear implantation.

    Science.gov (United States)

    Boons, Tinne; Brokx, Jan P L; Dhooge, Ingeborg; Frijns, Johan H M; Peeraer, Louis; Vermeulen, Anneke; Wouters, Jan; van Wieringen, Astrid

    2012-01-01

    Although deaf children with cochlear implants (CIs) are able to develop good language skills, the large variability in outcomes remains a significant concern. The first aim of this study was to evaluate language skills in children with CIs to establish benchmarks. The second aim was to make an estimation of the optimal age at implantation to provide maximal opportunities for the child to achieve good language skills afterward. The third aim was to gain more insight into the causes of variability to set recommendations for optimizing the rehabilitation process of prelingually deaf children with CIs. Receptive and expressive language development of 288 children who received CIs by age five was analyzed in a retrospective multicenter study. Outcome measures were language quotients (LQs) on the Reynell Developmental Language Scales and Schlichting Expressive Language Test at 1, 2, and 3 years after implantation. Independent predictive variables were nine child-related, environmental, and auditory factors. A series of multiple regression analyses determined the amount of variance in expressive and receptive language outcomes attributable to each predictor when controlling for the other variables. Simple linear regressions with age at first fitting and independent samples t tests demonstrated that children implanted before the age of two performed significantly better on all tests than children who were implanted at an older age. The mean LQ was 0.78 with an SD of 0.18. A child with an LQ lower than 0.60 (= 0.78-0.18) within 3 years after implantation was labeled as a weak performer compared with other deaf children implanted before the age of two. Contralateral stimulation with a second CI or a hearing aid and the absence of additional disabilities were related to better language outcomes. The effect of environmental factors, comprising multilingualism, parental involvement, and communication mode increased over time. Three years after implantation, the total multiple

  17. How do doctors learn the spoken language of ...

    African Journals Online (AJOL)

    2009-07-01

    … and cultural metaphors of illness as part of language learning. The theory of … role. Even in a military setting, where soldiers learnt Korean or Spanish as part of … own language – a cross-cultural survey. Brit J Gen Pract …

  18. Predictors of Spoken Language Development Following Pediatric Cochlear Implantation

    NARCIS (Netherlands)

    Johan Frijns; prof. Dr. Louis Peeraer; van Wieringen; Ingeborg Dhooge; Vermeulen; Jan Brokx; Tinne Boons; Wouters

    2012-01-01

    Objectives: Although deaf children with cochlear implants (CIs) are able to develop good language skills, the large variability in outcomes remains a significant concern. The first aim of this study was to evaluate language skills in children with CIs to establish benchmarks. The second aim was to

  19. How and When Accentuation Influences Temporally Selective Attention and Subsequent Semantic Processing during On-Line Spoken Language Comprehension: An ERP Study

    Science.gov (United States)

    Li, Xiao-qing; Ren, Gui-qin

    2012-01-01

    An event-related brain potentials (ERP) experiment was carried out to investigate how and when accentuation influences temporally selective attention and subsequent semantic processing during on-line spoken language comprehension, and how the effect of accentuation on attention allocation and semantic processing changed with the degree of…

  20. Brain basis of phonological awareness for spoken language in children and its disruption in dyslexia.

    Science.gov (United States)

    Kovelman, Ioulia; Norton, Elizabeth S; Christodoulou, Joanna A; Gaab, Nadine; Lieberman, Daniel A; Triantafyllou, Christina; Wolf, Maryanne; Whitfield-Gabrieli, Susan; Gabrieli, John D E

    2012-04-01

    Phonological awareness, knowledge that speech is composed of syllables and phonemes, is critical for learning to read. Phonological awareness precedes and predicts successful transition from language to literacy, and weakness in phonological awareness is a leading cause of dyslexia, but the brain basis of phonological awareness for spoken language in children is unknown. We used functional magnetic resonance imaging to identify the neural correlates of phonological awareness using an auditory word-rhyming task in children who were typical readers or who had dyslexia (ages 7-13) and a younger group of kindergarteners (ages 5-6). Typically developing children, but not children with dyslexia, recruited left dorsolateral prefrontal cortex (DLPFC) when making explicit phonological judgments. Kindergarteners, who were matched to the older children with dyslexia on standardized tests of phonological awareness, also recruited left DLPFC. Left DLPFC may play a critical role in the development of phonological awareness for spoken language critical for reading and in the etiology of dyslexia.

  1. How sensory-motor systems impact the neural organization for language: direct contrasts between spoken and signed language

    Science.gov (United States)

    Emmorey, Karen; McCullough, Stephen; Mehta, Sonya; Grabowski, Thomas J.

    2014-01-01

    To investigate the impact of sensory-motor systems on the neural organization for language, we conducted an H2(15)O-PET study of sign and spoken word production (picture-naming) and an fMRI study of sign and audio-visual spoken language comprehension (detection of a semantically anomalous sentence) with hearing bilinguals who are native users of American Sign Language (ASL) and English. Directly contrasting speech and sign production revealed greater activation in bilateral parietal cortex for signing, while speaking resulted in greater activation in bilateral superior temporal cortex (STC) and right frontal cortex, likely reflecting auditory feedback control. Surprisingly, the language production contrast revealed a relative increase in activation in bilateral occipital cortex for speaking. We speculate that greater activation in visual cortex for speaking may actually reflect cortical attenuation when signing, which functions to distinguish self-produced from externally generated visual input. Directly contrasting speech and sign comprehension revealed greater activation in bilateral STC for speech and greater activation in bilateral occipital-temporal cortex for sign. Sign comprehension, like sign production, engaged bilateral parietal cortex to a greater extent than spoken language. We hypothesize that posterior parietal activation in part reflects processing related to spatial classifier constructions in ASL and that anterior parietal activation may reflect covert imitation that functions as a predictive model during sign comprehension. The conjunction analysis for comprehension revealed that both speech and sign bilaterally engaged the inferior frontal gyrus (with more extensive activation on the left) and the superior temporal sulcus, suggesting an invariant bilateral perisylvian language system. We conclude that surface level differences between sign and spoken languages should not be dismissed and are critical for understanding the neurobiology of language.

  2. Does it really matter whether students' contributions are spoken versus typed in an intelligent tutoring system with natural language?

    Science.gov (United States)

    D'Mello, Sidney K; Dowell, Nia; Graesser, Arthur

    2011-03-01

    There is the question of whether learning differs when students speak versus type their responses when interacting with intelligent tutoring systems with natural language dialogues. Theoretical bases exist for three contrasting hypotheses. The speech facilitation hypothesis predicts that spoken input will increase learning, whereas the text facilitation hypothesis predicts typed input will be superior. The modality equivalence hypothesis claims that learning gains will be equivalent. Previous experiments that tested these hypotheses were confounded by automated speech recognition systems with substantial error rates that were detected by learners. We addressed this concern in two experiments via a Wizard of Oz procedure, where a human intercepted the learner's speech and transcribed the utterances before submitting them to the tutor. The overall pattern of the results supported the following conclusions: (1) learning gains associated with spoken and typed input were on par and quantitatively higher than a no-intervention control, (2) participants' evaluations of the session were not influenced by modality, and (3) there were no modality effects associated with differences in prior knowledge and typing proficiency. Although the results generally support the modality equivalence hypothesis, highly motivated learners reported lower cognitive load and demonstrated increased learning when typing compared with speaking. We discuss the implications of our findings for intelligent tutoring systems that can support typed and spoken input.

  3. The interface between spoken and written language: developmental disorders.

    Science.gov (United States)

    Hulme, Charles; Snowling, Margaret J

    2014-01-01

    We review current knowledge about reading development and the origins of difficulties in learning to read. We distinguish between the processes involved in learning to decode print, and the processes involved in reading for meaning (reading comprehension). At a cognitive level, difficulties in learning to read appear to be predominantly caused by deficits in underlying oral language skills. The development of decoding skills appears to depend critically upon phonological language skills, and variations in phoneme awareness, letter-sound knowledge and rapid automatized naming each appear to be causally related to problems in learning to read. Reading comprehension difficulties in contrast appear to be critically dependent on a range of oral language comprehension skills (including vocabulary knowledge and grammatical, morphological and pragmatic skills).

  4. Loops of Spoken Language in Danish Broadcasting Corporation News

    DEFF Research Database (Denmark)

    le Fevre Jakobsen, Bjarne

    2012-01-01

    The tempo of Danish television news broadcasts has changed markedly over the past 40 years, while the language has essentially always been conservative, and remains so today. The development in the tempo of the broadcasts has gone through a number of phases from a newsreader in a rigid structure...

  5. IMPACT ON THE INDIGENOUS LANGUAGES SPOKEN IN NIGERIA ...

    African Journals Online (AJOL)

    In the face of globalisation, the scale of communication is increasing from being merely … capital goods and services across national frontiers involving too, political contexts of … auditory and audiovisual entertainment, the use of English dominates. The language … manners, entertainment, sports, the legal system, etc.

  6. Three-dimensional grammar in the brain: Dissociating the neural correlates of natural sign language and manually coded spoken language.

    Science.gov (United States)

    Jednoróg, Katarzyna; Bola, Łukasz; Mostowski, Piotr; Szwed, Marcin; Boguszewski, Paweł M; Marchewka, Artur; Rutkowski, Paweł

    2015-05-01

    In several countries natural sign languages were considered inadequate for education. Instead, new sign-supported systems were created, based on the belief that spoken/written language is grammatically superior. One such system called SJM (system językowo-migowy) preserves the grammatical and lexical structure of spoken Polish and since the 1960s has been extensively employed in schools and on TV. Nevertheless, the Deaf community avoids using SJM for everyday communication, its preferred language being PJM (polski język migowy), a natural sign language, structurally and grammatically independent of spoken Polish and featuring classifier constructions (CCs). Here, for the first time, we use fMRI to compare the neural bases of natural vs. devised communication systems. Deaf signers were presented with three types of signed sentences (SJM and PJM with/without CCs). Consistent with previous findings, PJM with CCs compared to either SJM or PJM without CCs recruited the parietal lobes. The reverse comparison revealed activation in the anterior temporal lobes, suggesting increased semantic combinatory processes in lexical sign comprehension. Finally, PJM compared with SJM engaged left posterior superior temporal gyrus and anterior temporal lobe, areas crucial for sentence-level speech comprehension. We suggest that activity in these two areas reflects greater processing efficiency for naturally evolved sign language. Copyright © 2015 Elsevier Ltd. All rights reserved.

  7. A step beyond local observations with a dialog aware bidirectional GRU network for Spoken Language Understanding

    OpenAIRE

    Vukotic , Vedran; Raymond , Christian; Gravier , Guillaume

    2016-01-01

    Architectures of Recurrent Neural Networks (RNN) have recently become a very popular choice for Spoken Language Understanding (SLU) problems; however, they represent a big family of different architectures that can furthermore be combined to form more complex neural networks. In this work, we compare different recurrent networks, such as simple Recurrent Neural Networks (RNN), Long Short-Term Memory (LSTM) networks, Gated Recurrent Units (GRU) and their bidirectional versions,...
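    As a rough illustration of the kind of recurrent architecture mentioned above, below is a minimal sketch (not the authors' model; the vocabulary size, dimensions, and number of slot labels are placeholder values) of a bidirectional GRU used as a slot tagger for SLU, written in PyTorch.

```python
# Minimal sketch of a bidirectional GRU slot tagger for Spoken Language
# Understanding. All sizes are illustrative placeholders.
import torch
import torch.nn as nn

class BiGRUTagger(nn.Module):
    def __init__(self, vocab_size=5000, num_slots=20, emb_dim=100, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # the bidirectional GRU reads the word sequence left-to-right and right-to-left
        self.gru = nn.GRU(emb_dim, hidden_dim, batch_first=True, bidirectional=True)
        # each time step is classified into one slot label (e.g. BIO tags)
        self.out = nn.Linear(2 * hidden_dim, num_slots)

    def forward(self, word_ids):              # word_ids: (batch, seq_len)
        states, _ = self.gru(self.embed(word_ids))
        return self.out(states)               # (batch, seq_len, num_slots)

# Toy usage: one utterance of 6 word ids, cross-entropy over random slot labels.
model = BiGRUTagger()
tokens = torch.randint(0, 5000, (1, 6))
logits = model(tokens)
loss = nn.functional.cross_entropy(logits.view(-1, 20), torch.randint(0, 20, (6,)))
loss.backward()
```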

  8. Audiovisual spoken word recognition as a clinical criterion for sensory aids efficiency in Persian-language children with hearing loss.

    Science.gov (United States)

    Oryadi-Zanjani, Mohammad Majid; Vahab, Maryam; Bazrafkan, Mozhdeh; Haghjoo, Asghar

    2015-12-01

    The aim of this study was to examine the role of audiovisual speech recognition as a clinical criterion of cochlear implant or hearing aid efficiency in Persian-language children with severe-to-profound hearing loss. This research was administered as a cross-sectional study. The sample size was 60 Persian 5-7 year old children. The assessment tool was one of the subtests of the Persian version of the Test of Language Development-Primary 3. The study included two experiments: auditory-only and audiovisual presentation conditions. The test was a closed set of 30 words that were orally presented by a speech-language pathologist. The scores for audiovisual word perception were significantly higher than for the auditory-only condition in the children with normal hearing (P < …) … audiovisual presentation conditions (P > 0.05). Audiovisual spoken word recognition can be applied as a clinical criterion to assess children with severe to profound hearing loss in order to determine whether a cochlear implant or hearing aid has been efficient for them; i.e., if a child with hearing impairment using a CI or HA obtains higher scores in audiovisual spoken word recognition than in the auditory-only condition, his/her auditory skills have developed appropriately due to an effective CI or HA as one of the main factors of auditory habilitation. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  9. Identification of four class emotion from Indonesian spoken language using acoustic and lexical features

    Science.gov (United States)

    Kasyidi, Fatan; Puji Lestari, Dessi

    2018-03-01

    One of the important aspects of human-to-human communication is understanding the emotion of each party. Recently, interaction between humans and computers has continued to develop, especially affective interaction, where emotion recognition is one of its important components. This paper presents our extended work on emotion recognition in Indonesian spoken language to identify four main classes of emotion: Happy, Sad, Angry, and Contentment, using a combination of acoustic/prosodic features and lexical features. We constructed an emotion speech corpus from Indonesian television talk shows, where the situations are as close as possible to natural conditions. After constructing the emotion speech corpus, the acoustic/prosodic and lexical features were extracted to train the emotion model. We employed several machine learning algorithms, such as Support Vector Machine (SVM), Naive Bayes, and Random Forest, to obtain the best model. The experimental results on the test data show that the best model achieves an F-measure of 0.447 using only the acoustic/prosodic features and an F-measure of 0.488 using both acoustic/prosodic and lexical features to recognize the four emotion classes with an SVM using an RBF kernel.
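    The classifier setup described above can be approximated in a few lines. The following sketch is not the authors' pipeline: random arrays stand in for the extracted acoustic/prosodic and lexical features, and it simply trains an RBF-kernel SVM on the concatenated feature vectors and reports macro F1 via cross-validation.

```python
# Minimal sketch: RBF-kernel SVM over concatenated acoustic and lexical
# features for four-class emotion recognition. Feature extraction is assumed
# to have been done already; the arrays below are random stand-ins.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_utterances = 200
acoustic = rng.normal(size=(n_utterances, 30))   # e.g. pitch/energy/duration statistics
lexical = rng.normal(size=(n_utterances, 50))    # e.g. emotion-word or bag-of-words counts
X = np.hstack([acoustic, lexical])
y = rng.integers(0, 4, size=n_utterances)        # 0=happy, 1=sad, 2=angry, 3=contentment

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
scores = cross_val_score(clf, X, y, cv=5, scoring="f1_macro")
print("macro F1 per fold:", scores.round(3))
```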

  10. Spoken Language Activation Alters Subsequent Sign Language Activation in L2 Learners of American Sign Language

    Science.gov (United States)

    Williams, Joshua T.; Newman, Sharlene D.

    2017-01-01

    A large body of literature has characterized unimodal monolingual and bilingual lexicons and how neighborhood density affects lexical access; however there have been relatively fewer studies that generalize these findings to bimodal (M2) second language (L2) learners of sign languages. The goal of the current study was to investigate parallel…

  11. The Attitudes and Motivation of Children towards Learning Rarely Spoken Foreign Languages: A Case Study from Saudi Arabia

    Science.gov (United States)

    Al-Nofaie, Haifa

    2018-01-01

    This article discusses the attitudes and motivations of two Saudi children learning Japanese as a foreign language (hence JFL), a language which is rarely spoken in the country. Studies regarding children's motivation for learning foreign languages that are not widely spread in their contexts in informal settings are scarce. The aim of the study…

  12. The missing foundation in teacher education: Knowledge of the structure of spoken and written language.

    Science.gov (United States)

    Moats, L C

    1994-01-01

    Reading research supports the necessity for directly teaching concepts about linguistic structure to beginning readers and to students with reading and spelling difficulties. In this study, experienced teachers of reading, language arts, and special education were tested to determine if they have the requisite awareness of language elements (e.g., phonemes, morphemes) and of how these elements are represented in writing (e.g., knowledge of sound-symbol correspondences). The results were surprisingly poor, indicating that even motivated and experienced teachers typically understand too little about spoken and written language structure to be able to provide sufficient instruction in these areas. The utility of language structure knowledge for instructional planning, for assessment of student progress, and for remediation of literacy problems is discussed. The teachers participating in the study subsequently took a course focusing on phonemic awareness training, spoken-written language relationships, and careful analysis of spelling and reading behavior in children. At the end of the course, the teachers judged this information to be essential for teaching and advised that it become a prerequisite for certification. Recommendations for requirements and content of teacher education programs are presented.

  13. Phonologic-graphemic transcodifier for Portuguese Language spoken in Brazil (PLB)

    Science.gov (United States)

    Fragadasilva, Francisco Jose; Saotome, Osamu; Deoliveira, Carlos Alberto

    An automatic speech-to-text transformation system, suited to an unlimited vocabulary, is presented. The basic acoustic units considered are the allophones of the phonemes of the Portuguese language spoken in Brazil (PLB). The input to the system is a phonetic sequence produced by a prior isolated-word recognition step applied to slowly spoken speech. In a first stage, the system eliminates phonetic elements that do not belong to PLB. Using knowledge sources such as phonetics, phonology, orthography, and a PLB-specific lexicon, the output is a sequence of written words, ordered by a probabilistic criterion, that constitutes the set of graphemic possibilities for that input sequence. Pronunciation differences between some regions of Brazil are considered, but only those that cause differences in the phonological transcription, because differences at the phonetic level are absorbed during the transformation to the phonological level. In the final stage, all possible written words are analyzed from an orthographic and grammatical point of view, to eliminate the incorrect ones.
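    To make the candidate-ranking stage concrete, here is a purely illustrative sketch with hypothetical phoneme-to-grapheme probabilities and a toy lexicon (none of it taken from the original system): it expands a phoneme sequence into graphemic candidates, scores each by a simple probability product, and filters the candidates against a written-word lexicon.

```python
# Illustrative phoneme-to-grapheme candidate ranking with hypothetical data.
from itertools import product

# Hypothetical phoneme-to-grapheme alternatives with unigram probabilities.
P2G = {
    "s": [("s", 0.6), ("ss", 0.2), ("ç", 0.2)],
    "e": [("e", 0.8), ("é", 0.2)],
    "t": [("t", 1.0)],
    "u": [("o", 0.7), ("u", 0.3)],
}
LEXICON = {"seto", "setu"}   # toy written-word lexicon

def candidates(phonemes):
    """Return written-word candidates ordered by descending probability."""
    options = [P2G[p] for p in phonemes]
    scored = []
    for combo in product(*options):
        word = "".join(g for g, _ in combo)
        prob = 1.0
        for _, p in combo:
            prob *= p
        scored.append((word, prob))
    # lexicon filtering stands in for the orthographic/grammatical stage
    in_lexicon = [(w, p) for w, p in scored if w in LEXICON]
    return sorted(in_lexicon or scored, key=lambda wp: wp[1], reverse=True)

print(candidates(["s", "e", "t", "u"]))   # e.g. [('seto', 0.336), ('setu', 0.144)]
```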

  14. The Beneficial Role of L1 Spoken Language Skills on Initial L2 Sign Language Learning: Cognitive and Linguistic Predictors of M2L2 Acquisition

    Science.gov (United States)

    Williams, Joshua T.; Darcy, Isabelle; Newman, Sharlene D.

    2017-01-01

    Understanding how language modality (i.e., signed vs. spoken) affects second language outcomes in hearing adults is important both theoretically and pedagogically, as it can determine the specificity of second language (L2) theory and inform how best to teach a language that uses a new modality. The present study investigated which…

  15. Spoken language development in oral preschool children with permanent childhood deafness.

    Science.gov (United States)

    Sarant, Julia Z; Holt, Colleen M; Dowell, Richard C; Rickards, Field W; Blamey, Peter J

    2009-01-01

    This article documented spoken language outcomes for preschool children with hearing loss and examined the relationships between language abilities and characteristics of children such as degree of hearing loss, cognitive abilities, age at entry to early intervention, and parent involvement in children's intervention programs. Participants were evaluated using a combination of the Child Development Inventory, the Peabody Picture Vocabulary Test, and the Preschool Clinical Evaluation of Language Fundamentals depending on their age at the time of assessment. Maternal education, cognitive ability, and family involvement were also measured. Over half of the children who participated in this study had poor language outcomes overall. No significant differences were found in language outcomes on any of the measures for children who were diagnosed early and those diagnosed later. Multiple regression analyses showed that family participation, degree of hearing loss, and cognitive ability significantly predicted language outcomes and together accounted for almost 60% of the variance in scores. This article highlights the importance of family participation in intervention programs to enable children to achieve optimal language outcomes. Further work may clarify the effects of early diagnosis on language outcomes for preschool children.

  16. Spoken language achieves robustness and evolvability by exploiting degeneracy and neutrality.

    Science.gov (United States)

    Winter, Bodo

    2014-10-01

    As with biological systems, spoken languages are strikingly robust against perturbations. This paper shows that languages achieve robustness in a way that is highly similar to many biological systems. For example, speech sounds are encoded via multiple acoustically diverse, temporally distributed and functionally redundant cues, characteristics that bear similarities to what biologists call "degeneracy". Speech is furthermore adequately characterized by neutrality, with many different tongue configurations leading to similar acoustic outputs, and different acoustic variants understood as the same by recipients. This highlights the presence of a large neutral network of acoustic neighbors for every speech sound. Such neutrality ensures that a steady backdrop of variation can be maintained without impeding communication, assuring that there is "fodder" for subsequent evolution. Thus, studying linguistic robustness is not only important for understanding how linguistic systems maintain their functioning upon the background of noise, but also for understanding the preconditions for language evolution. © 2014 WILEY Periodicals, Inc.

  17. Positive Emotional Language in the Final Words Spoken Directly Before Execution.

    Science.gov (United States)

    Hirschmüller, Sarah; Egloff, Boris

    2015-01-01

    How do individuals emotionally cope with the imminent real-world salience of mortality? DeWall and Baumeister, as well as Kashdan and colleagues, previously provided evidence that an increased use of positive emotion words serves to protect and defend against the mortality salience of one's own contemplated death. Although these studies provide important insights into the psychological dynamics of mortality salience, it remains an open question how individuals cope with the immense threat of mortality prior to their imminent actual death. In the present research, we therefore analyzed positivity in the final words spoken immediately before execution by 407 death row inmates in Texas. By using computerized quantitative text analysis as an objective measure of emotional language use, our results showed that the final words contained a significantly higher proportion of positive than negative emotion words. This emotional positivity was significantly higher than (a) positive emotion word usage base rates in spoken and written materials and (b) positive emotional language use with regard to contemplated death and attempted or actual suicide. Additional analyses showed that emotional positivity in final statements was associated with a greater frequency of language use that was indicative of self-references, social orientation, and present-oriented time focus as well as with fewer instances of cognitive-processing, past-oriented, and death-related word use. Taken together, our findings offer new insights into how individuals cope with the imminent real-world salience of mortality.
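    The word-count analysis described above can be sketched as follows, using tiny hypothetical emotion-word lists rather than a validated dictionary such as LIWC; the proportions returned correspond to the positive and negative emotion word rates discussed in the abstract.

```python
# Minimal sketch of positive/negative emotion word counting in a text.
# The word lists are hypothetical placeholders, not a validated lexicon.
import re

POSITIVE = {"love", "peace", "thank", "happy", "hope", "joy"}
NEGATIVE = {"hate", "fear", "sad", "angry", "pain", "sorry"}

def emotion_proportions(text):
    """Return the proportion of positive and negative emotion words."""
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return 0.0, 0.0
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    return pos / len(words), neg / len(words)

statement = "I love you all and I hope you find peace. Thank you."
print(emotion_proportions(statement))   # positive proportion exceeds negative proportion
```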

  18. Activating gender stereotypes during online spoken language processing: evidence from Visual World Eye Tracking.

    Science.gov (United States)

    Pyykkönen, Pirita; Hyönä, Jukka; van Gompel, Roger P G

    2010-01-01

    This study used the visual world eye-tracking method to investigate activation of general world knowledge related to gender-stereotypical role names in online spoken language comprehension in Finnish. The results showed that listeners activated gender stereotypes elaboratively in story contexts where this information was not needed to build coherence. Furthermore, listeners made additional inferences based on gender stereotypes to revise an already established coherence relation. Both results are consistent with mental models theory (e.g., Garnham, 2001). They are harder to explain by the minimalist account (McKoon & Ratcliff, 1992) which suggests that people limit inferences to those needed to establish coherence in discourse.

  19. The effect of written text on comprehension of spoken English as a foreign language.

    Science.gov (United States)

    Diao, Yali; Chandler, Paul; Sweller, John

    2007-01-01

    Based on cognitive load theory, this study investigated the effect of simultaneous written presentations on comprehension of spoken English as a foreign language. Learners' language comprehension was compared while they used 3 instructional formats: listening with auditory materials only, listening with a full, written script, and listening with simultaneous subtitled text. Listening with the presence of a script and subtitles led to better understanding of the scripted and subtitled passage but poorer performance on a subsequent auditory passage than listening with the auditory materials only. These findings indicated that where the intention was learning to listen, the use of a full script or subtitles had detrimental effects on the construction and automation of listening comprehension schemas.

  20. Resting-state low-frequency fluctuations reflect individual differences in spoken language learning

    Science.gov (United States)

    Deng, Zhizhou; Chandrasekaran, Bharath; Wang, Suiping; Wong, Patrick C.M.

    2016-01-01

    A major challenge in language learning studies is to identify objective, pre-training predictors of success. Variation in the low-frequency fluctuations (LFFs) of spontaneous brain activity measured by resting-state functional magnetic resonance imaging (RS-fMRI) has been found to reflect individual differences in cognitive measures. In the present study, we aimed to investigate the extent to which initial spontaneous brain activity is related to individual differences in spoken language learning. We acquired RS-fMRI data and subsequently trained participants on a sound-to-word learning paradigm in which they learned to use foreign pitch patterns (from Mandarin Chinese) to signal word meaning. We performed amplitude of spontaneous low-frequency fluctuation (ALFF) analysis, graph theory-based analysis, and independent component analysis (ICA) to identify functional components of the LFFs in the resting-state. First, we examined the ALFF as a regional measure and showed that regional ALFFs in the left superior temporal gyrus were positively correlated with learning performance, whereas ALFFs in the default mode network (DMN) regions were negatively correlated with learning performance. Furthermore, the graph theory-based analysis indicated that the degree and local efficiency of the left superior temporal gyrus were positively correlated with learning performance. Finally, the default mode network and several task-positive resting-state networks (RSNs) were identified via the ICA. The “competition” (i.e., negative correlation) between the DMN and the dorsal attention network was negatively correlated with learning performance. Our results demonstrate that a) spontaneous brain activity can predict future language learning outcome without prior hypotheses (e.g., selection of regions of interest – ROIs) and b) both regional dynamics and network-level interactions in the resting brain can account for individual differences in future spoken language learning success
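    For readers unfamiliar with the ALFF measure used in this and the following record, a minimal single-voxel sketch is shown below; the TR and band limits are typical defaults, not necessarily the study's preprocessing choices. ALFF is taken here as the mean Fourier amplitude of the demeaned time series within the 0.01-0.08 Hz band.

```python
# Minimal sketch: amplitude of low-frequency fluctuations (ALFF) for a
# single voxel time series, using typical TR and band limits.
import numpy as np

def alff(timeseries, tr=2.0, low=0.01, high=0.08):
    """Mean spectral amplitude of the demeaned series in the low-frequency band."""
    ts = timeseries - timeseries.mean()
    freqs = np.fft.rfftfreq(len(ts), d=tr)
    amplitude = np.abs(np.fft.rfft(ts)) / len(ts)
    band = (freqs >= low) & (freqs <= high)
    return amplitude[band].mean()

rng = np.random.default_rng(1)
voxel = rng.normal(size=240)      # e.g. 240 volumes at TR = 2 s (8 minutes)
print(alff(voxel))
```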

  1. Resting-state low-frequency fluctuations reflect individual differences in spoken language learning.

    Science.gov (United States)

    Deng, Zhizhou; Chandrasekaran, Bharath; Wang, Suiping; Wong, Patrick C M

    2016-03-01

    A major challenge in language learning studies is to identify objective, pre-training predictors of success. Variation in the low-frequency fluctuations (LFFs) of spontaneous brain activity measured by resting-state functional magnetic resonance imaging (RS-fMRI) has been found to reflect individual differences in cognitive measures. In the present study, we aimed to investigate the extent to which initial spontaneous brain activity is related to individual differences in spoken language learning. We acquired RS-fMRI data and subsequently trained participants on a sound-to-word learning paradigm in which they learned to use foreign pitch patterns (from Mandarin Chinese) to signal word meaning. We performed amplitude of spontaneous low-frequency fluctuation (ALFF) analysis, graph theory-based analysis, and independent component analysis (ICA) to identify functional components of the LFFs in the resting-state. First, we examined the ALFF as a regional measure and showed that regional ALFFs in the left superior temporal gyrus were positively correlated with learning performance, whereas ALFFs in the default mode network (DMN) regions were negatively correlated with learning performance. Furthermore, the graph theory-based analysis indicated that the degree and local efficiency of the left superior temporal gyrus were positively correlated with learning performance. Finally, the default mode network and several task-positive resting-state networks (RSNs) were identified via the ICA. The "competition" (i.e., negative correlation) between the DMN and the dorsal attention network was negatively correlated with learning performance. Our results demonstrate that a) spontaneous brain activity can predict future language learning outcome without prior hypotheses (e.g., selection of regions of interest--ROIs) and b) both regional dynamics and network-level interactions in the resting brain can account for individual differences in future spoken language learning success

  2. Does Discourse Congruence Influence Spoken Language Comprehension before Lexical Association? Evidence from Event-Related Potentials

    Science.gov (United States)

    Boudewyn, Megan A.; Gordon, Peter C.; Long, Debra; Polse, Lara; Swaab, Tamara Y.

    2011-01-01

    The goal of this study was to examine how lexical association and discourse congruence affect the time course of processing incoming words in spoken discourse. In an ERP norming study, we presented prime-target pairs in the absence of a sentence context to obtain a baseline measure of lexical priming. We observed a typical N400 effect when participants heard critical associated and unassociated target words in word pairs. In a subsequent experiment, we presented the same word pairs in spoken discourse contexts. Target words were always consistent with the local sentence context, but were congruent or not with the global discourse (e.g., “Luckily Ben had picked up some salt and pepper/basil”, preceded by a context in which Ben was preparing marinara sauce (congruent) or dealing with an icy walkway (incongruent)). ERP effects of global discourse congruence preceded those of local lexical association, suggesting an early influence of the global discourse representation on lexical processing, even in locally congruent contexts. Furthermore, effects of lexical association occurred earlier in the congruent than incongruent condition. These results differ from those that have been obtained in studies of reading, suggesting that the effects may be unique to spoken word recognition. PMID:23002319

  3. Neural organization of linguistic short-term memory is sensory modality-dependent: evidence from signed and spoken language.

    Science.gov (United States)

    Pa, Judy; Wilson, Stephen M; Pickell, Herbert; Bellugi, Ursula; Hickok, Gregory

    2008-12-01

    Despite decades of research, there is still disagreement regarding the nature of the information that is maintained in linguistic short-term memory (STM). Some authors argue for abstract phonological codes, whereas others argue for more general sensory traces. We assess these possibilities by investigating linguistic STM in two distinct sensory-motor modalities, spoken and signed language. Hearing bilingual participants (native in English and American Sign Language) performed equivalent STM tasks in both languages during functional magnetic resonance imaging. Distinct, sensory-specific activations were seen during the maintenance phase of the task for spoken versus signed language. These regions have been previously shown to respond to nonlinguistic sensory stimulation, suggesting that linguistic STM tasks recruit sensory-specific networks. However, maintenance-phase activations common to the two languages were also observed, implying some form of common process. We conclude that linguistic STM involves sensory-dependent neural networks, but suggest that sensory-independent neural networks may also exist.

  4. The relation of the number of languages spoken to performance in different cognitive abilities in old age.

    Science.gov (United States)

    Ihle, Andreas; Oris, Michel; Fagot, Delphine; Kliegel, Matthias

    2016-12-01

    Findings on the association of speaking different languages with cognitive functioning in old age are so far inconsistent and inconclusive. Therefore, the present study set out to investigate the relation of the number of languages spoken to cognitive performance and its interplay with several other markers of cognitive reserve in a large sample of older adults. Two thousand eight hundred and twelve older adults served as the sample for the present study. Psychometric tests of verbal abilities, basic processing speed, and cognitive flexibility were administered. In addition, individuals were interviewed about the languages they spoke on a regular basis, their educational attainment, occupation, and engagement in different activities throughout adulthood. A higher number of languages regularly spoken was significantly associated with better performance in verbal abilities and processing speed, but was unrelated to cognitive flexibility. Regression analyses showed that the number of languages spoken predicted cognitive performance over and above leisure activities/physical demand of job/gainful activity as the respective additional predictor, but not over and above educational attainment/cognitive level of job as the respective additional predictor. There was no significant moderation of the association of the number of languages spoken with cognitive performance in any model. Present data suggest that speaking different languages on a regular basis may additionally contribute to the build-up of cognitive reserve in old age. Yet, this may not be universal, but linked to verbal abilities and basic cognitive processing speed. Moreover, it may be dependent on other types of cognitive stimulation that individuals also engaged in during their life course.

  5. Cochlear implants and spoken language processing abilities: review and assessment of the literature.

    Science.gov (United States)

    Peterson, Nathaniel R; Pisoni, David B; Miyamoto, Richard T

    2010-01-01

    Cochlear implants (CIs) process sounds electronically and then transmit electric stimulation to the cochlea of individuals with sensorineural deafness, restoring some sensation of auditory perception. Many congenitally deaf CI recipients achieve a high degree of accuracy in speech perception and develop near-normal language skills. Post-lingually deafened implant recipients often regain the ability to understand and use spoken language with or without the aid of visual input (i.e. lip reading). However, there is wide variation in individual outcomes following cochlear implantation, and some CI recipients never develop useable speech and oral language skills. The causes of this enormous variation in outcomes are only partly understood at the present time. The variables most strongly associated with language outcomes are age at implantation and mode of communication in rehabilitation. Thus, some of the more important factors determining success of cochlear implantation are broadly related to neural plasticity that appears to be transiently present in deaf individuals. In this article we review the expected outcomes of cochlear implantation, potential predictors of those outcomes, the basic science regarding critical and sensitive periods, and several new research directions in the field of cochlear implantation.

  6. Let's all speak together! Exploring the masking effects of various languages on spoken word identification in multi-linguistic babble.

    Science.gov (United States)

    Gautreau, Aurore; Hoen, Michel; Meunier, Fanny

    2013-01-01

    This study aimed to characterize the linguistic interference that occurs during speech-in-speech comprehension by combining offline and online measures, which included an intelligibility task (at a -5 dB Signal-to-Noise Ratio) and 2 lexical decision tasks (at a -5 dB and 0 dB SNR) that were performed with French spoken target words. In these 3 experiments we always compared the masking effects of speech backgrounds (i.e., 4-talker babble) that were produced in the same language as the target language (i.e., French) or in unknown foreign languages (i.e., Irish and Italian) to the masking effects of corresponding non-speech backgrounds (i.e., speech-derived fluctuating noise). The fluctuating noise contained similar spectro-temporal information as babble but lacked linguistic information. At -5 dB SNR, both tasks revealed significantly divergent results between the unknown languages (i.e., Irish and Italian) with Italian and French hindering French target word identification to a similar extent, whereas Irish led to significantly better performances on these tasks. By comparing the performances obtained with speech and fluctuating noise backgrounds, we were able to evaluate the effect of each language. The intelligibility task showed a significant difference between babble and fluctuating noise for French, Irish and Italian, suggesting acoustic and linguistic effects for each language. However, the lexical decision task, which reduces the effect of post-lexical interference, appeared to be more accurate, as it only revealed a linguistic effect for French. Thus, although French and Italian had equivalent masking effects on French word identification, the nature of their interference was different. This finding suggests that the differences observed between the masking effects of Italian and Irish can be explained at an acoustic level but not at a linguistic level.
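    As a side note on stimulus construction of the kind described above, mixing a target with a masker at a fixed signal-to-noise ratio reduces to a single scale factor applied to the masker; the sketch below uses synthetic signals as stand-ins for speech and babble and is not the authors' code.

```python
# Minimal sketch: scale a masker so that a target is mixed at a desired SNR,
# e.g. -5 dB or 0 dB as in the experiments above.
import numpy as np

def mix_at_snr(target, masker, snr_db):
    """Return target + masker, with the masker scaled to reach the given SNR."""
    masker = masker[: len(target)]
    p_target = np.mean(target ** 2)
    p_masker = np.mean(masker ** 2)
    # choose scale so that 10*log10(p_target / (scale**2 * p_masker)) == snr_db
    scale = np.sqrt(p_target / (p_masker * 10 ** (snr_db / 10)))
    return target + scale * masker

rng = np.random.default_rng(2)
speech = rng.normal(size=16000)            # 1 s of stand-in "speech" at 16 kHz
babble = rng.normal(size=16000)            # stand-in 4-talker babble
mixture = mix_at_snr(speech, babble, snr_db=-5.0)
```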

  7. Semantic Richness and Word Learning in Children with Hearing Loss Who Are Developing Spoken Language: A Single Case Design Study

    Science.gov (United States)

    Lund, Emily; Douglas, W. Michael; Schuele, C. Melanie

    2015-01-01

    Children with hearing loss who are developing spoken language tend to lag behind children with normal hearing in vocabulary knowledge. Thus, researchers must validate instructional practices that lead to improved vocabulary outcomes for children with hearing loss. The purpose of this study was to investigate how semantic richness of instruction…

  8. Rethinking spoken fluency

    OpenAIRE

    McCarthy, Michael

    2009-01-01

    This article re-examines the notion of spoken fluency. Fluent and fluency are terms commonly used in everyday, lay language, and fluency, or lack of it, has social consequences. The article reviews the main approaches to understanding and measuring spoken fluency, suggests that spoken fluency is best understood as an interactive achievement, and offers the metaphor of ‘confluence’ to replace the term fluency. Many measures of spoken fluency are internal and monologue-based, whereas evidence...

  9. Foreign body aspiration and language spoken at home: 10-year review.

    Science.gov (United States)

    Choroomi, S; Curotta, J

    2011-07-01

    To review foreign body aspiration cases encountered over a 10-year period in a tertiary paediatric hospital, and to assess correlation between foreign body type and language spoken at home. Retrospective chart review of all children undergoing direct laryngobronchoscopy for foreign body aspiration over a 10-year period. Age, sex, foreign body type, complications, hospital stay and home language were analysed. At direct laryngobronchoscopy, 132 children had foreign body aspiration (male:female ratio 1.31:1; mean age 32 months (2.67 years)). Mean hospital stay was 2.0 days. Foreign bodies most commonly comprised food matter (53/132; 40.1 per cent), followed by non-food matter (44/132; 33.33 per cent), a negative endoscopy (11/132; 8.33 per cent) and unknown composition (24/132; 18.2 per cent). Most parents spoke English (92/132, 69.7 per cent; vs non-English-speaking 40/132, 30.3 per cent), but non-English-speaking patients had disproportionately more food foreign bodies, and significantly more nut aspirations (p = 0.0065). Results constitute level 2b evidence. Patients from non-English speaking backgrounds had a significantly higher incidence of food (particularly nut) aspiration. Awareness-raising and public education is needed in relevant communities to prevent certain foods, particularly nuts, being given to children too young to chew and swallow them adequately.

  10. Primary phonological planning units in spoken word production are language-specific: Evidence from an ERP study.

    Science.gov (United States)

    Wang, Jie; Wong, Andus Wing-Kuen; Wang, Suiping; Chen, Hsuan-Chih

    2017-07-19

    It is widely acknowledged in Germanic languages that segments are the primary planning units at the phonological encoding stage of spoken word production. Mixed results, however, have been found in Chinese, and it is still unclear what roles syllables and segments play in planning Chinese spoken word production. In the current study, participants were asked to first prepare and later produce disyllabic Mandarin words upon picture prompts and a response cue while electroencephalogram (EEG) signals were recorded. Each two consecutive pictures implicitly formed a pair of prime and target, whose names shared the same word-initial atonal syllable or the same word-initial segments, or were unrelated in the control conditions. Only syllable repetition induced significant effects on event-related brain potentials (ERPs) after target onset: a widely distributed positivity in the 200- to 400-ms interval and an anterior positivity in the 400- to 600-ms interval. We interpret these to reflect syllable-size representations at the phonological encoding and phonetic encoding stages. Our results provide the first electrophysiological evidence for the distinct role of syllables in producing Mandarin spoken words, supporting a language specificity hypothesis about the primary phonological units in spoken word production.

  11. Verbal short-term memory development and spoken language outcomes in deaf children with cochlear implants.

    Science.gov (United States)

    Harris, Michael S; Kronenberger, William G; Gao, Sujuan; Hoen, Helena M; Miyamoto, Richard T; Pisoni, David B

    2013-01-01

    Cochlear implants (CIs) help many deaf children achieve near-normal speech and language (S/L) milestones. Nevertheless, high levels of unexplained variability in S/L outcomes are limiting factors in improving the effectiveness of CIs in deaf children. The objective of this study was to longitudinally assess the role of verbal short-term memory (STM) and working memory (WM) capacity as a progress-limiting source of variability in S/L outcomes after CI in children. Longitudinal study of 66 children with CIs for prelingual severe-to-profound hearing loss. Outcome measures included performance on digit span forward (DSF), digit span backward (DSB), and four conventional S/L measures that examined spoken-word recognition (Phonetically Balanced Kindergarten word test), receptive vocabulary (Peabody Picture Vocabulary Test), sentence-recognition skills (Hearing in Noise Test), and receptive and expressive language functioning (Clinical Evaluation of Language Fundamentals Fourth Edition Core Language Score; CELF). Growth curves for DSF and DSB in the CI sample over time were comparable in slope, but consistently lagged in magnitude relative to norms for normal-hearing peers of the same age. For DSF and DSB, 50.5% and 44.0%, respectively, of the CI sample scored more than 1 SD below the normative mean for raw scores across all ages. The first (baseline) DSF score significantly predicted all endpoint scores for the four S/L measures, and DSF slope (growth) over time predicted CELF scores. DSF baseline and slope accounted for an additional 13 to 31% of variance in S/L scores after controlling for conventional predictor variables such as chronological age at time of testing, age at time of implantation, communication mode (auditory-oral communication versus total communication), and maternal education. Only DSB baseline scores predicted endpoint language scores on the Peabody Picture Vocabulary Test and CELF. DSB slopes were not significantly related to any endpoint S/L measures.

  12. Spoken language identification based on the enhanced self-adjusting extreme learning machine approach

    Science.gov (United States)

    Tiun, Sabrina; AL-Dhief, Fahad Taha; Sammour, Mahmoud A. M.

    2018-01-01

    Spoken Language Identification (LID) is the process of determining and classifying natural language from a given content and dataset. Typically, data must be processed to extract useful features to perform LID. Feature extraction for LID is, according to the literature, a mature process: standard features have been developed using Mel-Frequency Cepstral Coefficients (MFCC), Shifted Delta Cepstral (SDC) coefficients, and the Gaussian Mixture Model (GMM), ending with the i-vector based framework. However, the process of learning from the extracted features can still be improved (i.e. optimised) to capture all of the knowledge embedded in them. The Extreme Learning Machine (ELM) is an effective learning model used to perform classification and regression analysis and is particularly useful for training a single-hidden-layer neural network. Nevertheless, the learning process of this model is not entirely effective (i.e. optimised) because of the random selection of the input-to-hidden-layer weights. In this study, the ELM is selected as the learning model for LID based on standard feature extraction. One of the optimisation approaches for the ELM, the Self-Adjusting Extreme Learning Machine (SA-ELM), is selected as the benchmark and improved by altering the selection phase of the optimisation process. The selection process incorporates both the Split-Ratio and K-Tournament methods; the improved SA-ELM is named the Enhanced Self-Adjusting Extreme Learning Machine (ESA-ELM). The results are generated for LID on datasets created from eight different languages. The results of the study showed that ESA-ELM LID clearly outperformed SA-ELM LID, achieving an accuracy of 96.25% compared with 95.00%. PMID:29672546

  13. Spoken language identification based on the enhanced self-adjusting extreme learning machine approach.

    Science.gov (United States)

    Albadr, Musatafa Abbas Abbood; Tiun, Sabrina; Al-Dhief, Fahad Taha; Sammour, Mahmoud A M

    2018-01-01

    Spoken Language Identification (LID) is the process of determining and classifying natural language from a given content and dataset. Typically, data must be processed to extract useful features to perform LID. Feature extraction for LID is, according to the literature, a mature process: standard features have been developed using Mel-Frequency Cepstral Coefficients (MFCC), Shifted Delta Cepstral (SDC) coefficients, and the Gaussian Mixture Model (GMM), ending with the i-vector based framework. However, the process of learning from the extracted features can still be improved (i.e. optimised) to capture all of the knowledge embedded in them. The Extreme Learning Machine (ELM) is an effective learning model used to perform classification and regression analysis and is particularly useful for training a single-hidden-layer neural network. Nevertheless, the learning process of this model is not entirely effective (i.e. optimised) because of the random selection of the input-to-hidden-layer weights. In this study, the ELM is selected as the learning model for LID based on standard feature extraction. One of the optimisation approaches for the ELM, the Self-Adjusting Extreme Learning Machine (SA-ELM), is selected as the benchmark and improved by altering the selection phase of the optimisation process. The selection process incorporates both the Split-Ratio and K-Tournament methods; the improved SA-ELM is named the Enhanced Self-Adjusting Extreme Learning Machine (ESA-ELM). The results are generated for LID on datasets created from eight different languages. The results of the study showed that ESA-ELM LID clearly outperformed SA-ELM LID, achieving an accuracy of 96.25% compared with 95.00%.
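    To clarify what a plain Extreme Learning Machine does (random, untrained input weights plus a closed-form solution for the output weights), here is a minimal sketch of a basic ELM classifier; it illustrates the baseline model only, not the SA-ELM or ESA-ELM optimisation variants, and the random feature arrays merely stand in for i-vector/MFCC-derived features.

```python
# Minimal sketch of a basic Extreme Learning Machine classifier: random
# input-to-hidden weights, output weights solved via the pseudoinverse.
import numpy as np

class ELMClassifier:
    def __init__(self, n_hidden=500, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def _hidden(self, X):
        return np.tanh(X @ self.W + self.b)          # hidden-layer activations

    def fit(self, X, y):
        n_classes = int(y.max()) + 1
        # random input weights and biases are fixed and never trained
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        T = np.eye(n_classes)[y]                      # one-hot targets
        # output weights in closed form with the Moore-Penrose pseudoinverse
        self.beta = np.linalg.pinv(self._hidden(X)) @ T
        return self

    def predict(self, X):
        return np.argmax(self._hidden(X) @ self.beta, axis=1)

# Toy usage with random stand-ins for language-identification features.
rng = np.random.default_rng(1)
X, y = rng.normal(size=(400, 60)), rng.integers(0, 8, size=400)   # 8 "languages"
print((ELMClassifier().fit(X, y).predict(X) == y).mean())          # training accuracy
```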

  14. Emergent Literacy Skills in Preschool Children with Hearing Loss Who Use Spoken Language: Initial Findings from the Early Language and Literacy Acquisition (ELLA) Study

    Science.gov (United States)

    Werfel, Krystal L.

    2017-01-01

    Purpose: The purpose of this study was to compare change in emergent literacy skills of preschool children with and without hearing loss over a 6-month period. Method: Participants included 19 children with hearing loss and 14 children with normal hearing. Children with hearing loss used amplification and spoken language. Participants completed…

  15. Impact of cognitive function and dysarthria on spoken language and perceived speech severity in multiple sclerosis

    Science.gov (United States)

    Feenaughty, Lynda

    Purpose: The current study sought to investigate the separate effects of dysarthria and cognitive status on global speech timing, speech hesitation, and linguistic complexity characteristics, and how these speech behaviors influence listener impressions, for three connected speech tasks presumed to differ in cognitive-linguistic demand across four carefully defined speaker groups: 1) MS with cognitive deficits (MSCI), 2) MS with clinically diagnosed dysarthria and intact cognition (MSDYS), 3) MS without dysarthria or cognitive deficits (MS), and 4) healthy talkers (CON). The relationship between neuropsychological test scores and speech-language production and perceptual variables for speakers with cognitive deficits was also explored. Methods: 48 speakers, including 36 individuals reporting a neurological diagnosis of MS and 12 healthy talkers, participated. The three MS groups and the control group each contained 12 speakers (8 women and 4 men). Cognitive function was quantified using standard clinical tests of memory, information processing speed, and executive function. A standard z-score of ≤ -1.50 indicated deficits in a given cognitive domain. Three certified speech-language pathologists determined the clinical diagnosis of dysarthria for speakers with MS. Experimental speech tasks of interest included audio-recordings of an oral reading of the Grandfather passage and two spontaneous speech samples in the form of Familiar and Unfamiliar descriptive discourse. Various measures of spoken language were of interest. Suprasegmental acoustic measures included speech and articulatory rate. Linguistic speech hesitation measures included pause frequency (i.e., silent and filled pauses), mean silent pause duration, grammatical appropriateness of pauses, and interjection frequency. For the two discourse samples, three standard measures of language complexity were obtained, including subordination index, inter-sentence cohesion adequacy, and lexical diversity. Ten listeners

  16. THE INFLUENCE OF LANGUAGE USE AND LANGUAGE ATTITUDE ON THE MAINTENANCE OF COMMUNITY LANGUAGES SPOKEN BY MIGRANT STUDENTS

    Directory of Open Access Journals (Sweden)

    Leni Amalia Suek

    2014-05-01

    The maintenance of the community languages of migrant students is heavily determined by language use and language attitudes. The superiority of a dominant language over a community language contributes to the attitudes of migrant students toward their native languages. When they perceive their native language as unimportant, they reduce the frequency of using that language even in the home domain. Solutions to the problem of maintaining community languages should therefore address language use and attitudes toward community languages, which develop mostly in two important domains, school and family. Hence, the valorization of community languages should be promoted not only in the family but also in the school domain. Programs such as community language schools and community language programs can be used by migrant students to practice and use their native languages. Since educational resources such as class sessions, teachers, and government support are limited, the family plays a significant role in stimulating positive attitudes toward the community language and in developing the use of the native language.

  17. Birds, primates, and spoken language origins: behavioral phenotypes and neurobiological substrates.

    Science.gov (United States)

    Petkov, Christopher I; Jarvis, Erich D

    2012-01-01

    Vocal learners such as humans and songbirds can learn to produce elaborate patterns of structurally organized vocalizations, whereas many other vertebrates such as non-human primates and most other bird groups either cannot or do so only to a very limited degree. To explain the similarities among humans and vocal-learning birds and the differences with other species, various theories have been proposed. One set of theories comprises motor theories, which underscore the role of the motor system as an evolutionary substrate for vocal production learning. For instance, the motor theory of speech and song perception proposes enhanced auditory perceptual learning of speech in humans and song in birds, which suggests a considerable level of neurobiological specialization. Another, a motor theory of vocal learning origin, proposes that the brain pathways that control the learning and production of song and speech were derived from adjacent motor brain pathways. Another set of theories comprises cognitive theories, which address the interface between cognition and the auditory-vocal domains to support language learning in humans. Here we critically review the behavioral and neurobiological evidence for parallels and differences between the so-called vocal learners and vocal non-learners in the context of motor and cognitive theories. In doing so, we note that, behaviorally, vocal-production learning abilities are more distributed than categorical, as are the auditory-learning abilities of animals. We propose testable hypotheses on the extent of the specializations and cross-species correspondences suggested by motor and cognitive theories. We believe that determining how spoken language evolved is likely to become clearer with concerted efforts in testing comparative data from many non-human animal species.

  18. RECEPTION OF SPOKEN ENGLISH. MISHEARINGS IN THE LANGUAGE OF BUSINESS AND LAW

    Directory of Open Access Journals (Sweden)

    HOREA Ioana-Claudia

    2013-07-01

    Full Text Available Spoken English may sometimes cause us to face a peculiar problem with the reception and decoding of auditory signals, which can lead to mishearings. Arising from erroneous perception, from a failure to understand the communication, and from an involuntary mental replacement of a certain element or structure by a more familiar one, these mistakes are most frequently encountered when listening to songs, where the melodic line, with its somewhat altered intonation, can foster confusion and produce the so-called mondegreens. Still, instances can be found in all domains of verbal communication, as shown by several examples noticed during classes of English as a foreign language (EFL) taught to non-philological students. Production and perception of language depend on a series of elements that influence the encoding and decoding of the message. These filters belong to both psychological and semantic categories, and they can interfere with the accuracy of both emission and reception. Poor understanding of a notion or concept, combined with greater familiarity with a similar-sounding one, results in unconsciously picking the structure that is better known. This means 'hearing' something other than what was said, something closer to the receiver's preoccupations and stock of knowledge than the original structure or word. Some mishearings become particularly relevant as they concern teaching English for Specific Purposes (ESP), such as those encountered during classes of Business English or English for Law. Though not very likely to occur often - the users intuitively sense the inaccuracy, since they know the terms need to be more specialised - such examples are still not negligible. We therefore consider that they deserve a higher degree of attention, as they might become quite relevant in the global context of increasing workforce migration and the spread of multinational companies.

  19. Development of a spoken language identification system for South African languages

    CSIR Research Space (South Africa)

    Peché, M

    2009-12-01

    Full Text Available The available excerpt is mid-document text rather than an abstract. It notes that current benchmark results for spoken language identification are established by the National Institute of Standards and Technology (NIST) Language Recognition Evaluation (LRE) [12], first held in 1996 and again in 2003. The remainder of the excerpt consists of reference-list entries, including the HTK Book (revised for HTK version 3.3, http://htk.eng.cam.ac.uk/, 2005) and work by M.A. Zissman.
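
    The record above gives no detail about the system itself, so as general context only, the sketch below shows one classical baseline for spoken language identification: score a transcript (or recognized phone string) against per-language character trigram models and pick the language with the highest log-likelihood. The toy training sentences, the add-one smoothing, and the vocabulary size are assumptions for the example; this is not the CSIR system described in the record.

        # Minimal sketch of n-gram language identification (illustrative baseline,
        # not the CSIR system). Scores text against per-language trigram models.
        import math
        from collections import Counter

        def trigram_model(text):
            padded = f"  {text.lower()}  "
            grams = Counter(padded[i:i + 3] for i in range(len(padded) - 2))
            return grams, sum(grams.values())

        def score(text, model, vocab_size=10000):
            grams, total = model
            padded = f"  {text.lower()}  "
            logp = 0.0
            for i in range(len(padded) - 2):
                g = padded[i:i + 3]
                # add-one smoothing so unseen trigrams do not zero out the score
                logp += math.log((grams.get(g, 0) + 1) / (total + vocab_size))
            return logp

        # Tiny toy training data (invented, for illustration only)
        models = {
            "afrikaans": trigram_model("die kinders praat afrikaans by die skool"),
            "english": trigram_model("the children speak english at the school"),
        }
        utterance = "the children speak"
        print(max(models, key=lambda lang: score(utterance, models[lang])))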

  20. The role of planum temporale in processing accent variation in spoken language comprehension.

    NARCIS (Netherlands)

    Adank, P.M.; Noordzij, M.L.; Hagoort, P.

    2012-01-01

    A repetition-suppression functional magnetic resonance imaging paradigm was used to explore the neuroanatomical substrates of processing two types of acoustic variation (speaker and accent) during spoken sentence comprehension. Recordings were made for two speakers and two accents: Standard Dutch and a

  1. How Does the Linguistic Distance between Spoken and Standard Language in Arabic Affect Recall and Recognition Performances during Verbal Memory Examination

    Science.gov (United States)

    Taha, Haitham

    2017-01-01

    The current research examined how Arabic diglossia affects verbal learning memory. Thirty native Arab college students were tested using an auditory verbal memory test that was adapted from the Rey Auditory Verbal Learning Test and developed in three versions: a pure spoken language version (SL), a pure standard language version (SA), and…

  2. How Spoken Language Comprehension is Achieved by Older Listeners in Difficult Listening Situations.

    Science.gov (United States)

    Schneider, Bruce A; Avivi-Reich, Meital; Daneman, Meredyth

    2016-01-01

    Comprehending spoken discourse in noisy situations is likely to be more challenging to older adults than to younger adults due to potential declines in the auditory, cognitive, or linguistic processes supporting speech comprehension. These challenges might force older listeners to reorganize the ways in which they perceive and process speech, thereby altering the balance between the contributions of bottom-up versus top-down processes to speech comprehension. The authors review studies that investigated the effect of age on listeners' ability to follow and comprehend lectures (monologues), and two-talker conversations (dialogues), and the extent to which individual differences in lexical knowledge and reading comprehension skill relate to individual differences in speech comprehension. Comprehension was evaluated after each lecture or conversation by asking listeners to answer multiple-choice questions regarding its content. Once individual differences in speech recognition for words presented in babble were compensated for, age differences in speech comprehension were minimized if not eliminated. However, younger listeners benefited more from spatial separation than did older listeners. Vocabulary knowledge predicted the comprehension scores of both younger and older listeners when listening was difficult, but not when it was easy. However, the contribution of reading comprehension to listening comprehension appeared to be independent of listening difficulty in younger adults but not in older adults. The evidence suggests (1) that most of the difficulties experienced by older adults are due to age-related auditory declines, and (2) that these declines, along with listening difficulty, modulate the degree to which selective linguistic and cognitive abilities are engaged to support listening comprehension in difficult listening situations. When older listeners experience speech recognition difficulties, their attentional resources are more likely to be deployed to

  3. The influence of orthographic experience on the development of phonological preparation in spoken word production.

    Science.gov (United States)

    Li, Chuchu; Wang, Min

    2017-08-01

    Three sets of experiments using picture naming tasks with the form preparation paradigm investigated the influence of orthographic experience on the development of the phonological preparation unit in spoken word production in native Mandarin-speaking children. Participants included kindergarten children who have not received formal literacy instruction, Grade 1 children who are comparatively more exposed to the alphabetic pinyin system and have very limited Chinese character knowledge, Grades 2 and 4 children who have better character knowledge and more exposure to characters, and skilled adult readers who have the most advanced character knowledge and the most exposure to characters. Only Grade 1 children showed the form preparation effect in the same initial consonant condition (i.e., when a list of target words shared the initial consonant). Both Grade 4 children and adults showed the preparation effect when the initial syllable (but not tone) among target words was shared. Kindergartners and Grade 2 children only showed the preparation effect when the initial syllable including tonal information was shared. These developmental changes in phonological preparation could be interpreted as a joint function of the modification of phonological representation and attentional shift. Extensive pinyin experience encourages speakers to attend to and select the onset phoneme in phonological preparation, whereas extensive character experience encourages speakers to prepare spoken words in syllables.

  4. The role of planum temporale in processing accent variation in spoken language comprehension

    NARCIS (Netherlands)

    Adank, P.M.; Noordzij, M.L.; Hagoort, P.

    2012-01-01

    A repetition–suppression functional magnetic resonance imaging paradigm was used to explore the neuroanatomical substrates of processing two types of acoustic variation—speaker and accent—during spoken sentence comprehension. Recordings were made for two speakers and two accents: Standard Dutch and

  5. Grammatical number processing and anticipatory eye movements are not tightly coordinated in English spoken language comprehension

    Directory of Open Access Journals (Sweden)

    Brian eRiordan

    2015-05-01

    Full Text Available Recent studies of eye movements in world-situated language comprehension have demonstrated that rapid processing of morphosyntactic information – e.g., grammatical gender and number marking – can produce anticipatory eye movements to referents in the visual scene. We investigated how type of morphosyntactic information and the goals of language users in comprehension affected eye movements, focusing on the processing of grammatical number morphology in English-speaking adults. Participants' eye movements were recorded as they listened to simple English declarative ("There are the lions.") and interrogative ("Where are the lions?") sentences. In Experiment 1, no differences were observed in speed to fixate target referents when grammatical number information was informative relative to when it was not. The same result was obtained in a speeded task (Experiment 2) and in a task using mixed sentence types (Experiment 3). We conclude that grammatical number processing in English and eye movements to potential referents are not tightly coordinated. These results suggest limits on the role of predictive eye movements in concurrent linguistic and scene processing. We discuss how these results can inform and constrain predictive approaches to language processing.

  6. Social inclusion for children with hearing loss in listening and spoken Language early intervention: an exploratory study.

    Science.gov (United States)

    Constantinescu-Sharpe, Gabriella; Phillips, Rebecca L; Davis, Aleisha; Dornan, Dimity; Hogan, Anthony

    2017-03-14

    Social inclusion is a common focus of listening and spoken language (LSL) early intervention for children with hearing loss. This exploratory study compared the social inclusion of young children with hearing loss educated using a listening and spoken language approach with population data. A framework for understanding the scope of social inclusion is presented in the Background. This framework guided the use of a shortened, modified version of the Longitudinal Study of Australian Children (LSAC) to measure two of the five facets of social inclusion ('education' and 'interacting with society and fulfilling social goals'). The survey was completed by parents of children with hearing loss aged 4-5 years who were educated using a LSL approach (n = 78; 37% who responded). These responses were compared to those obtained for typical hearing children in the LSAC dataset (n = 3265). Analyses revealed that most children with hearing loss had comparable outcomes to those with typical hearing on the 'education' and 'interacting with society and fulfilling social roles' facets of social inclusion. These exploratory findings are positive and warrant further investigation across all five facets of the framework to identify which factors influence social inclusion.

  7. Is spoken Danish less intelligible than Swedish?

    NARCIS (Netherlands)

    Gooskens, Charlotte; van Heuven, Vincent J.; van Bezooijen, Renee; Pacilly, Jos J. A.

    2010-01-01

    The most straightforward way to explain why Danes understand spoken Swedish relatively better than Swedes understand spoken Danish would be that spoken Danish is intrinsically a more difficult language to understand than spoken Swedish. We discuss circumstantial evidence suggesting that Danish is

  8. EVALUATIVE LANGUAGE IN SPOKEN AND SIGNED STORIES TOLD BY A DEAF CHILD WITH A COCHLEAR IMPLANT: WORDS, SIGNS OR PARALINGUISTIC EXPRESSIONS?

    Directory of Open Access Journals (Sweden)

    Ritva Takkinen

    2011-01-01

    Full Text Available In this paper the use and quality of the evaluative language produced by a bilingual child in a story-telling situation is analysed. The subject, an 11-year-old Finnish boy, Jimmy, is bilingual in Finnish sign language (FinSL) and spoken Finnish. He was born deaf but got a cochlear implant at the age of five. The data consist of a spoken and a signed version of “The Frog Story”. The analysis shows that evaluative devices and expressions differ in the spoken and signed stories told by the child. In his Finnish story he uses mostly lexical devices – comments on a character and the character’s actions as well as quoted speech occasionally combined with prosodic features. In his FinSL story he uses both lexical and paralinguistic devices in a balanced way.

  9. Spoken Dialogue Systems

    CERN Document Server

    Jokinen, Kristiina

    2009-01-01

    Considerable progress has been made in recent years in the development of dialogue systems that support robust and efficient human-machine interaction using spoken language. Spoken dialogue technology allows various interactive applications to be built and used for practical purposes, and research focuses on issues that aim to increase the system's communicative competence by including aspects of error correction, cooperation, multimodality, and adaptation in context. This book gives a comprehensive view of state-of-the-art techniques that are used to build spoken dialogue systems. It provides

  10. A Multilingual Approach to Analysing Standardized Test Results: Immigrant Primary School Children and the Role of Languages Spoken in a Bi-/Multilingual Community

    Science.gov (United States)

    De Angelis, Gessica

    2014-01-01

    The present study adopts a multilingual approach to analysing the standardized test results of primary school immigrant children living in the bi-/multilingual context of South Tyrol, Italy. The standardized test results are from the Invalsi test administered across Italy in 2009/2010. In South Tyrol, several languages are spoken on a daily basis…

  11. Long-term memory traces for familiar spoken words in tonal languages as revealed by the Mismatch Negativity

    Directory of Open Access Journals (Sweden)

    Naiphinich Kotchabhakdi

    2004-11-01

    Full Text Available Mismatch negativity (MMN), a primary response to an acoustic change and an index of sensory memory, was used to investigate the processing of the discrimination between familiar and unfamiliar Consonant-Vowel (CV) speech contrasts. The MMN was elicited by rare familiar words presented among repetitive unfamiliar words. Phonetic and phonological contrasts were identical in all conditions. MMN elicited by the familiar word deviant was larger than that elicited by the unfamiliar word deviant. The presence of syllable contrast did significantly alter the word-elicited MMN in amplitude and scalp voltage field distribution. Thus, our results indicate the existence of word-related MMN enhancement largely independent of the word status of the standard stimulus. This enhancement may reflect the presence of a long-term memory trace for familiar spoken words in tonal languages.
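
    As the record notes, the MMN is derived from the contrast between responses to rare (deviant) and frequent (standard) stimuli; in practice it is usually quantified as a deviant-minus-standard difference wave averaged over a post-stimulus window. The sketch below illustrates only that textbook computation; the synthetic epochs, 500 Hz sampling rate, and 100-250 ms window are assumptions for the example, not this study's recording parameters.

        # Minimal sketch: MMN as a deviant-minus-standard difference wave
        # (synthetic data and illustrative parameters, not the study's pipeline).
        import numpy as np

        fs = 500                              # sampling rate in Hz (assumed)
        t = np.arange(-0.1, 0.5, 1 / fs)      # epoch from -100 ms to +500 ms

        # Fake single-trial epochs (trials x samples); replace with real EEG epochs
        rng = np.random.default_rng(0)
        standard_trials = rng.normal(0.0, 1.0, (200, t.size))
        deviant_trials = rng.normal(0.0, 1.0, (40, t.size)) - 0.5 * ((t > 0.1) & (t < 0.25))

        standard_erp = standard_trials.mean(axis=0)
        deviant_erp = deviant_trials.mean(axis=0)
        difference_wave = deviant_erp - standard_erp          # the MMN lives here

        window = (t >= 0.10) & (t <= 0.25)                    # 100-250 ms (assumed)
        print(f"mean MMN amplitude in window: {difference_wave[window].mean():.3f}")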

  12. Quarterly Data for Spoken Language Preferences of Social Security Retirement and Survivor Claimants (2014-2015)

    Data.gov (United States)

    Social Security Administration — This data set provides quarterly volumes for language preferences at the national level of individuals filing claims for Retirement and Survivor benefits for fiscal...

  13. Quarterly Data for Spoken Language Preferences of Social Security Retirement and Survivor Claimants (2016-onwards)

    Data.gov (United States)

    Social Security Administration — This data set provides quarterly volumes for language preferences at the national level of individuals filing claims for Retirement and Survivor benefits from fiscal...

  14. Development of lexical-semantic language system: N400 priming effect for spoken words in 18- and 24-month old children.

    Science.gov (United States)

    Rämä, Pia; Sirri, Louah; Serres, Josette

    2013-04-01

    Our aim was to investigate whether the developing language system, as measured by a priming task for spoken words, is organized by semantic categories. Event-related potentials (ERPs) were recorded during a priming task for spoken words in 18- and 24-month-old monolingual French-learning children. Spoken word pairs were either semantically related (e.g., train-bike) or unrelated (e.g., chicken-bike). The results showed that the N400-like priming effect occurred in 24-month-olds over the right parietal-occipital recording sites. In 18-month-olds a similar effect was observed only in those children with higher word production ability. The results suggest that words are categorically organized in the mental lexicon of children at the age of 2 years and even earlier in children with a high vocabulary. Copyright © 2013 Elsevier Inc. All rights reserved.

  15. Task-Oriented Spoken Dialog System for Second-Language Learning

    Science.gov (United States)

    Kwon, Oh-Woog; Kim, Young-Kil; Lee, Yunkeun

    2016-01-01

    This paper introduces a Dialog-Based Computer Assisted second-Language Learning (DB-CALL) system using task-oriented dialogue processing technology. The system promotes dialogue with a second-language learner for a specific task, such as purchasing tour tickets, ordering food, passing through immigration, etc. The dialog system plays a role of a…

  16. Web-based mini-games for language learning that support spoken interaction

    CSIR Research Space (South Africa)

    Strik, H

    2015-09-01

    Full Text Available The European ‘Lifelong Learning Programme’ (LLP) project ‘Games Online for Basic Language learning’ (GOBL) aimed to provide youths and adults wishing to improve their basic language skills access to materials for the development of communicative...

  17. Propositional Density in Spoken and Written Language of Czech-Speaking Patients with Mild Cognitive Impairment

    Science.gov (United States)

    Smolík, Filip; Stepankova, Hana; Vyhnálek, Martin; Nikolai, Tomáš; Horáková, Karolína; Matejka, Štepán

    2016-01-01

    Purpose: Propositional density (PD) is a measure of content richness in language production that declines in normal aging and more profoundly in dementia. The present study aimed to develop a PD scoring system for Czech and use it to compare PD in language productions of older people with amnestic mild cognitive impairment (aMCI) and control…
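
    Propositional density is commonly operationalized as the number of propositions (roughly: verbs, adjectives, adverbs, prepositions, and conjunctions) divided by the number of words. The sketch below shows that common operationalization only; it is not the Czech scoring system developed in the study, and the part-of-speech list and example sentence are assumptions.

        # Minimal sketch: propositional (idea) density as propositions per word.
        # Common operationalization, not the authors' Czech scoring system.
        PROPOSITIONAL_TAGS = {"VERB", "ADJ", "ADV", "ADP", "CCONJ", "SCONJ"}

        def propositional_density(tagged_tokens):
            """tagged_tokens: list of (word, Universal POS tag) pairs."""
            words = [w for w, tag in tagged_tokens if tag != "PUNCT"]
            propositions = [w for w, tag in tagged_tokens if tag in PROPOSITIONAL_TAGS]
            return len(propositions) / len(words) if words else 0.0

        # Toy example with hand-assigned tags: "The old man walked slowly to the door."
        sample = [("The", "DET"), ("old", "ADJ"), ("man", "NOUN"), ("walked", "VERB"),
                  ("slowly", "ADV"), ("to", "ADP"), ("the", "DET"), ("door", "NOUN"),
                  (".", "PUNCT")]
        print(f"PD = {propositional_density(sample):.2f}")  # 4 propositions / 8 words = 0.50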

  18. Language Outcomes in Deaf or Hard of Hearing Teenagers Who Are Spoken Language Users: Effects of Universal Newborn Hearing Screening and Early Confirmation.

    Science.gov (United States)

    Pimperton, Hannah; Kreppner, Jana; Mahon, Merle; Stevenson, Jim; Terlektsi, Emmanouela; Worsfold, Sarah; Yuen, Ho Ming; Kennedy, Colin R

    This study aimed to examine whether (a) exposure to universal newborn hearing screening (UNHS) and (b) early confirmation of hearing loss were associated with benefits to expressive and receptive language outcomes in the teenage years for a cohort of spoken language users. It also aimed to determine whether either of these two variables was associated with benefits to relative language gain from middle childhood to adolescence within this cohort. The participants were drawn from a prospective cohort study of a population sample of children with bilateral permanent childhood hearing loss, who varied in their exposure to UNHS and who had previously had their language skills assessed at 6-10 years. Sixty deaf or hard of hearing teenagers who were spoken language users and a comparison group of 38 teenagers with normal hearing completed standardized measures of their receptive and expressive language ability at 13-19 years. Teenagers exposed to UNHS did not show significantly better expressive (adjusted mean difference, 0.40; 95% confidence interval [CI], -0.26 to 1.05; d = 0.32) or receptive (adjusted mean difference, 0.68; 95% CI, -0.56 to 1.93; d = 0.28) language skills than those who were not. Those who had their hearing loss confirmed by 9 months of age did not show significantly better expressive (adjusted mean difference, 0.43; 95% CI, -0.20 to 1.05; d = 0.35) or receptive (adjusted mean difference, 0.95; 95% CI, -0.22 to 2.11; d = 0.42) language skills than those who had it confirmed later. In all cases, effect sizes were small and in favor of those exposed to UNHS or confirmed by 9 months. Subgroup analysis indicated larger beneficial effects of early confirmation for those deaf or hard of hearing teenagers without cochlear implants (N = 48; 80% of the sample), and these benefits were significant in the case of receptive language outcomes (adjusted mean difference, 1.55; 95% CI, 0.38 to 2.71; d = 0.78). Exposure to UNHS did not account for significant
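
    For readers less familiar with the statistics reported above, d is Cohen's d, a standardized effect size (difference in means divided by a pooled standard deviation), and confidence intervals can be approximated from its standard error. The sketch below shows the standard formulas only; the group means, standard deviations, and sample sizes are invented for illustration and are not values from this cohort.

        # Minimal sketch: Cohen's d with an approximate 95% CI (invented numbers,
        # not data from the study above).
        import math

        def cohens_d(m1, sd1, n1, m2, sd2, n2):
            pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
            d = (m1 - m2) / pooled_sd
            # normal-approximation standard error of d
            se = math.sqrt((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))
            return d, (d - 1.96 * se, d + 1.96 * se)

        # Invented example: screened vs. not-screened language scores
        d, ci = cohens_d(m1=102.0, sd1=14.0, n1=32, m2=97.5, sd2=15.0, n2=28)
        print(f"d = {d:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")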

  19. Bilateral Versus Unilateral Cochlear Implants in Children: A Study of Spoken Language Outcomes

    Science.gov (United States)

    Harris, David; Bennet, Lisa; Bant, Sharyn

    2014-01-01

    Objectives: Although it has been established that bilateral cochlear implants (CIs) offer additional speech perception and localization benefits to many children with severe to profound hearing loss, whether these improved perceptual abilities facilitate significantly better language development has not yet been clearly established. The aims of this study were to compare the language abilities of children having unilateral and bilateral CIs, to quantify the rate of any improvement in language attributable to bilateral CIs, and to document other predictors of language development in children with CIs. Design: The receptive vocabulary and language development of 91 children was assessed when they were aged either 5 or 8 years old by using the Peabody Picture Vocabulary Test (fourth edition), and either the Preschool Language Scales (fourth edition) or the Clinical Evaluation of Language Fundamentals (fourth edition), respectively. Cognitive ability, parent involvement in children’s intervention or education programs, and family reading habits were also evaluated. Language outcomes were examined by using linear regression analyses. The influence of elements of parenting style, child characteristics, and family background as predictors of outcomes was examined. Results: Children using bilateral CIs achieved significantly better vocabulary outcomes and significantly higher scores on the Core and Expressive Language subscales of the Clinical Evaluation of Language Fundamentals (fourth edition) than did comparable children with unilateral CIs. Scores on the Preschool Language Scales (fourth edition) did not differ significantly between children with unilateral and bilateral CIs. Bilateral CI use was found to predict significantly faster rates of vocabulary and language development than unilateral CI use; the magnitude of this effect was moderated by child age at activation of the bilateral CI. In terms of parenting style, high levels of parental involvement, low amounts of

  20. Bilateral versus unilateral cochlear implants in children: a study of spoken language outcomes.

    Science.gov (United States)

    Sarant, Julia; Harris, David; Bennet, Lisa; Bant, Sharyn

    2014-01-01

    Although it has been established that bilateral cochlear implants (CIs) offer additional speech perception and localization benefits to many children with severe to profound hearing loss, whether these improved perceptual abilities facilitate significantly better language development has not yet been clearly established. The aims of this study were to compare the language abilities of children having unilateral and bilateral CIs, to quantify the rate of any improvement in language attributable to bilateral CIs, and to document other predictors of language development in children with CIs. The receptive vocabulary and language development of 91 children was assessed when they were aged either 5 or 8 years old by using the Peabody Picture Vocabulary Test (fourth edition), and either the Preschool Language Scales (fourth edition) or the Clinical Evaluation of Language Fundamentals (fourth edition), respectively. Cognitive ability, parent involvement in children's intervention or education programs, and family reading habits were also evaluated. Language outcomes were examined by using linear regression analyses. The influence of elements of parenting style, child characteristics, and family background as predictors of outcomes was examined. Children using bilateral CIs achieved significantly better vocabulary outcomes and significantly higher scores on the Core and Expressive Language subscales of the Clinical Evaluation of Language Fundamentals (fourth edition) than did comparable children with unilateral CIs. Scores on the Preschool Language Scales (fourth edition) did not differ significantly between children with unilateral and bilateral CIs. Bilateral CI use was found to predict significantly faster rates of vocabulary and language development than unilateral CI use; the magnitude of this effect was moderated by child age at activation of the bilateral CI. In terms of parenting style, high levels of parental involvement, low amounts of screen time, and more time spent

  1. Yearly Data for Spoken Language Preferences of Supplemental Security Income Blind and Disabled Applicants (2016 Onwards)

    Data.gov (United States)

    Social Security Administration — This data set provides annual volumes for language preferences at the national level of individuals filing claims for SSI Blind and Disabled benefits from federal...

  2. Yearly Data for Spoken Language Preferences of Supplemental Security Income (Blind & Disabled) (2011-2015)

    Data.gov (United States)

    Social Security Administration — This data set provides annual volumes for language preferences at the national level of individuals filing claims for SSI Blind and Disabled benefits for federal fiscal years...

  3. Yearly Data for Spoken Language Preferences of Supplemental Security Income Aged Applicants (2011-Onward)

    Data.gov (United States)

    Social Security Administration — This data set provides annual volumes for language preferences at the national level of individuals filing claims for SSI Aged benefits from federal fiscal year 2011...

  4. Yearly Data for Spoken Language Preferences of Social Security Disability Insurance Claimants (2011-2015)

    Data.gov (United States)

    Social Security Administration — This data set provides annual volumes for language preferences at the national level of individuals filing claims for Social Security Disability Insurance benefits for federal fiscal years...

  5. Yearly Data for Spoken Language Preferences of End Stage Renal Disease Medicare Claimants (2011-2015)

    Data.gov (United States)

    Social Security Administration — This data set provides annual volumes for language preferences at the national level of individuals filing claims for ESRD Medicare benefits for federal fiscal year...

  6. Quarterly Data for Spoken Language Preferences of Supplemental Security Income Aged Applicants (2014-2015)

    Data.gov (United States)

    Social Security Administration — This data set provides quarterly volumes for language preferences at the national level of individuals filing claims for SSI Aged benefits for fiscal years 2014 -...

  7. Quarterly Data for Spoken Language Preferences of End Stage Renal Disease Medicare Claimants (2014-2015)

    Data.gov (United States)

    Social Security Administration — This data set provides quarterly volumes for language preferences at the national level of individuals filing claims for ESRD Medicare benefits for fiscal years 2014...

  8. Yearly Data for Spoken Language Preferences of End Stage Renal Disease Medicare Claimants (2016 Onwards)

    Data.gov (United States)

    Social Security Administration — This data set provides annual volumes for language preferences at the national level of individuals filing claims for ESRD Medicare benefits for federal fiscal year...

  9. Case report: acquisition of three spoken languages by a child with a cochlear implant.

    Science.gov (United States)

    Francis, Alexander L; Ho, Diana Wai Lam

    2003-03-01

    There have been only two reports of multilingual cochlear implant users to date, and both of these were postlingually deafened adults. Here we report the case of a 6-year-old early-deafened child who is acquiring Cantonese, English and Mandarin in Hong Kong. He and two age-matched peers with similar educational backgrounds were tested using common, standardized tests of vocabulary and expressive and receptive language skills (Peabody Picture Vocabulary Test (Revised) and Reynell Developmental Language Scales version II). Results show that this child is acquiring Cantonese, English and Mandarin to a degree comparable to two classmates with normal hearing and similar educational and social backgrounds.

  10. Assessing Spoken Language Competence in Children with Selective Mutism: Using Parents as Test Presenters

    Science.gov (United States)

    Klein, Evelyn R.; Armstrong, Sharon Lee; Shipon-Blum, Elisa

    2013-01-01

    Children with selective mutism (SM) display a failure to speak in select situations despite speaking when comfortable. The purpose of this study was to obtain valid assessments of receptive and expressive language in 33 children (ages 5 to 12) with SM. Because some children with SM will speak to parents but not a professional, another purpose was…

  11. Cross-Sensory Correspondences and Symbolism in Spoken and Written Language

    Science.gov (United States)

    Walker, Peter

    2016-01-01

    Lexical sound symbolism in language appears to exploit the feature associations embedded in cross-sensory correspondences. For example, words incorporating relatively high acoustic frequencies (i.e., front/close rather than back/open vowels) are deemed more appropriate as names for concepts associated with brightness, lightness in weight,…

  12. Emergent Literacy Skills in Preschool Children With Hearing Loss Who Use Spoken Language: Initial Findings From the Early Language and Literacy Acquisition (ELLA) Study.

    Science.gov (United States)

    Werfel, Krystal L

    2017-10-05

    The purpose of this study was to compare change in emergent literacy skills of preschool children with and without hearing loss over a 6-month period. Participants included 19 children with hearing loss and 14 children with normal hearing. Children with hearing loss used amplification and spoken language. Participants completed measures of oral language, phonological processing, and print knowledge twice at a 6-month interval. A series of repeated-measures analyses of variance was used to compare change across groups. Main effects of time were observed for all variables except phonological recoding. Main effects of group were observed for vocabulary, morphosyntax, phonological memory, and concepts of print. Interaction effects were observed for phonological awareness and concepts of print. Children with hearing loss performed more poorly than children with normal hearing on measures of oral language, phonological memory, and conceptual print knowledge. Two interaction effects were present. For phonological awareness and concepts of print, children with hearing loss demonstrated less positive change than children with normal hearing. Although children with hearing loss generally demonstrated positive growth in emergent literacy skills, their initial performance was lower than that of children with normal hearing, and rates of change were not sufficient to catch up to their peers over time.

  13. Language experience changes subsequent learning

    Science.gov (United States)

    Onnis, Luca; Thiessen, Erik

    2013-01-01

    What are the effects of experience on subsequent learning? We explored the effects of language-specific word order knowledge on the acquisition of sequential conditional information. Korean and English adults were engaged in a sequence learning task involving three different sets of stimuli: auditory linguistic (nonsense syllables), visual non-linguistic (nonsense shapes), and auditory non-linguistic (pure tones). The forward and backward probabilities between adjacent elements generated two equally probable and orthogonal perceptual parses of the elements, such that any significant preference at test must be due to either general cognitive biases, or prior language-induced biases. We found that language modulated parsing preferences with the linguistic stimuli only. Intriguingly, these preferences are congruent with the dominant word order patterns of each language, as corroborated by corpus analyses, and are driven by probabilistic preferences. Furthermore, although the Korean individuals had received extensive formal explicit training in English and lived in an English-speaking environment, they exhibited statistical learning biases congruent with their native language. Our findings suggest that mechanisms of statistical sequential learning are implicated in language across the lifespan, and experience with language may affect cognitive processes and later learning. PMID:23200510

  14. Human inferior colliculus activity relates to individual differences in spoken language learning.

    Science.gov (United States)

    Chandrasekaran, Bharath; Kraus, Nina; Wong, Patrick C M

    2012-03-01

    A challenge to learning words of a foreign language is encoding nonnative phonemes, a process typically attributed to cortical circuitry. Using multimodal imaging methods [functional magnetic resonance imaging-adaptation (fMRI-A) and auditory brain stem responses (ABR)], we examined the extent to which pretraining pitch encoding in the inferior colliculus (IC), a primary midbrain structure, related to individual variability in learning to successfully use nonnative pitch patterns to distinguish words in American English-speaking adults. fMRI-A indexed the efficiency of pitch representation localized to the IC, whereas ABR quantified midbrain pitch-related activity with millisecond precision. In line with neural "sharpening" models, we found that efficient IC pitch pattern representation (indexed by fMRI) related to superior neural representation of pitch patterns (indexed by ABR), and consequently more successful word learning following sound-to-meaning training. Our results establish a critical role for the IC in speech-sound representation, consistent with the established role for the IC in the representation of communication signals in other animal models.

  15. SNOT-22: psychometric properties and cross-cultural adaptation into the Portuguese language spoken in Brazil.

    Science.gov (United States)

    Caminha, Guilherme Pilla; Melo Junior, José Tavares de; Hopkins, Claire; Pizzichini, Emilio; Pizzichini, Marcia Margaret Menezes

    2012-12-01

    Rhinosinusitis is a highly prevalent disease and a major cause of high medical costs. It has been proven to have an impact on quality of life through generic health-related quality of life assessments. However, generic instruments may not be able to factor in the effects of interventions and treatments. SNOT-22 is a major disease-specific instrument for assessing quality of life in patients with rhinosinusitis. Nevertheless, there is still no validated SNOT-22 version in our country. Objective: cross-cultural adaptation of the SNOT-22 into Brazilian Portuguese and assessment of its psychometric properties. The Brazilian version of the SNOT-22 was developed according to international guidelines and was broken down into nine stages: 1) Preparation; 2) Translation; 3) Reconciliation; 4) Back-translation; 5) Comparison; 6) Evaluation by the author of the SNOT-22; 7) Revision by a committee of experts; 8) Cognitive debriefing; 9) Final version. Second phase: a prospective study verifying the psychometric properties by analyzing internal consistency and test-retest reliability. Cultural adaptation showed adequate understanding, acceptability, and psychometric properties. We followed the recommended steps for the cultural adaptation of the SNOT-22 into the Portuguese language, producing a tool of clinical importance for the assessment of patients with sinonasal disorders and for scientific studies.
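
    Internal consistency is most often reported as Cronbach's alpha and test-retest reliability as a correlation (or intraclass correlation) between two administrations. The sketch below shows those two standard computations in their simplest form; the toy response matrix has only 4 items rather than the 22 SNOT-22 items, and none of the numbers come from this study.

        # Minimal sketch: Cronbach's alpha and a test-retest correlation (toy data,
        # not SNOT-22 responses from the study).
        import numpy as np

        def cronbach_alpha(scores):
            """scores: respondents x items matrix."""
            scores = np.asarray(scores, dtype=float)
            k = scores.shape[1]
            item_variances = scores.var(axis=0, ddof=1).sum()
            total_variance = scores.sum(axis=1).var(ddof=1)
            return (k / (k - 1)) * (1 - item_variances / total_variance)

        # 5 respondents x 4 items at test, plus a slightly perturbed retest
        test = np.array([[1, 2, 2, 3], [0, 1, 1, 2], [3, 3, 4, 4], [2, 2, 3, 3], [1, 1, 2, 2]])
        retest = test + np.array([[0, 1, 0, 0], [0, 0, 1, 0], [-1, 0, 0, 0],
                                  [0, 0, 0, 1], [0, 0, 0, 0]])

        print(f"Cronbach's alpha = {cronbach_alpha(test):.2f}")
        r = np.corrcoef(test.sum(axis=1), retest.sum(axis=1))[0, 1]
        print(f"test-retest r (total scores) = {r:.2f}")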

  16. Word level language identification in online multilingual communication

    NARCIS (Netherlands)

    Nguyen, Dong-Phuong; Dogruoz, A. Seza

    2013-01-01

    Multilingual speakers switch between languages in online and spoken communication. Analyses of large scale multilingual data require automatic language identification at the word level. For our experiments with multilingual online discussions, we first tag the language of individual words using
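
    Word-level language identification for code-switched text of the kind described above is often bootstrapped with dictionary lookup plus a character n-gram fallback for out-of-vocabulary words. The sketch below illustrates that general idea only; it is not the authors' tagger, and the word lists, bigram fallback, and example sentence are assumptions.

        # Minimal sketch: word-level language tagging via dictionaries with a
        # character-bigram fallback (illustrative, not the authors' system).
        from collections import Counter

        LEXICON = {  # tiny assumed word lists
            "nl": {"ik", "ben", "vandaag", "naar", "de", "winkel", "geweest"},
            "en": {"i", "went", "shopping", "today", "really", "nice"},
        }

        def char_bigrams(word):
            padded = f"#{word.lower()}#"
            return Counter(padded[i:i + 2] for i in range(len(padded) - 1))

        # crude per-language bigram profiles built from the lexicons themselves
        PROFILES = {lang: sum((char_bigrams(w) for w in words), Counter())
                    for lang, words in LEXICON.items()}

        def tag_word(word):
            w = word.lower().strip(".,!?")
            for lang, words in LEXICON.items():
                if w in words:
                    return lang            # exact dictionary hit
            grams = char_bigrams(w)        # fallback: bigram overlap with each profile
            return max(PROFILES, key=lambda lang: sum(min(c, PROFILES[lang][g])
                                                      for g, c in grams.items()))

        sentence = "ik ben vandaag shopping geweest"   # invented Dutch-English switch
        print([(w, tag_word(w)) for w in sentence.split()])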

  17. Phonological processing of rhyme in spoken language and location in sign language by deaf and hearing participants: a neurophysiological study.

    Science.gov (United States)

    Colin, C; Zuinen, T; Bayard, C; Leybaert, J

    2013-06-01

    Sign languages (SL), like oral languages (OL), organize elementary, meaningless units into meaningful semantic units. Our aim was to compare, at behavioral and neurophysiological levels, the processing of the location parameter in French Belgian SL to that of the rhyme in oral French. Ten hearing and 10 profoundly deaf adults performed a rhyme judgment task in OL and a similarity judgment on location in SL. Stimuli were pairs of pictures. As regards OL, deaf subjects' performances, although above chance level, were significantly lower than those of hearing subjects, suggesting that a metaphonological analysis is possible for deaf people but rests on phonological representations that are less precise than in hearing people. As regards SL, deaf subjects' scores indicated that a metaphonological judgment may be performed on location. The contingent negative variation (CNV) evoked by the first picture of a pair was similar in hearing subjects in OL and in deaf subjects in OL and SL. However, an N400 evoked by the second picture of the non-rhyming pairs was observed only in hearing subjects in OL. The absence of N400 in deaf subjects may be interpreted as the failure to associate two words according to their rhyme in OL or to their location in SL. Although deaf participants can perform metaphonological judgments in OL, they differ from hearing participants both behaviorally and in ERPs. Judgment of location in SL is possible for deaf signers, but, contrary to rhyme judgment in hearing participants, does not elicit any N400. Copyright © 2013 Elsevier Masson SAS. All rights reserved.

  18. Who can communicate with whom? Language experience affects infants' evaluation of others as monolingual or multilingual.

    Science.gov (United States)

    Pitts, Casey E; Onishi, Kristine H; Vouloumanos, Athena

    2015-01-01

    Adults recognize that people can understand more than one language. However, it is unclear whether infants assume other people understand one or multiple languages. We examined whether monolingual and bilingual 20-month-olds expect an unfamiliar person to understand one or more than one language. Two speakers told a listener the location of a hidden object using either the same or two different languages. When different languages were spoken, monolinguals looked longer when the listener searched correctly, bilinguals did not; when the same language was spoken, both groups looked longer for incorrect searches. Infants rely on their prior language experience when evaluating the language abilities of a novel individual. Monolingual infants assume others can understand only one language, although not necessarily the infants' own; bilinguals do not. Infants' assumptions about which community of conventions people belong to may allow them to recognize effective communicative partners and thus opportunities to acquire language, knowledge, and culture. Copyright © 2014 Elsevier B.V. All rights reserved.

  19. Spoken word recognition in young tone language learners: Age-dependent effects of segmental and suprasegmental variation.

    Science.gov (United States)

    Ma, Weiyi; Zhou, Peng; Singh, Leher; Gao, Liqun

    2017-02-01

    The majority of the world's languages rely on both segmental (vowels, consonants) and suprasegmental (lexical tones) information to contrast the meanings of individual words. However, research on early language development has mostly focused on the acquisition of vowel-consonant languages. Developmental research comparing sensitivity to segmental and suprasegmental features in young tone learners is extremely rare. This study examined 2- and 3-year-old monolingual tone learners' sensitivity to vowels and tones. Experiment 1a tested the influence of vowel and tone variation on novel word learning. Vowel and tone variation hindered word recognition efficiency in both age groups. However, tone variation hindered word recognition accuracy only in 2-year-olds, while 3-year-olds were insensitive to tone variation. Experiment 1b demonstrated that 3-year-olds could use tones to learn new words when additional support was provided, and additionally, that Tone 3 words were exceptionally difficult to learn. Experiment 2 confirmed a similar pattern of results when children were presented with familiar words. This study is the first to show that despite the importance of tones in tone languages, vowels maintain primacy over tones in young children's word recognition and that tone sensitivity in word learning and recognition changes between 2 and 3 years of age. The findings suggest that early lexical processes are more tightly constrained by variation in vowels than by tones. Copyright © 2016 Elsevier B.V. All rights reserved.

  20. A Comparison between Written and Spoken Narratives in Aphasia

    Science.gov (United States)

    Behrns, Ingrid; Wengelin, Asa; Broberg, Malin; Hartelius, Lena

    2009-01-01

    The aim of the present study was to explore how a personal narrative told by a group of eight persons with aphasia differed between written and spoken language, and to compare this with findings from 10 participants in a reference group. The stories were analysed through holistic assessments made by 60 participants without experience of aphasia…

  1. Early use of orthographic information in spoken word recognition: Event-related potential evidence from the Korean language.

    Science.gov (United States)

    Kwon, Youan; Choi, Sungmook; Lee, Yoonhyoung

    2016-04-01

    This study examines whether orthographic information is used during prelexical processes in spoken word recognition by investigating ERPs during spoken word processing for Korean words. Differential effects due to orthographic syllable neighborhood size and sound-to-spelling consistency on P200 and N320 were evaluated by recording ERPs from 42 participants during a lexical decision task. The results indicate that the P200 was smaller for words with many orthographic syllable neighbors than for words with few. In addition, a word with a large orthographic syllable neighborhood elicited a smaller N320 effect than a word with a small orthographic syllable neighborhood only when the word had inconsistent sound-to-spelling mapping. The results provide support for the assumption that orthographic information is used early during the prelexical spoken word recognition process. © 2015 Society for Psychophysiological Research.

  2. A randomized trial comparison of the effects of verbal and pictorial naturalistic communication strategies on spoken language for young children with autism.

    Science.gov (United States)

    Schreibman, Laura; Stahmer, Aubyn C

    2014-05-01

    Presently there is no consensus on the specific behavioral treatment of choice for targeting language in young nonverbal children with autism. This randomized clinical trial compared the effectiveness of a verbally-based intervention, Pivotal Response Training (PRT) to a pictorially-based behavioral intervention, the Picture Exchange Communication System (PECS) on the acquisition of spoken language by young (2-4 years), nonverbal or minimally verbal (≤9 words) children with autism. Thirty-nine children were randomly assigned to either the PRT or PECS condition. Participants received on average 247 h of intervention across 23 weeks. Dependent measures included overall communication, expressive vocabulary, pictorial communication and parent satisfaction. Children in both intervention groups demonstrated increases in spoken language skills, with no significant difference between the two conditions. Seventy-eight percent of all children exited the program with more than 10 functional words. Parents were very satisfied with both programs but indicated PECS was more difficult to implement.

  3. Spoken English in the classroom - A Study of attitudes and experiences of spoken varieties of English in English teaching in Norway

    OpenAIRE

    Hopland, Amalie Alsaker

    2016-01-01

    Being able to express oneself orally is one of the five basic skills considered necessary for learning and development in school, at work, and in society, and it therefore deserves attention. The English language has developed from being mainly a language belonging to its native speakers to being a world language. Today, speakers of English use the language mainly to communicate internationally. With this development, the native-speaker norm has been questioned by many academics. This is...

  4. How Does the Linguistic Distance Between Spoken and Standard Language in Arabic Affect Recall and Recognition Performances During Verbal Memory Examination.

    Science.gov (United States)

    Taha, Haitham

    2017-06-01

    The current research examined how Arabic diglossia affects verbal learning memory. Thirty native Arab college students were tested using an auditory verbal memory test that was adapted from the Rey Auditory Verbal Learning Test and developed in three versions: a pure spoken language version (SL), a pure standard language version (SA), and a phonologically similar version (PS). The results showed that for immediate free recall, performance was better in the SL and PS conditions than in the SA condition. However, for delayed recall and recognition, the results did not reveal any significant consistent effect of diglossia. Accordingly, it is suggested that diglossia has a significant effect on storage and short-term memory functions but not on long-term memory functions. The results are discussed in light of different approaches in the field of bilingual memory.

  5. Neural systems supporting linguistic structure, linguistic experience, and symbolic communication in sign language and gesture.

    Science.gov (United States)

    Newman, Aaron J; Supalla, Ted; Fernandez, Nina; Newport, Elissa L; Bavelier, Daphne

    2015-09-15

    Sign languages used by deaf communities around the world possess the same structural and organizational properties as spoken languages: In particular, they are richly expressive and also tightly grammatically constrained. They therefore offer the opportunity to investigate the extent to which the neural organization for language is modality independent, as well as to identify ways in which modality influences this organization. The fact that sign languages share the visual-manual modality with a nonlinguistic symbolic communicative system-gesture-further allows us to investigate where the boundaries lie between language and symbolic communication more generally. In the present study, we had three goals: to investigate the neural processing of linguistic structure in American Sign Language (using verbs of motion classifier constructions, which may lie at the boundary between language and gesture); to determine whether we could dissociate the brain systems involved in deriving meaning from symbolic communication (including both language and gesture) from those specifically engaged by linguistically structured content (sign language); and to assess whether sign language experience influences the neural systems used for understanding nonlinguistic gesture. The results demonstrated that even sign language constructions that appear on the surface to be similar to gesture are processed within the left-lateralized frontal-temporal network used for spoken languages-supporting claims that these constructions are linguistically structured. Moreover, although nonsigners engage regions involved in human action perception to process communicative, symbolic gestures, signers instead engage parts of the language-processing network-demonstrating an influence of experience on the perception of nonlinguistic stimuli.

  6. Evaluating Attributions of Delay and Confusion in Young Bilinguals: Special Insights from Infants Acquiring a Signed and a Spoken Language.

    Science.gov (United States)

    Petitto, Laura Ann; Holowka, Siobhan

    2002-01-01

    Examines whether early simultaneous bilingual language exposure causes children to be language delayed or confused. Cites research suggesting normal and parallel linguistic development occurs in each language in young children and young children's dual language developments are similar to monolingual language acquisition. Research on simultaneous…

  7. Language and the pain experience.

    Science.gov (United States)

    Wilson, Dianne; Williams, Marie; Butler, David

    2009-03-01

    People in persistent pain have been reported to pay increased attention to specific words or descriptors of pain. The amount of attention paid to pain or cues for pain (such as pain descriptors), has been shown to be a major factor in the modulation of persistent pain. This relationship suggests the possibility that language may have a role both in understanding and managing the persistent pain experience. The aim of this paper is to describe current models of neuromatrices for pain and language, consider the role of attention in persistent pain states and highlight discrepancies, in previous studies based on the McGill Pain Questionnaire (MPQ), of the role of attention on pain descriptors. The existence of a pain neuromatrix originally proposed by Melzack (1990) has been supported by emerging technologies. Similar technologies have recently allowed identification of multiple areas of involvement for the processing of auditory input and the construction of language. As with the construction of pain, this neuromatrix for speech and language may intersect with neural systems for broader cognitive functions such as attention, memory and emotion. A systematic search was undertaken to identify experimental or review studies, which specifically investigated the role of attention on pain descriptors (as cues for pain) in persistent pain patients. A total of 99 articles were retrieved from six databases, with 66 articles meeting the inclusion criteria. After duplicated articles were eliminated, the remaining 41 articles were reviewed in order to support a link between persistent pain, pain descriptors and attention. This review revealed a diverse range of specific pain descriptors, the majority of which were derived from the MPQ. Increased attention to pain descriptors was consistently reported to be associated with emotional state as well as being a significant factor in maintaining persistent pain. However, attempts to investigate the attentional bias of specific pain

  8. A Prerequisite to L1 Homophone Effects in L2 Spoken-Word Recognition

    Science.gov (United States)

    Nakai, Satsuki; Lindsay, Shane; Ota, Mitsuhiko

    2015-01-01

    When both members of a phonemic contrast in L2 (second language) are perceptually mapped to a single phoneme in one's L1 (first language), L2 words containing a member of that contrast can spuriously activate L2 words in spoken-word recognition. For example, upon hearing cattle, Dutch speakers of English are reported to experience activation…

  9. The Relationship between Intrinsic Couplings of the Visual Word Form Area with Spoken Language Network and Reading Ability in Children and Adults

    Directory of Open Access Journals (Sweden)

    Yu Li

    2017-06-01

    Full Text Available Reading plays a key role in education and communication in modern society. Learning to read establishes the connections between the visual word form area (VWFA) and language areas responsible for speech processing. Using resting-state functional connectivity (RSFC) and Granger Causality Analysis (GCA) methods, the current developmental study aimed to identify the difference in the relationship between the connections of VWFA-language areas and reading performance in both adults and children. The results showed that: (1) the spontaneous connectivity between VWFA and the spoken language areas, i.e., the left inferior frontal gyrus/supramarginal gyrus (LIFG/LSMG), was stronger in adults compared with children; (2) the spontaneous functional patterns of connectivity between VWFA and the language network were negatively correlated with reading ability in adults but not in children; (3) the causal influence from LIFG to VWFA was negatively correlated with reading ability only in adults but not in children; (4) the RSFCs between the left posterior middle frontal gyrus (LpMFG) and VWFA/LIFG were positively correlated with reading ability in both adults and children; and (5) the causal influence from LIFG to LSMG was positively correlated with reading ability in both groups. These findings provide insights into the relationship between VWFA and the language network for reading, and the role of the unique features of Chinese in the neural circuits of reading.
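
    In resting-state work of this kind, functional connectivity is typically the Pearson correlation between two regions' BOLD time series, and a basic Granger-style test asks whether past values of one region improve prediction of another beyond that region's own past. The sketch below illustrates both ideas on synthetic signals; the time series, single lag, and region labels are assumptions and do not reproduce the study's analysis.

        # Minimal sketch: RSFC as a Pearson correlation and a one-lag Granger-style
        # F test on synthetic time series (illustrative only, not the study's pipeline).
        import numpy as np

        rng = np.random.default_rng(1)
        n = 200
        lifg = rng.normal(size=n)
        vwfa = np.zeros(n)
        for t in range(1, n):                 # make VWFA partly driven by lagged LIFG
            vwfa[t] = 0.4 * vwfa[t - 1] + 0.3 * lifg[t - 1] + rng.normal(scale=0.8)

        rsfc = np.corrcoef(lifg, vwfa)[0, 1]  # resting-state functional connectivity

        # Granger-style comparison: does lagged LIFG reduce the residual error in
        # predicting VWFA beyond VWFA's own past?
        y = vwfa[1:]
        x_restricted = np.column_stack([np.ones(n - 1), vwfa[:-1]])
        x_full = np.column_stack([np.ones(n - 1), vwfa[:-1], lifg[:-1]])
        rss_r = np.sum((y - x_restricted @ np.linalg.lstsq(x_restricted, y, rcond=None)[0]) ** 2)
        rss_f = np.sum((y - x_full @ np.linalg.lstsq(x_full, y, rcond=None)[0]) ** 2)
        df_resid = len(y) - x_full.shape[1]
        f_stat = (rss_r - rss_f) / (rss_f / df_resid)

        print(f"RSFC r = {rsfc:.2f}, Granger F(1, {df_resid}) = {f_stat:.1f}")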

  10. Deaf children’s non-verbal working memory is impacted by their language experience

    Directory of Open Access Journals (Sweden)

    Chloe eMarshall

    2015-05-01

    Full Text Available Recent studies suggest that deaf children perform more poorly on working memory tasks compared to hearing children, but do not say whether this poorer performance arises directly from deafness itself or from deaf children's reduced language exposure. The issue remains unresolved because previous findings (1) come from tasks that are verbal as opposed to non-verbal, and (2) involve deaf children who use spoken communication and therefore may have experienced impoverished input and delayed language acquisition. This is in contrast to deaf children who have been exposed to a sign language since birth from Deaf parents (and who therefore have native language-learning opportunities). A more direct test of how the type and quality of language exposure impacts working memory is to use measures of non-verbal working memory (NVWM) and to compare hearing children with two groups of deaf signing children: those who have had native exposure to a sign language, and those who have experienced delayed acquisition compared to their native-signing peers. In this study we investigated the relationship between NVWM and language in three groups aged 6-11 years: hearing children (n=27), deaf native users of British Sign Language (BSL; n=7), and deaf children who are non-native signers (n=19). We administered a battery of non-verbal reasoning, NVWM, and language tasks. We examined whether the groups differed on NVWM scores, and whether language tasks predicted scores on NVWM tasks. For the two NVWM tasks, the non-native signers performed less accurately than the native signer and hearing groups (who did not differ from one another). Multiple regression analysis revealed that the vocabulary measure predicted scores on NVWM tasks. Our results suggest that whatever the language modality – spoken or signed – rich language experience from birth, and the good language skills that result from this early age of acquisition, play a critical role in the development of NVWM and in performance on NVWM

  11. How appropriate are the English language test requirements for non-UK-trained nurses? A qualitative study of spoken communication in UK hospitals.

    Science.gov (United States)

    Sedgwick, Carole; Garner, Mark

    2017-06-01

    Non-native speakers of English who hold nursing qualifications from outside the UK are required to provide evidence of English language competence by achieving a minimum overall score of Band 7 on the International English Language Testing System (IELTS) academic test. To describe the English language required to deal with the daily demands of nursing in the UK. To compare these abilities with the stipulated levels on the language test. A tracking study was conducted with 4 nurses, and focus groups with 11 further nurses. The transcripts of the interviews and focus groups were analysed thematically for recurrent themes. These findings were then compared with the requirements of the IELTS spoken test. The study was conducted outside the participants' working shifts in busy London hospitals. The participants in the tracking study were selected opportunistically; all were trained in non-English speaking countries. Snowball sampling was used for the focus groups, of whom 4 were non-native and 7 native speakers of English. In the tracking study, each of the 4 nurses was interviewed on four occasions, outside the workplace, and as close to the end of a shift as possible. They were asked to recount their spoken interactions during the course of their shift. The participants in the focus groups were asked to describe their typical interactions with patients, family members, doctors, and nursing colleagues. They were prompted to recall specific instances of frequently-occurring communication problems. All interactions were audio-recorded, with the participants' permission, and transcribed. Nurses are at the centre of communication for patient care. They have to use appropriate registers to communicate with a range of health professionals, patients and their families. They must elicit information, calm and reassure, instruct, check procedures, ask for and give opinions, agree and disagree. Politeness strategies are needed to avoid threats to face. They participate in medical

  12. Language configurations of degree-related denotations in the spoken production of a group of Colombian EFL university students: A corpus-based study

    Directory of Open Access Journals (Sweden)

    Wilder Yesid Escobar

    2015-05-01

    Recognizing that developing the competences needed to appropriately use linguistic resources according to contextual characteristics (pragmatics) is as important as the culturally embedded linguistic knowledge itself (semantics), and that both are equally essential to form competent speakers of English in foreign language contexts, this research relies on corpus linguistics to analyze both the scope and the limitations of the sociolinguistic knowledge and the communicative skills of English students at the university level. To such an end, a linguistic corpus was assembled, compared to an existing corpus of native speakers, and analyzed in terms of the frequency, overuse, underuse, misuse, ambiguity, success, and failure of the linguistic parameters used in speech acts. The findings herein describe the linguistic configurations employed to modify levels and degrees of descriptions (a salient semantic theme exhibited in the EFL learners' corpus), appealing to the sociolinguistic principles governing meaning making and language use, which are constructed under the social conditions of the environments where the language is naturally spoken for sociocultural exchange.

  13. The challenge of linguistic and cultural diversity: Does length of experience affect South African speech-language therapists’ management of children with language impairment?

    Directory of Open Access Journals (Sweden)

    Frenette Southwood

    2015-02-01

    Aims: To investigate whether length of clinical experience influenced: number of bilingual children treated, languages spoken by these children, languages in which assessment and remediation can be offered, assessment instrument(s) favoured, and languages in which therapy material is required. Method: From questionnaires completed by 243 Health Professions Council of South Africa (HPCSA)-registered SLTs who treat children with language problems, two groups were drawn: 71 more experienced (ME) respondents (20+ years of experience) and 79 less experienced (LE) respondents (maximum 5 years of experience). Results: The groups did not differ significantly with regard to (1) number of children (monolingual or bilingual) with language difficulties seen, (2) number of respondents seeing child clients who have Afrikaans or an African language as home language, (3) number of respondents who can offer intervention in Afrikaans or English and (4) number of respondents who reported needing therapy material in Afrikaans or English. However, significantly more ME than LE respondents reported seeing first language child speakers of English, whereas significantly more LE than ME respondents could provide services, and required therapy material, in African languages. Conclusion: More LE than ME SLTs could offer remediation in an African language, but there were few other significant differences between the two groups. There is still an absence of appropriate assessment and remediation material for Afrikaans and African languages, but the increased number of African language speakers entering the profession may contribute to better service delivery to the diverse South African population.

  14. Time course of Chinese monosyllabic spoken word recognition: evidence from ERP analyses.

    Science.gov (United States)

    Zhao, Jingjing; Guo, Jingjing; Zhou, Fengying; Shu, Hua

    2011-06-01

    Evidence from event-related potential (ERP) analyses of English spoken words suggests that the time course of English word recognition in monosyllables is cumulative. Different types of phonological competitors (i.e., rhymes and cohorts) modulate the temporal grain of ERP components differentially (Desroches, Newman, & Joanisse, 2009). The time course of Chinese monosyllabic spoken word recognition could be different from that of English due to the differences in syllable structure between the two languages (e.g., lexical tones). The present study investigated the time course of Chinese monosyllabic spoken word recognition using ERPs to record brain responses online while subjects listened to spoken words. During the experiment, participants were asked to compare a target picture with a subsequent picture by judging whether or not these two pictures belonged to the same semantic category. The spoken word was presented between the two pictures, and participants were not required to respond during its presentation. We manipulated phonological competition by presenting spoken words that either matched or mismatched the target picture in one of the following four ways: onset mismatch, rime mismatch, tone mismatch, or syllable mismatch. In contrast to the English findings, our findings showed that the three partial mismatches (onset, rime, and tone mismatches) equally modulated the amplitudes and time courses of the N400 (a negative component that peaks about 400 ms after the spoken word), whereas the syllable-mismatched words elicited an earlier and stronger N400 than the three partially mismatched words. The results shed light on the important role of syllable-level awareness in Chinese spoken word recognition and also imply that the recognition of Chinese monosyllabic words might rely more on global similarity of the whole syllable structure or syllable-based holistic processing rather than phonemic segment-based processing. We interpret the differences in spoken word

  15. Approaches for Language Identification in Mismatched Environments

    Science.gov (United States)

    2016-09-08

    Keywords: domain adaptation, unsupervised learning, deep neural networks, bottleneck features. Spoken language identification (LID) is ... the process of identifying the language in a spoken speech utterance. In recent years, great improvements in LID system performance have been seen ... be the case in practice. Lastly, we conduct an out-of-set experiment where VoA data from 9 other languages (Amharic, Creole, Croatian, English
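
    As a toy illustration of the LID task described above (not the system evaluated in the report), the sketch below trains a small neural classifier on hypothetical utterance-level feature vectors; the narrow hidden layer loosely stands in for a bottleneck representation. Features, labels, and dimensions are all simulated assumptions.

    import numpy as np
    from sklearn.neural_network import MLPClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    rng = np.random.default_rng(1)
    n_utts, n_dims, n_langs = 600, 40, 3
    X = rng.standard_normal((n_utts, n_dims))   # stand-in utterance-level features
    y = rng.integers(0, n_langs, size=n_utts)   # stand-in language labels

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2,
                                                        random_state=0)
    # A narrow hidden layer loosely mimics a "bottleneck" representation.
    clf = MLPClassifier(hidden_layer_sizes=(64, 8), max_iter=500, random_state=0)
    clf.fit(X_train, y_train)
    print("LID accuracy:", accuracy_score(y_test, clf.predict(X_test)))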

  16. Quarterly Data for Spoken Language Preferences of Supplemental Security Income Blind and Disabled Applicants (2014-2015)

    Data.gov (United States)

    Social Security Administration — This data set provides quarterly volumes for language preferences at the national level of individuals filing claims for SSI Blind and Disabled benefits for fiscal...

  17. Social Security Administration - Quarterly Data for Spoken Language Preferences of Supplemental Security Income Blind and Disabled Applicants (2016-onwards)

    Data.gov (United States)

    Social Security Administration — This data set provides quarterly volumes for language preferences at the national level of individuals filing claims for SSI Blind and Disabled benefits from fiscal...

  18. A Mother Tongue Spoken Mainly by Fathers.

    Science.gov (United States)

    Corsetti, Renato

    1996-01-01

    Reviews what is known about Esperanto as a home language and first language. Recorded cases of Esperanto-speaking families have been known since 1919, and in nearly all of the approximately 350 families documented, the language is spoken to the children by the father. The data suggest that this "artificial bilingualism" can be as successful…

  19. Language and verbal reasoning skills in adolescents with 10 or more years of cochlear implant experience.

    Science.gov (United States)

    Geers, Ann E; Sedey, Allison L

    2011-02-01

    of sign to enhance language skills during the elementary years does not appear to have a negative impact on later language skills, students who continue to rely on sign to improve their vocabulary comprehension into high school typically exhibit poorer English language outcomes than students whose spoken language comprehension parallels or exceeds their comprehension of speech + sign. Overall, the language results obtained from these teenagers with more than 10 yrs of CI experience reflect substantial improvement over the verbal skills exhibited by adolescents with similar levels of hearing loss before the advent of CIs. These optimistic results were observed in teenagers who were among the first in the United States and Canada to receive a CI. We anticipate that the use of improved technology that is being initiated at even younger ages should lead to age-appropriate language levels in an even larger proportion of children with CIs.

  20. Two functions of early language experience.

    Science.gov (United States)

    Arshavsky, Yuri I

    2009-05-01

    The unique human ability of linguistic communication, defined as the ability to produce a practically infinite number of meaningful messages using a finite number of lexical items, is determined by an array of "linguistic" genes, which are expressed in neurons forming domain-specific linguistic centers in the brain. In this review, I discuss the idea that infants' early language experience performs two complementary functions. In addition to allowing infants to assimilate the words and grammar rules of their mother language, early language experience initiates genetic programs underlying language production and comprehension. This hypothesis explains many puzzling characteristics of language acquisition, such as the existence of a critical period for acquiring the first language and the absence of a critical period for the acquisition of additional language(s), a similar timetable for language acquisition in children belonging to families of different social and cultural status, the strikingly similar timetables in the acquisition of oral and sign languages, and the surprisingly small correlation between individuals' final linguistic competence and the intensity of their training. Based on the studies of microcephalic individuals, I argue that genetic factors determine not only the number of neurons and organization of interneural connections within linguistic centers, but also the putative internal properties of neurons that are not limited to their electrophysiological and synaptic properties.

  1. A Pilot Study of Telepractice for Teaching Listening and Spoken Language to Mandarin-Speaking Children with Congenital Hearing Loss

    Science.gov (United States)

    Chen, Pei-Hua; Liu, Ting-Wei

    2017-01-01

    Telepractice provides an alternative form of auditory-verbal therapy (eAVT) intervention through videoconferencing; this can be of immense benefit for children with hearing loss, especially those living in rural or remote areas. The effectiveness of eAVT for the language development of Mandarin-speaking preschoolers with hearing loss was…

  2. Spoken language and everyday functioning in 5-year-old children using hearing aids or cochlear implants.

    Science.gov (United States)

    Cupples, Linda; Ching, Teresa Yc; Button, Laura; Seeto, Mark; Zhang, Vicky; Whitfield, Jessica; Gunnourie, Miriam; Martin, Louise; Marnane, Vivienne

    2017-09-12

    This study investigated the factors influencing 5-year language, speech and everyday functioning of children with congenital hearing loss. Standardised tests including PLS-4, PPVT-4 and DEAP were directly administered to children. Parent reports on language (CDI) and everyday functioning (PEACH) were collected. Regression analyses were conducted to examine the influence of a range of demographic variables on outcomes. Participants were 339 children enrolled in the Longitudinal Outcomes of Children with Hearing Impairment (LOCHI) study. Children's average receptive and expressive language scores were approximately 1 SD below the mean of typically developing children, and scores on speech production and everyday functioning were more than 1 SD below. Regression models accounted for 23-70% of variance in scores across different tests. Earlier CI switch-on and higher non-verbal ability were associated with better outcomes in most domains. Earlier HA fitting and use of oral communication were associated with better outcomes on directly administered language assessments. Severity of hearing loss and maternal education influenced outcomes of children with HAs. The presence of additional disabilities affected outcomes of children with CIs. The findings provide strong evidence for the benefits of early HA fitting and early CI for improving children's outcomes.

  3. When novel sentences spoken or heard for the first time in the history of the universe are not enough: toward a dual-process model of language.

    Science.gov (United States)

    Van Lancker Sidtis, Diana

    2004-01-01

    Although interest in the language sciences was previously focused on newly created sentences, more recently much attention has turned to the importance of formulaic expressions in normal and disordered communication. Also referred to as formulaic expressions and made up of speech formulas, idioms, expletives, serial and memorized speech, slang, sayings, clichés, and conventional expressions, non-propositional language forms a large proportion of every speaker's competence, and may be differentially disturbed in neurological disorders. This review aims to examine non-propositional speech with respect to linguistic descriptions, psycholinguistic experiments, sociolinguistic studies, child language development, clinical language disorders, and neurological studies. Evidence from numerous sources reveals differentiated and specialized roles for novel and formulaic verbal functions, and suggests that generation of novel sentences and management of prefabricated expressions represent two legitimate and separable processes in language behaviour. A preliminary model of language behaviour that encompasses unitary and compositional properties and their integration in everyday language use is proposed. Integration and synchronizing of two disparate processes in language behaviour, formulaic and novel, characterizes normal communicative function and contributes to creativity in language. This dichotomy is supported by studies arising from other disciplines in neurology and psychology. Further studies are necessary to determine in what ways the various categories of formulaic expressions are related, and how these categories are processed by the brain. Better understanding of how non-propositional categories of speech are stored and processed in the brain can lead to better informed treatment strategies in language disorders.

  4. Czech spoken in Bohemia and Moravia

    NARCIS (Netherlands)

    Šimáčková, Š.; Podlipský, V.J.; Chládková, K.

    2012-01-01

    As a western Slavic language of the Indo-European family, Czech is closest to Slovak and Polish. It is spoken as a native language by nearly 10 million people in the Czech Republic (Czech Statistical Office n.d.). About two million people living abroad, mostly in the USA, Canada, Austria, Germany,

  5. The challenge of linguistic and cultural diversity: Does length of experience affect South African speech-language therapists' management of children with language impairment?

    Science.gov (United States)

    Southwood, Frenette; Van Dulm, Ondene

    2015-02-10

    South African speech-language therapists (SLTs) currently do not reflect the country's linguistic and cultural diversity. The question arises as to who might be better equipped currently to provide services to multilingual populations: SLTs with more clinical experience in such contexts, or recently trained SLTs who are themselves linguistically and culturally diverse and whose training programmes deliberately focused on multilingualism and multiculturalism? To investigate whether length of clinical experience influenced: number of bilingual children treated, languages spoken by these children, languages in which assessment and remediation can be offered, assessment instrument(s) favoured, and languages in which therapy material is required. From questionnaires completed by 243 Health Professions Council of South Africa (HPCSA)-registered SLTs who treat children with language problems, two groups were drawn: 71 more experienced (ME) respondents (20+ years of experience) and 79 less experienced (LE) respondents (maximum 5 years of experience). The groups did not differ significantly with regard to (1) number of children (monolingual or bilingual) with language difficulties seen, (2) number of respondents seeing child clients who have Afrikaans or an African language as home language, (3) number of respondents who can offer intervention in Afrikaans or English and (4) number of respondents who reported needing therapy material in Afrikaans or English. However, significantly more ME than LE respondents reported seeing first language child speakers of English, whereas significantly more LE than ME respondents could provide services, and required therapy material, in African languages. More LE than ME SLTs could offer remediation in an African language, but there were few other significant differences between the two groups. There is still an absence of appropriate assessment and remediation material for Afrikaans and African languages, but the increased number of African

  6. The challenge of linguistic and cultural diversity: Does length of experience affect South African speech-language therapists’ management of children with language impairment?

    Science.gov (United States)

    Southwood, Frenette; van Dulm, Ondene

    2015-01-01

    Background: South African speech-language therapists (SLTs) currently do not reflect the country's linguistic and cultural diversity. The question arises as to who might be better equipped currently to provide services to multilingual populations: SLTs with more clinical experience in such contexts, or recently trained SLTs who are themselves linguistically and culturally diverse and whose training programmes deliberately focused on multilingualism and multiculturalism? Aims: To investigate whether length of clinical experience influenced: number of bilingual children treated, languages spoken by these children, languages in which assessment and remediation can be offered, assessment instrument(s) favoured, and languages in which therapy material is required. Method: From questionnaires completed by 243 Health Professions Council of South Africa (HPCSA)-registered SLTs who treat children with language problems, two groups were drawn: 71 more experienced (ME) respondents (20+ years of experience) and 79 less experienced (LE) respondents (maximum 5 years of experience). Results: The groups did not differ significantly with regard to (1) number of children (monolingual or bilingual) with language difficulties seen, (2) number of respondents seeing child clients who have Afrikaans or an African language as home language, (3) number of respondents who can offer intervention in Afrikaans or English and (4) number of respondents who reported needing therapy material in Afrikaans or English. However, significantly more ME than LE respondents reported seeing first language child speakers of English, whereas significantly more LE than ME respondents could provide services, and required therapy material, in African languages. Conclusion: More LE than ME SLTs could offer remediation in an African language, but there were few other significant differences between the two groups. There is still an absence of appropriate assessment and remediation material for Afrikaans and African

  7. Generating Inferences from Written and Spoken Language: A Comparison of Children with Visual Impairment and Children with Sight

    Science.gov (United States)

    Edmonds, Caroline J.; Pring, Linda

    2006-01-01

    The two experiments reported here investigated the ability of sighted children and children with visual impairment to comprehend text and, in particular, to draw inferences both while reading and while listening. Children were assigned into "comprehension skill" groups, depending on the degree to which their reading comprehension skill was in line…

  8. Mobile Assisted Language Learning Experiences

    Science.gov (United States)

    Kim, Daesang; Ruecker, Daniel; Kim, Dong-Joong

    2017-01-01

    The purpose of this study was to investigate the benefits of learning with mobile technology for TESOL students and to explore their perceptions of learning with this type of technology. The study provided valuable insights on how students perceive and adapt to learning with mobile technology for effective learning experiences for both students…

  9. Individual language experience modulates rapid formation of cortical memory circuits for novel words

    Science.gov (United States)

    Kimppa, Lilli; Kujala, Teija; Shtyrov, Yury

    2016-01-01

    Mastering multiple languages is an increasingly important ability in the modern world; furthermore, multilingualism may affect human learning abilities. Here, we test how the brain’s capacity to rapidly form new representations for spoken words is affected by prior individual experience in non-native language acquisition. Formation of new word memory traces is reflected in a neurophysiological response increase during a short exposure to a novel lexicon. Therefore, we recorded changes in electrophysiological responses to phonologically native and non-native novel word-forms during a perceptual learning session, in which novel stimuli were repetitively presented to healthy adults in either ignore or attend conditions. We found that a larger number of previously acquired languages and earlier average age of acquisition (AoA) predicted a greater response increase to novel non-native word-forms. This suggests that early and extensive language experience is associated with greater neural flexibility for acquiring novel words with unfamiliar phonology. Conversely, later AoA was associated with a stronger response increase for phonologically native novel word-forms, indicating better tuning of neural linguistic circuits to native phonology. The results suggest that individual language experience has a strong effect on the neural mechanisms of word learning, and that it interacts with the phonological familiarity of the novel lexicon. PMID:27444206

  10. LANGUAGE POLICIES PURSUED IN THE AXIS OF OTHERING AND IN THE PROCESS OF CONVERTING SPOKEN LANGUAGE OF TURKS LIVING IN RUSSIA INTO THEIR WRITTEN LANGUAGE / RUSYA'DA YASAYAN TÜRKLERİN KONUSMA DİLLERİNİN YAZI DİLİNE DÖNÜSTÜRÜLME SÜRECİ VE ÖTEKİLESTİRME EKSENİNDE İZLENEN DİL POLİTİKALARI

    Directory of Open Access Journals (Sweden)

    Süleyman Kaan YALÇIN (M.A.H.

    2008-12-01

    Language is an object realized in two ways: spoken language and written language. Every language has the characteristics of a spoken language; however, not every language can have the characteristics of a written language, since there are some requirements for a language to be deemed a written language. These requirements are selection, coding, standardization and becoming widespread. A language must meet these requirements, in either a natural or an artificial way, to be deemed a written (standard) language. Turkish, which developed as a single written language until the 13th century, was divided into West Turkish and North-East Turkish by meeting the requirements of a written language in a natural way. Following this separation, and through a natural process, it showed some differences within itself; however, the policy of converting the spoken language of each Turkish clan into its own written language - a policy pursued by Russia in a planned way - turned Turkish, which came to the 20th century as a few written languages, into 20 different written languages. The implementation of the discriminatory language policies suggested to the Russian Government by missionaries such as Slinky and Ostramov, the forcible imposition of a Cyrillic alphabet full of different and unnecessary signs on each Turkish clan, and the othering activities of the Soviet boarding schools that were opened had considerable effects on this process. This study aims at explaining that the conversion of the spoken languages of Turkish societies in Russia into written languages did not result from a natural process; the historical development of the Turkish language, which was shaped into 20 separate written languages only because of the pressure exerted by political will; and how the Russians subjected the language concept - which is the memory of a nation - to an artificial process.

  11. Java Decaffeinated: experiences building a programming language from components

    OpenAIRE

    Farragher, Linda; Dobson, Simon

    2000-01-01

    Non-peer-reviewed. Most modern programming languages are complex and feature rich. Whilst this is (sometimes) an advantage for industrial-strength applications, it complicates both language teaching and language research. We describe our experiences in the design of a reduced sub-set of the Java language and its implementation using the Vanilla language development framework. We argue that Vanilla's component-based approach allows the language's feature set to be varied quickly and simp...

  12. Orthographic Facilitation in Chinese Spoken Word Recognition: An ERP Study

    Science.gov (United States)

    Zou, Lijuan; Desroches, Amy S.; Liu, Youyi; Xia, Zhichao; Shu, Hua

    2012-01-01

    Orthographic influences in spoken word recognition have been previously examined in alphabetic languages. However, it is unknown whether orthographic information affects spoken word recognition in Chinese, which has a clean dissociation between orthography (O) and phonology (P). The present study investigated orthographic effects using event…

  13. Novel Spoken Word Learning in Adults with Developmental Dyslexia

    Science.gov (United States)

    Conner, Peggy S.

    2013-01-01

    A high percentage of individuals with dyslexia struggle to learn unfamiliar spoken words, creating a significant obstacle to foreign language learning after early childhood. The origin of spoken-word learning difficulties in this population, generally thought to be related to the underlying literacy deficit, is not well defined (e.g., Di Betta…

  14. Use of Automated Scoring in Spoken Language Assessments for Test Takers with Speech Impairments. Research Report. ETS RR-17-42

    Science.gov (United States)

    Loukina, Anastassia; Buzick, Heather

    2017-01-01

    This study is an evaluation of the performance of automated speech scoring for speakers with documented or suspected speech impairments. Given that the use of automated scoring of open-ended spoken responses is relatively nascent and there is little research to date that includes test takers with disabilities, this small exploratory study focuses…

  15. Spoken Grammar for Chinese Learners

    Institute of Scientific and Technical Information of China (English)

    徐晓敏

    2013-01-01

    Currently, the concept of spoken grammar has been mentioned among Chinese teachers. However, teachers in China still have a vague idea of spoken grammar. Therefore this dissertation examines what spoken grammar is and argues that native speakers’ model of spoken grammar needs to be highlighted in classroom teaching.

  16. ARM assembly language with hardware experiments

    CERN Document Server

    Elahi, Ata

    2015-01-01

    This book provides a hands-on approach to learning ARM assembly language with the use of a TI microcontroller. The book starts with an introduction to computer architecture and then discusses number systems and digital logic. The text covers ARM Assembly Language, ARM Cortex Architecture and its components, and Hardware Experiments using the TI LM3S1968. Written for those interested in learning embedded programming using an ARM Microcontroller. It introduces number systems and signal transmission methods; reviews logic gates, registers, multiplexers, decoders and memory; provides an overview and examples of the ARM instruction set; uses Keil development tools for writing and debugging ARM assembly language programs; and includes hardware experiments using an Mbed NXP LPC1768 microcontroller, including General Purpose Input/Output (GPIO) configuration, real-time clock configuration, binary input to 7-segment display, creating ...

  17. Rapid modulation of spoken word recognition by visual primes.

    Science.gov (United States)

    Okano, Kana; Grainger, Jonathan; Holcomb, Phillip J

    2016-02-01

    In a masked cross-modal priming experiment with ERP recordings, spoken Japanese words were primed with words written in one of the two syllabary scripts of Japanese. An early priming effect, peaking at around 200 ms after onset of the spoken word target, was seen in left lateral electrode sites for Katakana primes, and later effects were seen for both Hiragana and Katakana primes on the N400 ERP component. The early effect is thought to reflect the efficiency with which words in Katakana script make contact with sublexical phonological representations involved in spoken language comprehension, due to the particular way this script is used by Japanese readers. This demonstrates fast-acting influences of visual primes on the processing of auditory target words, and suggests that briefly presented visual primes can influence their sublexical processing. The later N400 priming effects, on the other hand, most likely reflect cross-modal influences on activity at the level of whole-word phonology and semantics.

  18. "Poetry Is Not a Special Club": How Has an Introduction to the Secondary Discourse of Spoken Word Made Poetry a Memorable Learning Experience for Young People?

    Science.gov (United States)

    Dymoke, Sue

    2017-01-01

    This paper explores the impact of a Spoken Word Education Programme (SWEP hereafter) on young people's engagement with poetry in a group of schools in London, UK. It does so with reference to the secondary Discourses of school-based learning and the Spoken Word community, an artistic "community of practice" into which they were being…

  19. Color Naming Experiment in Mongolian Language

    Directory of Open Access Journals (Sweden)

    Nandin-Erdene Osorjamaa

    2015-11-01

    There are numerous studies of color terms and names in many languages, but in Mongolian there are only a few doctoral theses on color naming. Cross-cultural studies of color naming include work on semantic relevance in French and Mongolian color names by Gerlee Sh. (2000), comparisons of color naming across English and Mongolian by Uranchimeg B. (2004), a semantic comparison between Russian and Mongolian idioms by Enhdelger O. (1996), work across symbolism by Dulam S. (2007), and a few others. A few articles on color naming by Mongolian scholars include Tsevel, Ya. (1947), Baldan, L. (1979), Bazarragchaa, M. (1997) and others. Color naming has not been sufficiently studied in Modern Mongolian. Our research is considered the first dedicated study of color naming in Modern Mongolian, as it forms part of a Ph.D. dissertation on color naming. There are two color naming categories in Mongolian: basic color terms and non-basic color terms. There are seven basic color terms in Mongolian. This paper aims to consider how Mongolian color names are derived from basic colors by using a psycholinguistic associative experiment. It helps students and researchers to acquire a specific understanding of the differences and similarities of color naming in the Mongolian and English languages from the psycholinguistic aspect.

  20. Bilinguals Show Weaker Lexical Access during Spoken Sentence Comprehension

    Science.gov (United States)

    Shook, Anthony; Goldrick, Matthew; Engstler, Caroline; Marian, Viorica

    2015-01-01

    When bilinguals process written language, they show delays in accessing lexical items relative to monolinguals. The present study investigated whether this effect extended to spoken language comprehension, examining the processing of sentences with either low or high semantic constraint in both first and second languages. English-German…

  1. Dyslexia and Learning a Foreign Language: A Personal Experience.

    Science.gov (United States)

    Simon, Charlann S.

    2000-01-01

    This participant-observer report reviews research on how dyslexia complicates learning a second language, describes how dyslexia has affected the author's educational experiences and personal experiences learning a foreign language, and offers recommendations to individuals with dyslexia who are faced with fulfilling a foreign language requirement and their…

  2. Orthographic effects in spoken word recognition: Evidence from Chinese.

    Science.gov (United States)

    Qu, Qingqing; Damian, Markus F

    2017-06-01

    Extensive evidence from alphabetic languages demonstrates a role of orthography in the processing of spoken words. Because alphabetic systems explicitly code speech sounds, such effects are perhaps not surprising. However, it is less clear whether orthographic codes are involuntarily accessed from spoken words in languages with non-alphabetic systems, in which the sound-spelling correspondence is largely arbitrary. We investigated the role of orthography via a semantic relatedness judgment task: native Mandarin speakers judged whether or not spoken word pairs were related in meaning. Word pairs were either semantically related, orthographically related, or unrelated. Results showed that relatedness judgments were made faster for word pairs that were semantically related than for unrelated word pairs. Critically, orthographic overlap on semantically unrelated word pairs induced a significant increase in response latencies. These findings indicate that orthographic information is involuntarily accessed in spoken-word recognition, even in a non-alphabetic language such as Chinese.

  3. Learning a Tonal Language by Attending to the Tone: An In Vivo Experiment

    NARCIS (Netherlands)

    Liu, Y.; Wang, M.; Perfetti, C.A.; Brubaker, B.; Wu, S.M.; MacWhinney, B.

    2011-01-01

    Learning the Chinese tone system is a major challenge to students of Chinese as a second or foreign language. Part of the problem is that the spoken Chinese syllable presents a complex perceptual input that overlaps tone with segments. This complexity can be addressed through directing attention to

  4. The road to language learning is iconic: evidence from British Sign Language.

    Science.gov (United States)

    Thompson, Robin L; Vinson, David P; Woll, Bencie; Vigliocco, Gabriella

    2012-12-01

    An arbitrary link between linguistic form and meaning is generally considered a universal feature of language. However, iconic (i.e., nonarbitrary) mappings between properties of meaning and features of linguistic form are also widely present across languages, especially signed languages. Although recent research has shown a role for sign iconicity in language processing, research on the role of iconicity in sign-language development has been mixed. In this article, we present clear evidence that iconicity plays a role in sign-language acquisition for both the comprehension and production of signs. Signed languages were taken as a starting point because they tend to encode a higher degree of iconic form-meaning mappings in their lexicons than spoken languages do, but our findings are more broadly applicable: Specifically, we hypothesize that iconicity is fundamental to all languages (signed and spoken) and that it serves to bridge the gap between linguistic form and human experience.

  5. Experiences with Autonomy: Learners' Voices on Language Learning

    Science.gov (United States)

    Kristmanson, Paula; Lafargue, Chantal; Culligan, Karla

    2013-01-01

    This article focuses on the experiences of Grade 12 students using a language portfolio based on the principles and guidelines of the European Language Portfolio (ELP) in their second language classes in a large urban high school. As part of a larger action-research project, focus group interviews were conducted to gather data related to…

  6. Spoken Sentence Production in College Students with Dyslexia: Working Memory and Vocabulary Effects

    Science.gov (United States)

    Wiseheart, Rebecca; Altmann, Lori J. P.

    2018-01-01

    Background: Individuals with dyslexia demonstrate syntactic difficulties on tasks of language comprehension, yet little is known about spoken language production in this population. Aims: To investigate whether spoken sentence production in college students with dyslexia is less proficient than in typical readers, and to determine whether group…

  7. Code-switched English pronunciation modeling for Swahili spoken term detection

    CSIR Research Space (South Africa)

    Kleynhans, N

    2016-05-01

    Computer Science 81 (2016) 128-135, 5th Workshop on Spoken Language Technology for Under-resourced Languages (SLTU 2016), 9-12 May 2016, Yogyakarta, Indonesia. Code-switched English Pronunciation Modeling for Swahili Spoken Term Detection. Neil...

  8. Phonological Analysis of University Students’ Spoken Discourse

    Directory of Open Access Journals (Sweden)

    Clara Herlina

    2011-04-01

    The study of discourse is the study of language in actual use. In this article, the writer investigates the phonological features, either segmental or supra-segmental, in the spoken discourse of Indonesian university students. The data were taken from recordings of 15 conversations by 30 students of Bina Nusantara University who are taking the English Entrant subject (TOEFL-iBT). Finally, the writer is of the opinion that the students are still influenced by their first language in their spoken discourse. This results in English with an Indonesian accent. Even though it does not cause misunderstanding at the moment, this may become problematic if they have to communicate in the real world.

  9. Experience-based probabilities modulate expectations in a gender-coded artificial language

    Directory of Open Access Journals (Sweden)

    Anton Öttl

    2016-08-01

    The current study combines artificial language learning with visual world eyetracking to investigate acquisition of representations associating spoken words and visual referents using morphologically complex pseudowords. Pseudowords were constructed to consistently encode referential gender by means of suffixation for a set of imaginary figures that could be either male or female. During training, the frequency of exposure to pseudowords and their imaginary figure referents were manipulated such that a given word and its referent would be more likely to occur in either the masculine form or the feminine form, or both forms would be equally likely. Results show that these experience-based probabilities affect the formation of new representations to the extent that participants were faster at recognizing a referent whose gender was consistent with the induced expectation than a referent whose gender was inconsistent with this expectation. Disambiguating gender information available from the suffix did not mask the induced expectations. Eyetracking data provide additional evidence that such expectations surface during online lexical processing. Taken together, these findings indicate that experience-based information is accessible during the earliest stages of processing, and are consistent with the view that language comprehension depends on the activation of perceptual memory traces.

  10. Cross-Cultural Differences in Beliefs and Practices that Affect the Language Spoken to Children: Mothers with Indian and Western Heritage

    Science.gov (United States)

    Simmons, Noreen; Johnston, Judith

    2007-01-01

    Background: Speech-language pathologists often advise families about interaction patterns that will facilitate language learning. This advice is typically based on research with North American families of European heritage and may not be culturally suited for non-Western families. Aims: The goal of the project was to identify differences in the…

  11. THE ‘UNFORGETTABLE’ EXPERIENCE OF FOREIGN LANGUAGE ANXIETY

    Directory of Open Access Journals (Sweden)

    Morana Drakulić

    2015-09-01

    Foreign language anxiety (FLA) has long been recognized as a factor that hinders the process of foreign language learning at all levels. Among the numerous FLA sources identified in the literature, the language classroom seems to be of particular interest and significance, especially in the formal language learning context, where the course and the teacher are often the only representatives of the language. The main purpose of the study is to determine the presence and potential sources of foreign language anxiety among first-year university students and to explore how high anxiety levels shape and affect students' foreign language learning experience. In the study, both a questionnaire and interviews were used as data collection methods. Thematic analysis of the interviews and descriptive statistics suggest that most anxiety-provoking situations stem from the language classroom itself.

  12. The Link between Form and Meaning in American Sign Language: Lexical Processing Effects

    Science.gov (United States)

    Thompson, Robin L.; Vinson, David P.; Vigliocco, Gabriella

    2009-01-01

    Signed languages exploit iconicity (the transparent relationship between meaning and form) to a greater extent than spoken languages, where it is largely limited to onomatopoeia. In a picture-sign matching experiment measuring reaction times, the authors examined the potential advantage of iconicity both for 1st- and 2nd-language learners of…

  13. Sign language: an international handbook

    NARCIS (Netherlands)

    Pfau, R.; Steinbach, M.; Woll, B.

    2012-01-01

    Sign language linguists show here that all the questions relevant to the linguistic investigation of spoken languages can be asked about sign languages. Conversely, questions that sign language linguists consider - even if spoken language researchers have not asked them yet - should also be asked of

  14. The effect of differential listening experience on the development of expressive and receptive language in children with bilateral cochlear implants.

    Science.gov (United States)

    Hess, Christi; Zettler-Greeley, Cynthia; Godar, Shelly P; Ellis-Weismer, Susan; Litovsky, Ruth Y

    2014-01-01

    Growing evidence suggests that children who are deaf and use cochlear implants (CIs) can communicate effectively using spoken language. Research has reported that age of implantation and length of experience with the CI play an important role in predicting a child's linguistic development. In recent years, the increase in the number of children receiving bilateral CIs (BiCIs) has led to interest in new variables that may also influence the development of hearing, speech, and language abilities, such as length of bilateral listening experience and the length of time between the implantation of the two CIs. One goal of the present study was to determine how a cohort of children with BiCIs performed on standardized measures of language and nonverbal cognition. This study examined the relationship between performance on language and nonverbal intelligence quotient (IQ) tests and the ages at implantation of the first CI and second CI. This study also examined whether early bilateral activation is related to better language scores. Children with BiCIs (n = 39; ages 4 to 9 years) were tested on two standardized measures, the Test of Language Development and the Leiter International Performance Scale-Revised, to evaluate their expressive/receptive language skills and nonverbal IQ/memory. Hierarchical regression analyses were used to evaluate whether BiCI hearing experience predicts language performance. While large intersubject variability existed, on average, almost all the children with BiCIs scored within or above normal limits on measures of nonverbal cognition. Expressive and receptive language scores were highly variable, less likely to be above the normative mean, and did not correlate with Length of first CI Use, defined as length of auditory experience with one cochlear implant, or Length of second CI Use, defined as length of auditory experience with two cochlear implants. All children in the present study had BiCIs. Most IQ scores were either at or above that
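
    The record above reports hierarchical regression analyses testing whether bilateral CI experience predicts language performance. A minimal sketch of a nested-model comparison of that general kind is shown below, using simulated data and hypothetical variable names (it is not the authors' analysis): step 1 enters age and nonverbal ability, step 2 adds length of bilateral CI use, and the change in R-squared is inspected.

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(2)
    n = 39
    age = rng.uniform(4, 9, n)                      # simulated age in years
    nonverbal_iq = rng.normal(100, 15, n)           # simulated nonverbal ability
    bilateral_use_years = rng.uniform(0.5, 6, n)    # simulated bilateral CI experience
    language_score = (80 + 0.3 * nonverbal_iq + 2.0 * bilateral_use_years
                      + rng.normal(0, 8, n))        # simulated outcome

    # Step 1: demographic/cognitive predictors only; Step 2: add bilateral experience.
    step1 = sm.add_constant(np.column_stack([age, nonverbal_iq]))
    step2 = sm.add_constant(np.column_stack([age, nonverbal_iq, bilateral_use_years]))

    fit1 = sm.OLS(language_score, step1).fit()
    fit2 = sm.OLS(language_score, step2).fit()
    print(f"R2 step 1: {fit1.rsquared:.3f}")
    print(f"R2 step 2: {fit2.rsquared:.3f}  (delta = {fit2.rsquared - fit1.rsquared:.3f})")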

  15. Learning a Minoritized Language in a Majority Language Context: Student Agency and the Creation of Micro-Immersion Contexts

    Science.gov (United States)

    DePalma, Renée

    2015-01-01

    This study investigates the self-reported experiences of students participating in a Galician language and culture course. Galician, a language historically spoken in northwestern Spain, has been losing ground with respect to Spanish, particularly in urban areas and among the younger generations. The research specifically focuses on informal…

  16. Cross-cultural differences in beliefs and practices that affect the language spoken to children: mothers with Indian and Western heritage.

    Science.gov (United States)

    Simmons, Noreen; Johnston, Judith

    2007-01-01

    Speech-language pathologists often advise families about interaction patterns that will facilitate language learning. This advice is typically based on research with North American families of European heritage and may not be culturally suited for non-Western families. The goal of the project was to identify differences in the beliefs and practices of Indian and Euro-Canadian mothers that would affect patterns of talk to children. A total of 47 Indian mothers and 51 Euro-Canadian mothers of preschool age children completed a written survey concerning child-rearing practices and beliefs, especially those about talk to children. Discriminant analyses indicated clear cross-cultural differences and produced functions that could predict group membership with a 96% accuracy rate. Items contributing most to these functions concerned the importance of family, perceptions of language learning, children's use of language in family and society, and interactions surrounding text. Speech-language pathologists who wish to adapt their services for families of Indian heritage should remember the centrality of the family, the likelihood that there will be less emphasis on early independence and achievement, and the preference for direct instruction.

  17. Students' Evaluation of Their English Language Learning Experience

    Science.gov (United States)

    Maizatulliza, M.; Kiely, R.

    2017-01-01

    In the field of English language teaching and learning, there is a long history of investigating students' performance while they are undergoing specific learning programmes. This research study, however, focused on students' evaluation of their English language learning experience after they have completed their programme. The data were gathered…

  18. Hotel Employees' Japanese Language Experiences: Implications and Suggestions.

    Science.gov (United States)

    Makita-Discekici, Yasuko

    1998-01-01

    Analyzes the Japanese language learning experiences of 13 hotel employees in Guam. Results of the study present implications and suggestions for a Japanese language program for the hotel industry. The project began as a result of hotel employees' frustrations when they were unable to communicate effectively with their Japanese guests. (Auth/JL)

  19. Subtitles and language learning principles, strategies and practical experiences

    CERN Document Server

    Mariotti, Cristina; Caimi, Annamaria

    2014-01-01

    The articles collected in this publication combine diachronic and synchronic research with the description of updated teaching experiences showing the educational role of subtitled audiovisuals in various foreign language learning settings.

  20. Usage of the Python programming language in the CMS experiment

    International Nuclear Information System (INIS)

    Wilkinson, R; Hegner, B; Jones, C D

    2010-01-01

    Being a highly dynamic language and allowing reliable programming with quick turnarounds, Python is a widely used programming language in CMS. Most of the tools used in workflow management and the GRID interface tools are written in this language. Most of the tools used in the context of release management (integration builds, release building and deploying, as well as performance measurements) are also in Python. With an interface to the CMS data formats, rapid prototyping of analyses and debugging is an additional use case. Finally, in 2008 the CMS experiment switched to using Python as its configuration language. This paper will give an overview of the general usage of Python in the CMS experiment and discuss which features of the language make it well-suited for the existing use cases.
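
    To illustrate the general idea of using Python as a configuration language (this is an illustrative sketch only, not the CMS configuration API; all class and attribute names here are hypothetical), a processing job can be described as ordinary Python objects that are composed and inspected before the job runs:

    from dataclasses import dataclass, field

    @dataclass
    class Module:
        name: str
        parameters: dict = field(default_factory=dict)

    @dataclass
    class Process:
        name: str
        modules: list = field(default_factory=list)

        def add(self, module: Module) -> None:
            self.modules.append(module)

        def describe(self) -> str:
            lines = [f"process {self.name}"]
            for m in self.modules:
                lines.append(f"  module {m.name}: {m.parameters}")
            return "\n".join(lines)

    # Because the configuration is plain Python, it can be generated,
    # parameterised and validated programmatically before anything runs.
    process = Process("DemoAnalysis")
    process.add(Module("source", {"fileNames": ["file:events.root"], "maxEvents": 100}))
    process.add(Module("analyzer", {"threshold": 2.5}))
    print(process.describe())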

  1. Word frequencies in written and spoken English based on the British National Corpus

    CERN Document Server

    Leech, Geoffrey; Wilson, Andrew (All Of Lancaster University)

    2014-01-01

    Word Frequencies in Written and Spoken English is a landmark volume in the development of vocabulary frequency studies. Whereas previous books have in general given frequency information about the written language only, this book provides information on both speech and writing. It not only gives information about the language as a whole, but also about the differences between spoken and written English, and between different spoken and written varieties of the language. The frequencies are derived from a wide-ranging and up-to-date corpus of English: the British National Corpus.
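
    As a toy sketch of the kind of comparison such a frequency volume tabulates, the snippet below computes relative word frequencies in two invented mini-corpora, one "spoken" and one "written"; it does not use the British National Corpus, and the example sentences are made up.

    from collections import Counter

    def rel_freq(tokens):
        """Relative frequency of each word in a token list."""
        counts = Counter(tokens)
        total = sum(counts.values())
        return {w: c / total for w, c in counts.items()}

    spoken = "well I mean you know it was really really good".lower().split()
    written = "the committee concluded that the proposal was sound".lower().split()

    spoken_f, written_f = rel_freq(spoken), rel_freq(written)
    for word in sorted(set(spoken_f) | set(written_f)):
        print(f"{word:10s} spoken={spoken_f.get(word, 0):.3f} "
              f"written={written_f.get(word, 0):.3f}")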

  2. Competition dynamics of second-language listening

    NARCIS (Netherlands)

    Broersma, M.; Cutler, A.

    2011-01-01

    Spoken-word recognition in a nonnative language is particularly difficult where it depends on discrimination between confusable phonemes. Four experiments here examine whether this difficulty is in part due to phantom competition from "near-words" in speech. Dutch listeners confuse English /ae/ and

  3. Understanding foreign language teachers’ practical knowledge: What’s the role of prior language learning experience?

    Directory of Open Access Journals (Sweden)

    Sibel Arıoğul

    2007-04-01

    Teachers’ practical knowledge is considered as teachers’ general knowledge, beliefs and thinking (Borg, 2003), which can be traced in teachers’ practices (Connelly & Clandinin, 1988) and shaped by various background sources (Borg, 2003; Grossman, 1990; Meijer, Verloop, and Beijard, 1999). This paper initially discusses how language teachers are influenced by three background sources: teachers’ prior language learning experiences, prior teaching experience, and professional coursework in pre- and in-service education. By drawing its data from the author’s longitudinal study, it also presents the findings of a cross-case theme that emerged from the investigation of three English as a foreign language (EFL) teachers’ prior language learning experiences. The paper also discusses how participation in studies on teachers’ knowledge raises teachers’ own awareness while it informs the research.

  4. Spoken word recognition without a TRACE

    Science.gov (United States)

    Hannagan, Thomas; Magnuson, James S.; Grainger, Jonathan

    2013-01-01

    How do we map the rapid input of spoken language onto phonological and lexical representations over time? Attempts at psychologically-tractable computational models of spoken word recognition tend either to ignore time or to transform the temporal input into a spatial representation. TRACE, a connectionist model with broad and deep coverage of speech perception and spoken word recognition phenomena, takes the latter approach, using exclusively time-specific units at every level of representation. TRACE reduplicates featural, phonemic, and lexical inputs at every time step in a large memory trace, with rich interconnections (excitatory forward and backward connections between levels and inhibitory links within levels). As the length of the memory trace is increased, or as the phoneme and lexical inventory of the model is increased to a realistic size, this reduplication of time- (temporal position) specific units leads to a dramatic proliferation of units and connections, begging the question of whether a more efficient approach is possible. Our starting point is the observation that models of visual object recognition—including visual word recognition—have grappled with the problem of spatial invariance, and arrived at solutions other than a fully-reduplicative strategy like that of TRACE. This inspires a new model of spoken word recognition that combines time-specific phoneme representations similar to those in TRACE with higher-level representations based on string kernels: temporally independent (time invariant) diphone and lexical units. This reduces the number of necessary units and connections by several orders of magnitude relative to TRACE. Critically, we compare the new model to TRACE on a set of key phenomena, demonstrating that the new model inherits much of the behavior of TRACE and that the drastic computational savings do not come at the cost of explanatory power. PMID:24058349
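
    A highly simplified illustration of the string-kernel idea sketched in this abstract: represent each word by counts of its diphones (adjacent phoneme pairs) and match an input against the lexicon with a dot product over those counts. This toy sketch is not the published model; the phoneme strings and the lexicon are invented.

    from collections import Counter

    def diphones(phonemes: str) -> Counter:
        """Count adjacent phoneme pairs, e.g. 'kat' -> {'ka': 1, 'at': 1}."""
        return Counter(phonemes[i:i + 2] for i in range(len(phonemes) - 1))

    def similarity(a: str, b: str) -> int:
        """String-kernel style similarity: dot product of diphone count vectors."""
        da, db = diphones(a), diphones(b)
        return sum(da[k] * db[k] for k in da)

    lexicon = ["kat", "kap", "bat", "katalog"]     # invented phoneme strings
    spoken_input = "kat"
    scores = {word: similarity(spoken_input, word) for word in lexicon}
    best = max(scores, key=scores.get)
    print(scores, "-> recognised:", best)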

  5. Evaluating the spoken English proficiency of graduates of foreign medical schools.

    Science.gov (United States)

    Boulet, J R; van Zanten, M; McKinley, D W; Gary, N E

    2001-08-01

    The purpose of this study was to gather additional evidence for the validity and reliability of spoken English proficiency ratings provided by trained standardized patients (SPs) in a high-stakes clinical skills examination. Over 2500 candidates who took the Educational Commission for Foreign Medical Graduates' (ECFMG) Clinical Skills Assessment (CSA) were studied. The CSA consists of 10 or 11 timed clinical encounters. Standardized patients evaluate spoken English proficiency and interpersonal skills in every encounter. Generalizability theory was used to estimate the consistency of spoken English ratings. Validity coefficients were calculated by correlating summary English ratings with CSA scores and other external criterion measures. Mean spoken English ratings were also compared across various candidate background variables. The reliability of the spoken English ratings, based on 10 independent evaluations, was high. The magnitudes of the associated variance components indicated that the evaluation of a candidate's spoken English proficiency is unlikely to be affected by the choice of cases or SPs used in a given assessment. Proficiency in spoken English was related to native language (English versus other) and scores from the Test of English as a Foreign Language (TOEFL). The pattern of the relationships, both within assessment components and with external criterion measures, suggests that valid measures of spoken English proficiency are obtained. This result, combined with the high reproducibility of the ratings over encounters and SPs, supports the use of trained SPs to measure spoken English skills in a simulated medical environment.
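
    As a simplified stand-in for the reliability and validity analyses described above (internal consistency across encounters via Cronbach's alpha rather than a full generalizability study, plus a validity correlation against an external criterion), the sketch below uses simulated ratings; all numbers and variable names are hypothetical.

    import numpy as np
    from scipy import stats

    def cronbach_alpha(ratings):
        """ratings: (n_candidates, n_encounters) matrix of English ratings."""
        k = ratings.shape[1]
        item_vars = ratings.var(axis=0, ddof=1)
        total_var = ratings.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

    rng = np.random.default_rng(3)
    true_ability = rng.normal(0, 1, 250)                              # simulated ability
    ratings = true_ability[:, None] + rng.normal(0, 0.5, (250, 10))   # 10 encounters
    toefl = 550 + 30 * true_ability + rng.normal(0, 20, 250)          # simulated criterion

    print(f"alpha over encounters: {cronbach_alpha(ratings):.2f}")
    r, p = stats.pearsonr(ratings.mean(axis=1), toefl)
    print(f"validity r with external criterion: {r:.2f}")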

  6. “You never know who are Sami or speak Sami” Clinicians’ experiences with language-appropriate care to Sami-speaking patients in outpatient mental health clinics in Northern Norway

    Directory of Open Access Journals (Sweden)

    Inger Dagsvold

    2016-11-01

    Full Text Available Background: The Indigenous population in Norway, the Sami, have a statutory right to speak and be spoken to in the Sami language when receiving health services. There is, however, limited knowledge about how clinicians deal with this in clinical practice. This study explores how clinicians deal with language-appropriate care with Sami-speaking patients in specialist mental health services. Objectives: This study aims to explore how clinicians identify and respond to Sami patients’ language data, as well as how they experience provision of therapy to Sami-speaking patients in outpatient mental health clinics in Sami language administrative districts. Method: Data were collected using a qualitative method, through individual interviews with 20 therapists working in outpatient mental health clinics serving Sami populations in northern Norway. A thematic analysis inspired by systematic text reduction was employed. Findings: Two themes were identified: (a) identification of Sami patients’ language data and (b) experiences with provision of therapy to Sami-speaking patients. Conclusion: Findings indicate that clinicians are not aware of patients’ language needs prior to admission and that they deal with identification of language data and offer of language-appropriate care ad hoc when patients arrive. Sami-speaking participants reported always offering language choice and found more profound understanding of patients’ experiences when Sami language was used. Whatever language Sami-speaking patients may choose, they are found to switch between languages during therapy. Most non-Sami-speaking participants reported offering Sami-speaking services, but the patients chose to speak Norwegian. However, a few of the participants maintained language awareness and could identify language needs despite a patient's refusal to speak Sami in therapy. Finally, some non-Sami-speaking participants were satisfied if they understood what the patients were saying

  7. "You never know who are Sami or speak Sami" Clinicians' experiences with language-appropriate care to Sami-speaking patients in outpatient mental health clinics in Northern Norway.

    Science.gov (United States)

    Dagsvold, Inger; Møllersen, Snefrid; Stordahl, Vigdis

    2016-01-01

    The Indigenous population in Norway, the Sami, have a statutory right to speak and be spoken to in the Sami language when receiving health services. There is, however, limited knowledge about how clinicians deal with this in clinical practice. This study explores how clinicians deal with language-appropriate care with Sami-speaking patients in specialist mental health services. This study aims to explore how clinicians identify and respond to Sami patients' language data, as well as how they experience provision of therapy to Sami-speaking patients in outpatient mental health clinics in Sami language administrative districts. Data were collected using a qualitative method, through individual interviews with 20 therapists working in outpatient mental health clinics serving Sami populations in northern Norway. A thematic analysis inspired by systematic text reduction was employed. Two themes were identified: (a) identification of Sami patients' language data and (b) experiences with provision of therapy to Sami-speaking patients. Findings indicate that clinicians are not aware of patients' language needs prior to admission and that they deal with identification of language data and offer of language-appropriate care ad hoc when patients arrive. Sami-speaking participants reported always offering language choice and found more profound understanding of patients' experiences when Sami language was used. Whatever language Sami-speaking patients may choose, they are found to switch between languages during therapy. Most non-Sami-speaking participants reported offering Sami-speaking services, but the patients chose to speak Norwegian. However, a few of the participants maintained language awareness and could identify language needs despite a patient's refusal to speak Sami in therapy. Finally, some non-Sami-speaking participants were satisfied if they understood what the patients were saying. They left it to patients to address language problems, only to discover patients

  8. Spoken grammar awareness raising: Does it affect the listening ability of Iranian EFL learners?

    Directory of Open Access Journals (Sweden)

    Mojgan Rashtchi

    2011-12-01

    Full Text Available Advances in spoken corpora analysis have brought about new insights into language pedagogy and have led to an awareness of the characteristics of spoken language. Current findings have shown that the grammar of spoken language is different from that of written language. However, most listening and speaking materials are concocted based on written grammar and lack core spoken language features. The aim of the present study was to explore whether awareness of spoken grammar features could affect learners’ comprehension of real-life conversations. To this end, 45 university students in two intact classes participated in a listening course employing corpus-based materials. The instruction of the spoken grammar features to the experimental group was done overtly through awareness-raising tasks, whereas the control group, though exposed to the same materials, was not provided with such tasks for learning the features. The results of the independent samples t tests revealed that the learners in the experimental group comprehended everyday conversations much better than those in the control group. Additionally, the highly positive views of spoken grammar held by the learners, which were elicited by means of a retrospective questionnaire, were generally comparable to those reported in the literature.

  9. Lexicon Optimization for Dutch Speech Recognition in Spoken Document Retrieval

    NARCIS (Netherlands)

    Ordelman, Roeland J.F.; van Hessen, Adrianus J.; de Jong, Franciska M.G.

    In this paper, ongoing work concerning the language modelling and lexicon optimization of a Dutch speech recognition system for Spoken Document Retrieval is described: the collection and normalization of a training data set and the optimization of our recognition lexicon. Effects on lexical coverage

  10. Lexicon optimization for Dutch speech recognition in spoken document retrieval

    NARCIS (Netherlands)

    Ordelman, Roeland J.F.; van Hessen, Adrianus J.; de Jong, Franciska M.G.; Dalsgaard, P.; Lindberg, B.; Benner, H.

    2001-01-01

    In this paper, ongoing work concerning the language modelling and lexicon optimization of a Dutch speech recognition system for Spoken Document Retrieval is described: the collection and normalization of a training data set and the optimization of our recognition lexicon. Effects on lexical coverage

  11. Automated Scoring of L2 Spoken English with Random Forests

    Science.gov (United States)

    Kobayashi, Yuichiro; Abe, Mariko

    2016-01-01

    The purpose of the present study is to assess second language (L2) spoken English using automated scoring techniques. Automated scoring aims to classify a large set of learners' oral performance data into a small number of discrete oral proficiency levels. In automated scoring, objectively measurable features such as the frequencies of lexical and…
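
    As a rough, hypothetical illustration of this approach (not the authors' feature set or data), the sketch below trains a random forest to map a few objectively measurable features of an oral performance onto discrete proficiency levels and reports cross-validated accuracy and feature importances.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical features per oral response:
# [tokens per minute, type-token ratio, mean word length, filled-pause rate]
rng = np.random.default_rng(0)
X = rng.random((200, 4))
# Hypothetical rule linking features to proficiency levels (0/1/2), just so the
# toy data are learnable; a real system would use human-assigned levels.
y = np.digitize(X[:, 0] + 0.5 * X[:, 1], bins=[0.5, 1.0])

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("mean CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())

# Feature importances suggest which measurable features drive the classification.
clf.fit(X, y)
print("feature importances:", clf.feature_importances_.round(3))
```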

  12. Automated Metadata Extraction for Semantic Access to Spoken Word Archives

    NARCIS (Netherlands)

    de Jong, Franciska M.G.; Heeren, W.F.L.; van Hessen, Adrianus J.; Ordelman, Roeland J.F.; Nijholt, Antinus; Ruiz Miyares, L.; Alvarez Silva, M.R.

    2011-01-01

    Archival practice is shifting from the analogue to the digital world. A specific subset of heritage collections that impose interesting challenges for the field of language and speech technology are spoken word archives. Given the enormous backlog at audiovisual archives of unannotated materials and

  13. Error detection in spoken human-machine interaction

    NARCIS (Netherlands)

    Krahmer, E.J.; Swerts, M.G.J.; Theune, M.; Weegels, M.F.

    2001-01-01

    Given the state of the art of current language and speech technology, errors are unavoidable in present-day spoken dialogue systems. Therefore, one of the main concerns in dialogue design is how to decide whether or not the system has understood the user correctly. In human-human communication,

  14. Error detection in spoken human-machine interaction

    NARCIS (Netherlands)

    Krahmer, E.; Swerts, M.; Theune, Mariet; Weegels, M.

    Given the state of the art of current language and speech technology, errors are unavoidable in present-day spoken dialogue systems. Therefore, one of the main concerns in dialogue design is how to decide whether or not the system has understood the user correctly. In human-human communication,

  15. Language proficiency and the international postgraduate student experience

    OpenAIRE

    Weaver, M

    2016-01-01

    In an increasingly competitive environment, with reduced government funding, full fee-paying international students are an important source of revenue for higher education institutions (HEIs). Although many previous studies have focused on the role of English language proficiency in academic success, little is known about the extent to which levels of English language proficiency affect these non-native English-speaking students’ overall course experience. There has been a wealth of st...

  16. L1 and L2 Spoken Word Processing: Evidence from Divided Attention Paradigm.

    Science.gov (United States)

    Shafiee Nahrkhalaji, Saeedeh; Lotfi, Ahmad Reza; Koosha, Mansour

    2016-10-01

    The present study aims to reveal some facts concerning first language (L1) and second language (L2) spoken-word processing in unbalanced proficient bilinguals using behavioral measures. The intention here is to examine the effects of auditory repetition word priming and semantic priming in first and second languages of these bilinguals. The other goal is to explore the effects of attention manipulation on implicit retrieval of perceptual and conceptual properties of spoken L1 and L2 words. In so doing, the participants performed auditory word priming and semantic priming as memory tests in their L1 and L2. In half of the trials of each experiment, they carried out the memory test while simultaneously performing a secondary task in the visual modality. The results revealed that effects of auditory word priming and semantic priming were present when participants processed L1 and L2 words in the full attention condition. Attention manipulation could reduce priming magnitude in both experiments in L2. Moreover, L2 word retrieval increases the reaction times and reduces accuracy on the simultaneous secondary task to protect its own accuracy and speed.

  17. Transnational Experience, Aspiration and Family Language Policy

    Science.gov (United States)

    Hua, Zhu; Wei, Li

    2016-01-01

    Transnational and multilingual families have become commonplace in the twenty-first century. Yet relatively few attempts have been made from applied and socio-linguistic perspectives to understand what is going on "within" such families; how their transnational and multilingual experiences impact on the family dynamics and their everyday…

  18. The Pakistan Experiment and the Language Issue

    NARCIS (Netherlands)

    van Schendel, W.; Guhathakurta, M.; van Schendel, W.

    2013-01-01

    The partition of 1947 created two new independent states, India and Pakistan. The eastern part of Bengal joined Pakistan. Pakistan was a highly ambitious experiment in twentieth-century state making. And yet, from the beginning the state was beset with enormous challenges. This excerpt from a recent

  19. The Interstitial Language and Transnational Experience

    Directory of Open Access Journals (Sweden)

    Paolo Bartoloni

    2013-08-01

    Full Text Available In this essay I argue that the idea of inhabiting, and of human individuality as the house of being, are fruitful ideas if located in a space defined by movement, porosity, interstitiality, and in an urban and architectural paradigm which is based on openness and inclusiveness. Transnational experiences and localities can be, to this end, extremely instructive. It is essential to articulate the notion of dwelling within an urban context in which building is the result of complex cultural and social interactions, which are characterised not only by the negotiation of space and materials but also, and more importantly, by a range of symbolic values. The symbolism that I refer to here is the product of mnemonic and emotional experiences marked by time and space, which in the case of the migratory and transnational experiences is arrived at through a delicate negotiation of the past and the present, and the ‘here’ (the current locality) and the ‘there’ (the native locality). The dwelling that I speak of is, therefore, a double dwelling divided between the present at-hand and the remembered past, and as such it inhabits a space, which is both interstitial and liminal, simultaneously in and out-of-place. I have chosen the Italian Forum in Sydney as a working sample of the place-out-of-place

  20. Language experience enhances early cortical pitch-dependent responses

    Science.gov (United States)

    Krishnan, Ananthanarayan; Gandour, Jackson T.; Ananthakrishnan, Saradha; Vijayaraghavan, Venkatakrishnan

    2014-01-01

    Pitch processing at cortical and subcortical stages of processing is shaped by language experience. We recently demonstrated that specific components of the cortical pitch response (CPR) index the more rapidly-changing portions of the high rising Tone 2 of Mandarin Chinese, in addition to marking pitch onset and sound offset. In this study, we examine how language experience (Mandarin vs. English) shapes the processing of different temporal attributes of pitch reflected in the CPR components using stimuli representative of within-category variants of Tone 2. Results showed that the magnitude of CPR components (Na-Pb and Pb-Nb) and the correlation between these two components and pitch acceleration were stronger for the Chinese listeners compared to English listeners for stimuli that fell within the range of Tone 2 citation forms. Discriminant function analysis revealed that the Na-Pb component was more than twice as important as Pb-Nb in grouping listeners by language affiliation. In addition, a stronger stimulus-dependent, rightward asymmetry was observed for the Chinese group at the temporal, but not frontal, electrode sites. This finding may reflect selective recruitment of experience-dependent, pitch-specific mechanisms in right auditory cortex to extract more complex, time-varying pitch patterns. Taken together, these findings suggest that long-term language experience shapes early sensory level processing of pitch in the auditory cortex, and that the sensitivity of the CPR may vary depending on the relative linguistic importance of specific temporal attributes of dynamic pitch. PMID:25506127
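
    The discriminant function analysis mentioned above can be illustrated generically (simulated component magnitudes, not the study's data): fit a linear discriminant on the two CPR components and compare their standardized weights for separating the two listener groups.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(1)
# Simulated CPR component magnitudes (columns: Na-Pb, Pb-Nb) for two groups;
# the numbers are arbitrary and exist only to make the example runnable.
mandarin = np.column_stack([rng.normal(2.0, 0.5, 40), rng.normal(1.2, 0.5, 40)])
english = np.column_stack([rng.normal(1.0, 0.5, 40), rng.normal(1.0, 0.5, 40)])
X = np.vstack([mandarin, english])
y = np.array([1] * 40 + [0] * 40)  # 1 = Mandarin listener, 0 = English listener

lda = LinearDiscriminantAnalysis().fit(X, y)
# Scale raw coefficients by each feature's SD so their magnitudes are comparable,
# a rough index of each component's weight in the discriminant function.
std_coef = lda.coef_[0] * X.std(axis=0)
print(dict(zip(["Na-Pb", "Pb-Nb"], std_coef.round(2))))
```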

  1. Birth Order and the Language Experience of Bilingual Children.

    Science.gov (United States)

    Shin, Sarah J.

    2002-01-01

    Investigated the language experience of second-generation immigrant Korean American school-age children (4-18 years) by surveying their parents. Reports responses to a small portion of the questionnaire that specifically addressed the issue of birth order. (Author/VWL)

  2. Language Experience Affects Grouping of Musical Instrument Sounds

    Science.gov (United States)

    Bhatara, Anjali; Boll-Avetisyan, Natalie; Agus, Trevor; Höhle, Barbara; Nazzi, Thierry

    2016-01-01

    Language experience clearly affects the perception of speech, but little is known about whether these differences in perception extend to non-speech sounds. In this study, we investigated rhythmic perception of non-linguistic sounds in speakers of French and German using a grouping task, in which complexity (variability in sounds, presence of…

  3. Faith, language and experience: An analysis of the feeling of ...

    African Journals Online (AJOL)

    This article deals with the essence of religion proposed by Schleiermacher, namely 'the feeling of absolute dependence upon the Infinite'. In his theory of religious experience, and the language he used to express it, he claimed his work to be independent of concepts and beliefs. Epistemologically this is incompatible.

  4. User-Centred Design for Chinese-Oriented Spoken English Learning System

    Science.gov (United States)

    Yu, Ping; Pan, Yingxin; Li, Chen; Zhang, Zengxiu; Shi, Qin; Chu, Wenpei; Liu, Mingzhuo; Zhu, Zhiting

    2016-01-01

    Oral production is an important part of English learning. The lack of a language environment with efficient instruction and feedback is a major obstacle to non-native speakers' improvement of their spoken English skills. A computer-assisted language learning system can provide many potential benefits to language learners. It allows adequate instructions and instant…

  5. Doodling the Nerves: Surfacing Language Anxiety Experiences in an English Language Classroom

    Science.gov (United States)

    Siagto-Wakat, Geraldine

    2017-01-01

    This qualitative study explored the use of doodling to surface experiences in the psychological phenomenon of language anxiety in an English classroom. It treated the doodles of 192 freshmen from a premier university in Northern Luzon, Philippines. Further, it made use of phenomenological reduction in analysing the data gathered. Findings reveal…

  6. Vývoj sociální kognice českých neslyšících dětí — uživatelů českého znakového jazyka a uživatelů mluvené češtiny: adaptace testové baterie : Development of Social Cognition in Czech Deaf Children — Czech Sign Language Users and Czech Spoken Language Users: Adaptation of a Test Battery

    Directory of Open Access Journals (Sweden)

    Andrea Hudáková

    2017-11-01

    Full Text Available The present paper describes the process of adapting a set of tasks for testing theory-of-mind competencies, the Theory of Mind Task Battery, for use with Czech Deaf children — both users of Czech Sign Language and users of spoken Czech.

  7. The Use of New Technologies for the Teaching of the Igbo Language in Schools: Challenges and Prospects

    Science.gov (United States)

    Iloene, Modesta I.; Iloene, George O.; Mbah, Evelyn E.; Mbah, Boniface M.

    2013-01-01

    This paper examines the experience of teachers in the use of new technologies to teach the Igbo language spoken in South East Nigeria. The study investigates the extent to which new technologies are available and accessible to Igbo teachers, the competence of the Igbo language teachers in the new technologies and the challenges they face that…

  8. Interference of spoken word recognition through phonological priming from visual objects and printed words

    NARCIS (Netherlands)

    McQueen, J.M.; Hüttig, F.

    2014-01-01

    Three cross-modal priming experiments examined the influence of preexposure to pictures and printed words on the speed of spoken word recognition. Targets for auditory lexical decision were spoken Dutch words and nonwords, presented in isolation (Experiments 1 and 2) or after a short phrase

  9. Developing a corpus of spoken language variability

    Science.gov (United States)

    Carmichael, Lesley; Wright, Richard; Wassink, Alicia Beckford

    2003-10-01

    We are developing a novel, searchable corpus as a research tool for investigating phonetic and phonological phenomena across various speech styles. Five speech styles have been well studied independently in previous work: reduced (casual), careful (hyperarticulated), citation (reading), Lombard effect (speech in noise), and "motherese" (child-directed speech). Few studies to date have collected a wide range of styles from a single set of speakers, and fewer yet have provided publicly available corpora. The pilot corpus includes recordings of (1) a set of speakers participating in a variety of tasks designed to elicit the five speech styles, and (2) casual peer conversations and wordlists to illustrate regional vowels. The data include high-quality recordings and time-aligned transcriptions linked to text files that can be queried. Initial measures drawn from the database provide comparison across speech styles along the following acoustic dimensions: MLU (changes in unit duration); relative intra-speaker intensity changes (mean and dynamic range); and intra-speaker pitch values (minimum, maximum, mean, range). The corpus design will allow for a variety of analyses requiring control of demographic and style factors, including hyperarticulation variety, disfluencies, intonation, discourse analysis, and detailed spectral measures.
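
    A minimal sketch of the kind of per-recording summary measures listed above (duration, intensity mean and dynamic range, pitch minimum/maximum/mean/range), assuming the librosa library is available; this is not the corpus's own tooling, and the file name is a placeholder.

```python
import numpy as np
import librosa

def style_summary(wav_path):
    """Rough per-recording summary: duration, intensity stats, and pitch stats."""
    y, sr = librosa.load(wav_path, sr=None)
    duration = len(y) / sr

    # Intensity: frame-wise RMS in dB; report mean and dynamic range.
    rms_db = librosa.amplitude_to_db(librosa.feature.rms(y=y)[0], ref=np.max)
    intensity_mean = float(rms_db.mean())
    intensity_range = float(rms_db.max() - rms_db.min())

    # Pitch: probabilistic-YIN F0 track, summarized over voiced frames only.
    f0, voiced, _ = librosa.pyin(y, fmin=65, fmax=400, sr=sr)
    f0 = f0[voiced & ~np.isnan(f0)]
    return {"duration_s": duration,
            "intensity_mean_db": intensity_mean,
            "intensity_range_db": intensity_range,
            "f0_min": float(f0.min()), "f0_max": float(f0.max()),
            "f0_mean": float(f0.mean()), "f0_range": float(f0.max() - f0.min())}

# Example call (placeholder file name):
# print(style_summary("speaker01_casual.wav"))
```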

  10. Language experience narratives and the role of autobiographical reasoning in becoming an urban science teacher

    Science.gov (United States)

    Rivera Maulucci, Maria S.

    2011-06-01

    One of the central challenges globalization and immigration present to education is how to construct school language policies, procedures, and curricula to support the academic success of immigrant youth. This case study compares and contrasts language experience narratives along Elena's developmental trajectory of becoming an urban science teacher. Elena reflects upon her early language experiences and her more recent experiences as a preservice science teacher in elementary dual language classrooms. The findings from Elena's early schooling experiences provide an analysis of the linkages between Elena's developing English proficiency, her Spanish proficiency, and her autobiographical reasoning. Elena's experiences as a preservice teacher in two elementary dual language classrooms indicate ways in which those experiences helped to reframe her views about the intersections between language learning and science learning. I propose the language experience narrative, as a subset of the life story, as a way to understand how preservice teachers reconstruct past language experiences, connect to the present, and anticipate future language practices.

  11. Towards Adaptive Spoken Dialog Systems

    CERN Document Server

    Schmitt, Alexander

    2013-01-01

    In Monitoring Adaptive Spoken Dialog Systems, authors Alexander Schmitt and Wolfgang Minker investigate statistical approaches that allow for recognition of negative dialog patterns in Spoken Dialog Systems (SDS). The presented stochastic methods allow flexible, portable and accurate use. Beginning with the foundations of machine learning and pattern recognition, this monograph examines how frequently users show negative emotions in spoken dialog systems and develops novel approaches to speech-based emotion recognition using a hybrid approach to model emotions. The authors make use of statistical methods based on acoustic, linguistic and contextual features to examine the relationship between the interaction flow and the occurrence of emotions using non-acted recordings of several thousand real users from commercial and non-commercial SDS. Additionally, the authors present novel statistical methods that spot problems within a dialog based on interaction patterns. The approaches enable future SDS to offer m...

  12. Linguistic Identity Positioning in Facebook Posts during Second Language Study Abroad: One Teen's Language Use, Experience, and Awareness

    Science.gov (United States)

    Dressler, Roswita; Dressler, Anja

    2016-01-01

    Teens who post on the popular social networking site Facebook in their home environment often continue to do so on second language study abroad sojourns. These sojourners use Facebook to document and make sense of their experiences in the host culture and position themselves with respect to language(s) and culture(s). This study examined one…

  13. APPLICATION OF THE EUROPEAN EXPERIENCE IN FOREIGN LANGUAGE TEACHERS’ TRAINING

    Directory of Open Access Journals (Sweden)

    Victoria Barkasi

    2016-12-01

    Full Text Available The article defines the role of the European experience in the training of foreign language teachers in modern society and the use of international relations in education. The concept of common European education is analyzed. Due to this concept, teaching and learning standards, educational models, and teaching objectives are brought together with the aim of creating a common all-European educational system. In order to join this all-European scheme, Ukraine needs to make modifications in its educational system. The fundamental idea is to use blended learning as the dominant instructional mode in higher education. The authors examine how the study of the leading European powers' educational experience helps to approach the problems of education in Ukraine critically. The English Language Department of Mykolaiv V. Sukhomlynsky National University, as a part of a consortium composed of ten higher education institutions, takes part in the TEMPUS project «Improving teaching European languages through the introduction of on-line technology (blended learning) to train teachers». Blended learning is a powerful technology to be implemented into the modern model of Ukrainian education in order to reach the level of the European educational system. The article highlights how participation in the implementation of the TEMPUS project can be an effective tool for improving the training of foreign language teachers.

  14. Clinical educators' experiences of facilitating learning when speaking a different language from both the student and client.

    Science.gov (United States)

    Keeton, Nicola; Kathard, Harsha; Singh, Shajila

    2017-11-02

    Worldwide there is an increasing responsibility for clinical educators to help students from different language backgrounds to develop the necessary skills to provide health care services to a linguistically diverse client base. This study describes the experiences of clinical educators who facilitate learning in contexts where they are not familiar with the language spoken between students and their clients. A part of the qualitative component of a larger mixed methods study is the focus of this paper. Semi-structured interviews were conducted with eight participants recruited from all audiology university programmes in South Africa. Thematic analysis allowed for an in depth exploration of the research question. Member checking was used to enhance credibility. It is hoped that the findings will inform training programmes and in so doing, optimize the learning of diverse students who may better be able to provide appropriate services to the linguistically diverse population they serve. Participants experienced challenges with fair assessment of students and with ensuring appropriate client care when they were unable to speak the language shared between the client and the student. In the absence of formal guidelines, clinical educators developed unique coping strategies that they used on a case-by-case basis to assess students and ensure adequate client management when they experienced such language barriers while supervising. Coping strategies included engaging other students as interpreters, having students role-play parts of a session in English in advance and requesting real-time translations from the student during the session. They expressed concern about the fairness and efficacy of the coping strategies used. While clinical educators use unique strategies to assess students and to ensure suitable client care, dilemmas remain regarding the fairness of assessment and the ability to ensure the quality of client care.

  15. Language Attitudes, Language Learning Experiences and Individual Strategies What Does School Offer and What Does It Lack?

    Directory of Open Access Journals (Sweden)

    Tódor Erika-Mária

    2016-12-01

    Full Text Available Language learners’ attitudes towards the language and its speakers greatly influence the language learning process and the learning outcomes. Previous research and studies on attitudes and motivation in language learning (Csizér 2007, Dörnyei 2009) show that attitudes and motivation are strongly intertwined. A positive attitude towards the language and its speakers can lead to increased motivation, which then results in better learning achievement and a positive attitude towards learning the language. The aim of the present study was to gain a better insight into the language attitudes of students attending Hungarian minority schools in Romania. The interest of the study lies in students’ attitudes towards the different languages, the factors/criteria along which they express their language attitudes, and students’ learning experiences and strategies that they consider efficient and useful in order to acquire a language. Results suggest that students’ attitudes are determined by their own experiences of language use, and in this sense we can differentiate between a language for identification – built upon specific emotional, affective, and cognitive factors – and a language for communication.

  16. SPOKEN BAHASA INDONESIA BY GERMAN STUDENTS

    Directory of Open Access Journals (Sweden)

    I Nengah Sudipa

    2014-11-01

    Full Text Available This article investigates the spoken ability of German students using Bahasa Indonesia (BI). They have studied it for six weeks in the IBSN Program at Udayana University, Bali, Indonesia. The data were collected at the time the students sat for the mid-term oral test and were further analyzed with reference to the standard usage of BI. The results suggest that most students managed to express several concepts related to (1) LOCATION; (2) TIME; (3) TRANSPORT; (4) PURPOSE; (5) TRANSACTION; (6) IMPRESSION; (7) REASON; (8) FOOD AND BEVERAGE, and (9) NUMBER AND PERSON. The only problem a few students might encounter is interference from their own language system, especially in word order.

  17. Recognizing Young Readers' Spoken Questions

    Science.gov (United States)

    Chen, Wei; Mostow, Jack; Aist, Gregory

    2013-01-01

    Free-form spoken input would be the easiest and most natural way for young children to communicate to an intelligent tutoring system. However, achieving such a capability poses a challenge both to instruction design and to automatic speech recognition. To address the difficulties of accepting such input, we adopt the framework of predictable…

  18. Second Language Experience Facilitates Statistical Learning of Novel Linguistic Materials.

    Science.gov (United States)

    Potter, Christine E; Wang, Tianlin; Saffran, Jenny R

    2017-04-01

    Recent research has begun to explore individual differences in statistical learning, and how those differences may be related to other cognitive abilities, particularly their effects on language learning. In this research, we explored a different type of relationship between language learning and statistical learning: the possibility that learning a new language may also influence statistical learning by changing the regularities to which learners are sensitive. We tested two groups of participants, Mandarin Learners and Naïve Controls, at two time points, 6 months apart. At each time point, participants performed two different statistical learning tasks: an artificial tonal language statistical learning task and a visual statistical learning task. Only the Mandarin-learning group showed significant improvement on the linguistic task, whereas both groups improved equally on the visual task. These results support the view that there are multiple influences on statistical learning. Domain-relevant experiences may affect the regularities that learners can discover when presented with novel stimuli. Copyright © 2016 Cognitive Science Society, Inc.

  19. Ragnar Rommetveit's Approach to Everyday Spoken Dialogue from Within.

    Science.gov (United States)

    Kowal, Sabine; O'Connell, Daniel C

    2016-04-01

    The following article presents basic concepts and methods of Ragnar Rommetveit's (born 1924) hermeneutic-dialogical approach to everyday spoken dialogue with a focus on both shared consciousness and linguistically mediated meaning. He developed this approach originally in his engagement of mainstream linguistic and psycholinguistic research of the 1960s and 1970s. He criticized this research tradition for its individualistic orientation and its adherence to experimental methodology which did not allow the engagement of interactively established meaning and understanding in everyday spoken dialogue. As a social psychologist influenced by phenomenological philosophy, Rommetveit opted for an alternative conceptualization of such dialogue as a contextualized, partially private world, temporarily co-established by interlocutors on the basis of shared consciousness. He argued that everyday spoken dialogue should be investigated from within, i.e., from the perspectives of the interlocutors and from a psychology of the second person. Hence, he developed his approach with an emphasis on intersubjectivity, perspectivity and perspectival relativity, meaning potential of utterances, and epistemic responsibility of interlocutors. In his methods, he limited himself for the most part to casuistic analyses, i.e., logical analyses of fictitious examples to argue for the plausibility of his approach. After many years of experimental research on language, he pursued his phenomenologically oriented research on dialogue in English-language publications from the late 1980s up to 2003. During that period, he engaged psycholinguistic research on spoken dialogue carried out by Anglo-American colleagues only occasionally. Although his work remained unfinished and open to development, it provides both a challenging alternative and supplement to current Anglo-American research on spoken dialogue and some overlap therewith.

  20. Criteria for the segmentation of spoken input into individual utterances

    OpenAIRE

    Mast, Marion; Maier, Elisabeth; Schmitz, Birte

    1995-01-01

    This report describes how spoken language turns are segmented into utterances in the framework of the verbmobil project. The problem of segmenting turns is directly related to the task of annotating a discourse with dialogue act information: an utterance can be characterized as a stretch of dialogue that is attributed one dialogue act. Unfortunately, this rule in many cases is insufficient and many doubtful cases remain. We tried to at least reduce the number of unclear cases by providing a n...

  1. The influence of talker and foreign-accent variability on spoken word identification.

    Science.gov (United States)

    Bent, Tessa; Holt, Rachael Frush

    2013-03-01

    In spoken word identification and memory tasks, stimulus variability from numerous sources impairs performance. In the current study, the influence of foreign-accent variability on spoken word identification was evaluated in two experiments. Experiment 1 used a between-subjects design to test word identification in noise in single-talker and two multiple-talker conditions: multiple talkers with the same accent and multiple talkers with different accents. Identification performance was highest in the single-talker condition, but there was no difference between the single-accent and multiple-accent conditions. Experiment 2 further explored word recognition for multiple talkers in single-accent versus multiple-accent conditions using a mixed design. A detriment to word recognition was observed in the multiple-accent condition compared to the single-accent condition, but the effect differed across the language backgrounds tested. These results demonstrate that the processing of foreign-accent variation may influence word recognition in ways similar to other sources of variability (e.g., speaking rate or style) in that the inclusion of multiple foreign accents can result in a small but significant performance decrement beyond the multiple-talker effect.

  2. Interference of spoken word recognition through phonological priming from visual objects and printed words

    OpenAIRE

    McQueen, J.; Huettig, F.

    2014-01-01

    Three cross-modal priming experiments examined the influence of pre-exposure to pictures and printed words on the speed of spoken word recognition. Targets for auditory lexical decision were spoken Dutch words and nonwords, presented in isolation (Experiments 1 and 2) or after a short phrase (Experiment 3). Auditory stimuli were preceded by primes which were pictures (Experiments 1 and 3) or those pictures’ printed names (Experiment 2). Prime-target pairs were phonologically onset-related (e.g...

  3. Experience with a Spanish-language laparoscopy website.

    Science.gov (United States)

    Moreno-Sanz, Carlos; Seoane-González, Jose B

    2006-02-01

    Although there are no clearly defined electronic tools for continuing medical education (CME), new information technologies offer a basic platform for presenting training content on the internet. Due to the shortage of websites about minimally invasive surgery in the Spanish language, we set up a topical website in Spanish. This study considers the experience with the website between April 2001 and January 2005. To study the activity of the website, the registry information was analyzed descriptively using the log files of the server. To study the characteristics of the users, we searched the database of registered users. We found a total of 107,941 visits to our website and a total of 624,895 page downloads. Most visits to the site were made from Spanish-speaking countries. The most frequent professional profile of the registered users was that of general surgeon. The development, implementation, and evaluation of Spanish-language CME initiatives over the internet is promising but presents challenges.

  4. An Experiment on Creating Enterprise Specific BPM Languages and Tools

    DEFF Research Database (Denmark)

    Brahe, Steen

    Many enterprises use their own domain concepts in modeling business processes and use technology in specialized ways when they implement them in a Business Process Management (BPM) system. In contrast, BPM tools used for modeling and implementing business processes often provide a standard modeling language, a standard implementation technology and a fixed transformation that may generate the implementation from the model. This makes the tools inflexible and difficult to use. This paper presents another approach. It applies the basic model-driven development principles of direct representation and automation to BPM tools through a tool experiment in Danske Bank, a large financial institution. We develop business process modeling languages, tools and transformations that capture Danske Bank's specific modeling concepts and use of technology, and which automate the generation of code. An empirical…

  5. Influence of Perceptual Saliency Hierarchy on Learning of Language Structures: An Artificial Language Learning Experiment.

    Science.gov (United States)

    Gong, Tao; Lam, Yau W; Shuai, Lan

    2016-01-01

    Psychological experiments have revealed that in normal visual perception of humans, color cues are more salient than shape cues, which are more salient than textural patterns. We carried out an artificial language learning experiment to study whether such perceptual saliency hierarchy (color > shape > texture) influences the learning of orders regulating adjectives of involved visual features in a manner either congruent (expressing a salient feature in a salient part of the form) or incongruent (expressing a salient feature in a less salient part of the form) with that hierarchy. Results showed that within a few rounds of learning participants could learn the compositional segments encoding the visual features and the order between them, generalize the learned knowledge to unseen instances with the same or different orders, and show learning biases for orders that are congruent with the perceptual saliency hierarchy. Although the learning performances for both the biased and unbiased orders became similar given more learning trials, our study confirms that this type of individual perceptual constraint could contribute to the structural configuration of language, and points out that such constraint, as well as other factors, could collectively affect the structural diversity in languages.

  6. Influence of Perceptual Saliency Hierarchy on Learning of Language Structures: An Artificial Language Learning Experiment

    Science.gov (United States)

    Gong, Tao; Lam, Yau W.; Shuai, Lan

    2016-01-01

    Psychological experiments have revealed that in normal visual perception of humans, color cues are more salient than shape cues, which are more salient than textural patterns. We carried out an artificial language learning experiment to study whether such perceptual saliency hierarchy (color > shape > texture) influences the learning of orders regulating adjectives of involved visual features in a manner either congruent (expressing a salient feature in a salient part of the form) or incongruent (expressing a salient feature in a less salient part of the form) with that hierarchy. Results showed that within a few rounds of learning participants could learn the compositional segments encoding the visual features and the order between them, generalize the learned knowledge to unseen instances with the same or different orders, and show learning biases for orders that are congruent with the perceptual saliency hierarchy. Although the learning performances for both the biased and unbiased orders became similar given more learning trials, our study confirms that this type of individual perceptual constraint could contribute to the structural configuration of language, and points out that such constraint, as well as other factors, could collectively affect the structural diversity in languages. PMID:28066281

  7. Lexical and Grammatical Abilities in Deaf Italian Preschoolers: The Role of Duration of Formal Language Experience

    Science.gov (United States)

    Rinaldi, Pasquale; Caselli, Cristina

    2009-01-01

    We evaluated language development in deaf Italian preschoolers with hearing parents, taking into account the duration of formal language experience (i.e., the time elapsed since wearing a hearing aid and beginning language education) and different methods of language education. Twenty deaf children were matched with 20 hearing children for age and…

  8. Serbian heritage language schools in the Netherlands through the eyes of the parents

    NARCIS (Netherlands)

    Palmen, Andrej

    It is difficult to find the exact number of other languages spoken besides Dutch in the Netherlands. A study showed that a total of 96 other languages are spoken by students attending Dutch primary and secondary schools. The variety of languages spoken shows the growth of linguistic diversity in the

  9. Minority Language Education in Malaysia: Four Ethnic Communities' Experiences.

    Science.gov (United States)

    Smith, Karla J.

    2003-01-01

    Discusses minority language education in Malaysia, a multilingual and multicultural country. Looks at four language minority groups and what they have done to to provide beginning education programs for their children that use the children's native languages. (Author/VWL)

  10. The Impact of Orthographic Consistency on German Spoken Word Identification

    Science.gov (United States)

    Beyermann, Sandra; Penke, Martina

    2014-01-01

    An auditory lexical decision experiment was conducted to find out whether sound-to-spelling consistency has an impact on German spoken word processing, and whether such an impact is different at different stages of reading development. Four groups of readers (school children in the second, third and fifth grades, and university students)…

  11. Exchange students' motivations and language learning success

    DEFF Research Database (Denmark)

    Caudery, Tim; Petersen, Margrethe; Shaw, Philip

    One point investigated in our research project on the linguistic experiences of exchange students in Denmark and Sweden is the reasons students have for coming on exchange. Traditionally, an important goal of student exchange was to acquire improved language skills, usually in the language spoken in the host country. To what extent is this true when students plan to study in English in a non-English-speaking country? Do they hope and expect to improve their English skills, their knowledge of the local language, both, or neither? To what extent are these expectations fulfilled? Results from the project…

  12. Latina/os in Rhetoric and Composition: Learning from Their Experiences with Language Diversity

    Science.gov (United States)

    Cavazos, Alyssa Guadalupe

    2012-01-01

    "Latina/os in Rhetoric and Composition: Learning from their Experiences with Language Diversity" explores how Latina/o academics' experiences with language difference contributes to their Latina/o academic identity and success in academe while remaining connected to their heritage language and cultural background. Using qualitative…

  13. Reproducible computational biology experiments with SED-ML--the Simulation Experiment Description Markup Language.

    Science.gov (United States)

    Waltemath, Dagmar; Adams, Richard; Bergmann, Frank T; Hucka, Michael; Kolpakov, Fedor; Miller, Andrew K; Moraru, Ion I; Nickerson, David; Sahle, Sven; Snoep, Jacky L; Le Novère, Nicolas

    2011-12-15

    The increasing use of computational simulation experiments to inform modern biological research creates new challenges to annotate, archive, share and reproduce such experiments. The recently published Minimum Information About a Simulation Experiment (MIASE) proposes a minimal set of information that should be provided to allow the reproduction of simulation experiments among users and software tools. In this article, we present the Simulation Experiment Description Markup Language (SED-ML). SED-ML encodes in a computer-readable exchange format the information required by MIASE to enable reproduction of simulation experiments. It has been developed as a community project and it is defined in a detailed technical specification and additionally provides an XML schema. The version of SED-ML described in this publication is Level 1 Version 1. It covers the description of the most frequent type of simulation experiments in the area, namely time course simulations. SED-ML documents specify which models to use in an experiment, modifications to apply on the models before using them, which simulation procedures to run on each model, what analysis results to output, and how the results should be presented. These descriptions are independent of the underlying model implementation. SED-ML is a software-independent format for encoding the description of simulation experiments; it is not specific to particular simulation tools. Here, we demonstrate that with the growing software support for SED-ML we can effectively exchange executable simulation descriptions. With SED-ML, software can exchange simulation experiment descriptions, enabling the validation and reuse of simulation experiments in different tools. Authors of papers reporting simulation experiments can make their simulation protocols available for other scientists to reproduce the results. Because SED-ML is agnostic about exact modeling language(s) used, experiments covering models from different fields of research
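
    As an informal illustration of how the format can be consumed (not part of the SED-ML specification or its reference libraries), the sketch below lists the models, simulations, and tasks declared in a SED-ML Level 1 file with plain ElementTree; the element and attribute names follow the published Level 1 Version 1 schema but should be checked against the specification.

```python
import xml.etree.ElementTree as ET

def local(tag):
    """Strip the namespace so matching does not depend on the exact level/version URI."""
    return tag.rsplit("}", 1)[-1]

def summarize_sedml(path):
    """List the models, simulations, and tasks declared in a SED-ML document."""
    root = ET.parse(path).getroot()
    summary = {"models": [], "simulations": [], "tasks": []}
    for el in root.iter():
        name = local(el.tag)
        if name == "model":
            summary["models"].append((el.get("id"), el.get("language"), el.get("source")))
        elif name == "uniformTimeCourse":
            summary["simulations"].append((el.get("id"), name))
        elif name == "task":
            summary["tasks"].append((el.get("id"), el.get("modelReference"),
                                     el.get("simulationReference")))
    return summary

# Example call (placeholder file name):
# print(summarize_sedml("experiment.sedml"))
```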

  14. Reproducible computational biology experiments with SED-ML - The Simulation Experiment Description Markup Language

    Science.gov (United States)

    2011-01-01

    Background The increasing use of computational simulation experiments to inform modern biological research creates new challenges to annotate, archive, share and reproduce such experiments. The recently published Minimum Information About a Simulation Experiment (MIASE) proposes a minimal set of information that should be provided to allow the reproduction of simulation experiments among users and software tools. Results In this article, we present the Simulation Experiment Description Markup Language (SED-ML). SED-ML encodes in a computer-readable exchange format the information required by MIASE to enable reproduction of simulation experiments. It has been developed as a community project and it is defined in a detailed technical specification and additionally provides an XML schema. The version of SED-ML described in this publication is Level 1 Version 1. It covers the description of the most frequent type of simulation experiments in the area, namely time course simulations. SED-ML documents specify which models to use in an experiment, modifications to apply on the models before using them, which simulation procedures to run on each model, what analysis results to output, and how the results should be presented. These descriptions are independent of the underlying model implementation. SED-ML is a software-independent format for encoding the description of simulation experiments; it is not specific to particular simulation tools. Here, we demonstrate that with the growing software support for SED-ML we can effectively exchange executable simulation descriptions. Conclusions With SED-ML, software can exchange simulation experiment descriptions, enabling the validation and reuse of simulation experiments in different tools. Authors of papers reporting simulation experiments can make their simulation protocols available for other scientists to reproduce the results. Because SED-ML is agnostic about exact modeling language(s) used, experiments covering models from

  15. Introducing Spoken Dialogue Systems into Intelligent Environments

    CERN Document Server

    Heinroth, Tobias

    2013-01-01

    Introducing Spoken Dialogue Systems into Intelligent Environments outlines the formalisms of a novel knowledge-driven framework for spoken dialogue management and presents the implementation of a model-based Adaptive Spoken Dialogue Manager (ASDM) called OwlSpeak. The authors have identified three stakeholders that potentially influence the behavior of the ASDM: the user, the SDS, and a complex Intelligent Environment (IE) consisting of various devices, services, and task descriptions. The theoretical foundation of a working ontology-based spoken dialogue description framework, the prototype implementation of the ASDM, and the evaluation activities that are presented as part of this book contribute to the ongoing spoken dialogue research by establishing the fertile ground of model-based adaptive spoken dialogue management. This monograph is ideal for advanced undergraduate students, PhD students, and postdocs as well as academic and industrial researchers and developers in speech and multimodal interactive ...

  16. L1 and L2 Spoken Word Processing: Evidence from Divided Attention Paradigm

    Science.gov (United States)

    Shafiee Nahrkhalaji, Saeedeh; Lotfi, Ahmad Reza; Koosha, Mansour

    2016-01-01

    The present study aims to reveal some facts concerning first language (L1) and second language (L2) spoken-word processing in unbalanced proficient bilinguals using behavioral measures. The intention here is to examine the effects of auditory repetition word priming and semantic priming in first and second languages of…

  17. Syntactic priming in American Sign Language.

    Science.gov (United States)

    Hall, Matthew L; Ferreira, Victor S; Mayberry, Rachel I

    2015-01-01

    Psycholinguistic studies of sign language processing provide valuable opportunities to assess whether language phenomena, which are primarily studied in spoken language, are fundamentally shaped by peripheral biology. For example, we know that when given a choice between two syntactically permissible ways to express the same proposition, speakers tend to choose structures that were recently used, a phenomenon known as syntactic priming. Here, we report two experiments testing syntactic priming of a noun phrase construction in American Sign Language (ASL). Experiment 1 shows that second language (L2) signers with normal hearing exhibit syntactic priming in ASL and that priming is stronger when the head noun is repeated between prime and target (the lexical boost effect). Experiment 2 shows that syntactic priming is equally strong among deaf native L1 signers, deaf late L1 learners, and hearing L2 signers. Experiment 2 also tested for, but did not find evidence of, phonological or semantic boosts to syntactic priming in ASL. These results show that despite the profound differences between spoken and signed languages in terms of how they are produced and perceived, the psychological representation of sentence structure (as assessed by syntactic priming) operates similarly in sign and speech.

  18. Syntactic priming in American Sign Language.

    Directory of Open Access Journals (Sweden)

    Matthew L Hall

    Full Text Available Psycholinguistic studies of sign language processing provide valuable opportunities to assess whether language phenomena, which are primarily studied in spoken language, are fundamentally shaped by peripheral biology. For example, we know that when given a choice between two syntactically permissible ways to express the same proposition, speakers tend to choose structures that were recently used, a phenomenon known as syntactic priming. Here, we report two experiments testing syntactic priming of a noun phrase construction in American Sign Language (ASL). Experiment 1 shows that second language (L2) signers with normal hearing exhibit syntactic priming in ASL and that priming is stronger when the head noun is repeated between prime and target (the lexical boost effect). Experiment 2 shows that syntactic priming is equally strong among deaf native L1 signers, deaf late L1 learners, and hearing L2 signers. Experiment 2 also tested for, but did not find evidence of, phonological or semantic boosts to syntactic priming in ASL. These results show that despite the profound differences between spoken and signed languages in terms of how they are produced and perceived, the psychological representation of sentence structure (as assessed by syntactic priming) operates similarly in sign and speech.

  19. From spoken narratives to domain knowledge: mining linguistic data for medical image understanding.

    Science.gov (United States)

    Guo, Xuan; Yu, Qi; Alm, Cecilia Ovesdotter; Calvelli, Cara; Pelz, Jeff B; Shi, Pengcheng; Haake, Anne R

    2014-10-01

    Extracting useful visual clues from medical images allowing accurate diagnoses requires physicians' domain knowledge acquired through years of systematic study and clinical training. This is especially true in the dermatology domain, a medical specialty that requires physicians to have image inspection experience. Automating or at least aiding such efforts requires understanding physicians' reasoning processes and their use of domain knowledge. Mining physicians' references to medical concepts in narratives during image-based diagnosis of a disease is an interesting research topic that can help reveal experts' reasoning processes. It can also be a useful resource to assist with design of information technologies for image use and for image case-based medical education systems. We collected data for analyzing physicians' diagnostic reasoning processes by conducting an experiment that recorded their spoken descriptions during inspection of dermatology images. In this paper we focus on the benefit of physicians' spoken descriptions and provide a general workflow for mining medical domain knowledge based on linguistic data from these narratives. The challenge of a medical image case can influence the accuracy of the diagnosis as well as how physicians pursue the diagnostic process. Accordingly, we define two lexical metrics for physicians' narratives--lexical consensus score and top N relatedness score--and evaluate their usefulness by assessing the diagnostic challenge levels of corresponding medical images. We also report on clustering medical images based on anchor concepts obtained from physicians' medical term usage. These analyses are based on physicians' spoken narratives that have been preprocessed by incorporating the Unified Medical Language System for detecting medical concepts. The image rankings based on lexical consensus score and on top 1 relatedness score are well correlated with those based on challenge levels (Spearman correlation>0.5 and Kendall
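
    The reported agreement between metric-based and challenge-based image rankings can be illustrated with a generic sketch (hypothetical rankings, not the study's data), computing Spearman and Kendall coefficients with SciPy.

```python
from scipy.stats import kendalltau, spearmanr

# Hypothetical rankings of ten image cases (1 = least challenging): one ranking
# derived from the lexical consensus score, one from expert challenge levels.
rank_by_metric = [1, 2, 3, 5, 4, 7, 6, 9, 8, 10]
rank_by_challenge = [2, 1, 3, 4, 5, 6, 8, 7, 10, 9]

rho, rho_p = spearmanr(rank_by_metric, rank_by_challenge)
tau, tau_p = kendalltau(rank_by_metric, rank_by_challenge)
print(f"Spearman rho = {rho:.2f} (p = {rho_p:.3f}); Kendall tau = {tau:.2f} (p = {tau_p:.3f})")
```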

  20. High/Scope Preschool Key Experiences: Language and Literacy. [with]Curriculum Videotape.

    Science.gov (United States)

    Brinkman, Nancy A.

    During the preschool years, children experience great strides in their ability to use language. This booklet and companion videotape help teachers and parents recognize and support six High/Scope key experiences in language and literacy: (1) talking with others about personally meaningful experiences; (2) describing objects, events, and relations;…

  1. New Dimensions in Language Training: The Dartmouth College Experiment.

    Science.gov (United States)

    Rassias, John A.

    The expanded foreign study and foreign language programs offered at Dartmouth are examined with emphasis on the influence of Peace Corps language programs during the last half-dozen years on American college campuses. The impact of the programs at Dartmouth since 1964 is discussed in terms of: (1) a brief history of language instruction at…

  2. Peer Mentoring Second Language Teachers: A Mutually Beneficial Experience?

    Science.gov (United States)

    Kissau, Scott P.; King, Elena Tosky

    2015-01-01

    Studies have shown that there are not enough qualified foreign language and English as a second language teachers in this country. To increase the number of new second language teachers who remain in the profession, and to promote their use of best teaching practices, the ACTFL has identified mentoring as a national research priority. The…

  3. Language Core Values in a Multicultural Setting: An Australian Experience.

    Science.gov (United States)

    Smolicz, Jerzy J.

    1991-01-01

    Reviews European Community and Australian language policies. Considers cultural-economic interface in Australia with respect to current interest in teaching Asian languages for trade purposes. Discusses Australia's growing acceptance of languages other than English and its effect on Aboriginal people. Urges the better utilization of the country's…

  4. Learners' Perceptions of the Use of Mobile Technology in a Task-Based Language Teaching Experience

    Science.gov (United States)

    Calabrich, Simone L.

    2016-01-01

    This research explored perceptions of learners studying English in private language schools regarding the use of mobile technology to support language learning. Learners were first exposed to both a mobile assisted and a mobile unassisted language learning experience, and then asked to express their thoughts on the incorporation of mobile devices…

  5. Biomechanically Preferred Consonant-Vowel Combinations Fail to Appear in Adult Spoken Corpora

    Science.gov (United States)

    Whalen, D. H.; Giulivi, Sara; Nam, Hosung; Levitt, Andrea G.; Hallé, Pierre; Goldstein, Louis M.

    2012-01-01

    Certain consonant/vowel (CV) combinations are more frequent than would be expected from the individual C and V frequencies alone, both in babbling and, to a lesser extent, in adult language, based on dictionary counts: Labial consonants co-occur with central vowels more often than chance would dictate; coronals co-occur with front vowels, and velars with back vowels (Davis & MacNeilage, 1994). Plausible biomechanical explanations have been proposed, but it is also possible that infants are mirroring the frequency of the CVs that they hear. As noted, previous assessments of adult language were based on dictionaries; these “type” counts are incommensurate with the babbling measures, which are necessarily “token” counts. We analyzed the tokens in two spoken corpora for English, two for French and one for Mandarin. We found that the adult spoken CV preferences correlated with the type counts for Mandarin and French, not for English. Correlations between the adult spoken corpora and the babbling results had all three possible outcomes: significantly positive (French), uncorrelated (Mandarin), and significantly negative (English). There were no correlations of the dictionary data with the babbling results when we consider all nine combinations of consonants and vowels. The results indicate that spoken frequencies of CV combinations can differ from dictionary (type) counts and that the CV preferences apparent in babbling are biomechanically driven and can ignore the frequencies of CVs in the ambient spoken language. PMID:23420980
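    The token-based counting behind this result is easy to sketch: tally CV co-occurrences in a stream of spoken CV tokens and compare each observed count with the count expected from the C and V marginal frequencies alone. The toy corpus below is an illustrative assumption, not the authors' materials.

```python
# Observed vs. chance-expected CV co-occurrence in a toy token corpus.
from collections import Counter

corpus = ["ba", "bo", "ba", "di", "de", "ba", "gu", "go", "di", "ba"]  # CV tokens
cv_counts = Counter(corpus)
c_counts = Counter(s[0] for s in corpus)   # consonant marginals
v_counts = Counter(s[1] for s in corpus)   # vowel marginals
n = len(corpus)

for cv, observed in sorted(cv_counts.items()):
    expected = c_counts[cv[0]] * v_counts[cv[1]] / n  # count expected by chance
    print(f"{cv}: observed={observed}, expected={expected:.2f}, O/E={observed / expected:.2f}")
```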

  6. Bilingualism alters brain functional connectivity between "control" regions and "language" regions: Evidence from bimodal bilinguals.

    Science.gov (United States)

    Li, Le; Abutalebi, Jubin; Zou, Lijuan; Yan, Xin; Liu, Lanfang; Feng, Xiaoxia; Wang, Ruiming; Guo, Taomei; Ding, Guosheng

    2015-05-01

    Previous neuroimaging studies have revealed that bilingualism induces both structural and functional neuroplasticity in the dorsal anterior cingulate cortex (dACC) and the left caudate nucleus (LCN), both of which are associated with cognitive control. Since these "control" regions should work together with other language regions during language processing, we hypothesized that bilingualism may also alter the functional interaction between the dACC/LCN and language regions. Here we tested this hypothesis by exploring the functional connectivity (FC) in bimodal bilinguals and monolinguals using functional MRI when they either performed a picture naming task with spoken language or were in resting state. We found that for bimodal bilinguals who use spoken and sign languages, the FC of the dACC with regions involved in spoken language (e.g. the left superior temporal gyrus) was stronger in performing the task, but weaker in the resting state as compared to monolinguals. For the LCN, its intrinsic FC with sign language regions including the left inferior temporo-occipital part and right inferior and superior parietal lobules was increased in the bilinguals. These results demonstrate that bilingual experience may alter the brain functional interaction between "control" regions and "language" regions. For different control regions, the FC alters in different ways. The findings also deepen our understanding of the functional roles of the dACC and LCN in language processing. Copyright © 2015 Elsevier Ltd. All rights reserved.
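    Functional connectivity in studies of this kind is typically quantified as the correlation between region-of-interest (ROI) time series, often Fisher z-transformed before group comparison. The sketch below illustrates the computation on simulated signals; the ROI names and data are assumptions, not the authors' analysis.

```python
# Functional connectivity as the Pearson correlation between two ROI time series.
import numpy as np

rng = np.random.default_rng(0)
n_volumes = 200

# Simulated signals for dACC and left superior temporal gyrus (STG) that share
# some variance, so they should show positive functional connectivity.
shared = rng.standard_normal(n_volumes)
dacc = shared + 0.5 * rng.standard_normal(n_volumes)
stg = shared + 0.5 * rng.standard_normal(n_volumes)

fc = np.corrcoef(dacc, stg)[0, 1]
fc_z = np.arctanh(fc)  # Fisher z-transform, commonly used before group tests
print(f"dACC-STG connectivity: r = {fc:.2f}, Fisher z = {fc_z:.2f}")
```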

  7. The Impact of Language Experience on Language and Reading: A Statistical Learning Approach

    Science.gov (United States)

    Seidenberg, Mark S.; MacDonald, Maryellen C.

    2018-01-01

    This article reviews the important role of statistical learning for language and reading development. Although statistical learning--the unconscious encoding of patterns in language input--has become widely known as a force in infants' early interpretation of speech, the role of this kind of learning for language and reading comprehension in…

  8. PROPOSING A LANGUAGE EXPERIENCE AND SELF-ASSESSMENT OF PROFICIENCY QUESTIONNAIRE FOR BILINGUAL BRAZILIAN SIGN LANGUAGE/PORTUGUESE HEARING TEACHERS

    Directory of Open Access Journals (Sweden)

    Ingrid FINGER

    2014-12-01

    Full Text Available This article presents a language experience and self-assessment of proficiency questionnaire for hearing teachers who use Brazilian Sign Language and Portuguese in their teaching practice. By focusing on hearing teachers who work in Deaf education contexts, this questionnaire is presented as a tool that may complement the assessment of linguistic skills of hearing teachers. This proposal takes into account important factors in bilingualism studies such as the importance of knowing the participant’s context with respect to family, professional and social background (KAUFMANN, 2010). This work uses as models the following questionnaires: LEAP-Q (MARIAN; BLUMENFELD; KAUSHANSKAYA, 2007), SLSCO – Sign Language Skills Classroom Observation (REEVES et al., 2000) and the Language Attitude Questionnaire (KAUFMANN, 2010), taking into consideration the different kinds of exposure to Brazilian Sign Language. The questionnaire is designed for bilingual bimodal hearing teachers who work in bilingual schools for the Deaf or in specialized educational departments that assist deaf students.

  9. Development and Relationships Between Phonological Awareness, Morphological Awareness and Word Reading in Spoken and Standard Arabic

    Directory of Open Access Journals (Sweden)

    Rachel Schiff

    2018-04-01

    Full Text Available This study addressed the development of and the relationship between foundational metalinguistic skills and word reading skills in Arabic. It compared Arabic-speaking children’s phonological awareness (PA), morphological awareness, and voweled and unvoweled word reading skills in spoken and standard language varieties separately in children across five grade levels from childhood to adolescence. Second, it investigated whether skills developed in the spoken variety of Arabic predict reading in the standard variety. Results indicate that although individual differences between students in PA are eliminated toward the end of elementary school in both spoken and standard language varieties, gaps in morphological awareness and in reading skills persisted through junior and high school years. The results also show that the gap in reading accuracy and fluency between Spoken Arabic (SpA) and Standard Arabic (StA) was evident in both voweled and unvoweled words. Finally, regression analyses showed that morphological awareness in SpA contributed to reading fluency in StA, i.e., children’s early morphological awareness in SpA explained variance in children’s gains in reading fluency in StA. These findings have important theoretical and practical contributions for Arabic reading theory in general and they extend the previous work regarding the cross-linguistic relevance of foundational metalinguistic skills in the first acquired language to reading in a second language, as in societal bilingualism contexts, or a second language variety, as in diglossic contexts.

  10. Development and Relationships Between Phonological Awareness, Morphological Awareness and Word Reading in Spoken and Standard Arabic

    Science.gov (United States)

    Schiff, Rachel; Saiegh-Haddad, Elinor

    2018-01-01

    This study addressed the development of and the relationship between foundational metalinguistic skills and word reading skills in Arabic. It compared Arabic-speaking children’s phonological awareness (PA), morphological awareness, and voweled and unvoweled word reading skills in spoken and standard language varieties separately in children across five grade levels from childhood to adolescence. Second, it investigated whether skills developed in the spoken variety of Arabic predict reading in the standard variety. Results indicate that although individual differences between students in PA are eliminated toward the end of elementary school in both spoken and standard language varieties, gaps in morphological awareness and in reading skills persisted through junior and high school years. The results also show that the gap in reading accuracy and fluency between Spoken Arabic (SpA) and Standard Arabic (StA) was evident in both voweled and unvoweled words. Finally, regression analyses showed that morphological awareness in SpA contributed to reading fluency in StA, i.e., children’s early morphological awareness in SpA explained variance in children’s gains in reading fluency in StA. These findings have important theoretical and practical contributions for Arabic reading theory in general and they extend the previous work regarding the cross-linguistic relevance of foundational metalinguistic skills in the first acquired language to reading in a second language, as in societal bilingualism contexts, or a second language variety, as in diglossic contexts. PMID:29686633

  11. Digital Language Death

    Science.gov (United States)

    Kornai, András

    2013-01-01

    Of the approximately 7,000 languages spoken today, some 2,500 are generally considered endangered. Here we argue that this consensus figure vastly underestimates the danger of digital language death, in that less than 5% of all languages can still ascend to the digital realm. We present evidence of a massive die-off caused by the digital divide. PMID:24167559

  12. Digital language death.

    Directory of Open Access Journals (Sweden)

    András Kornai

    Full Text Available Of the approximately 7,000 languages spoken today, some 2,500 are generally considered endangered. Here we argue that this consensus figure vastly underestimates the danger of digital language death, in that less than 5% of all languages can still ascend to the digital realm. We present evidence of a massive die-off caused by the digital divide.

  13. Language Development in Children with Language Disorders: An Introduction to Skinner's Verbal Behavior and the Techniques for Initial Language Acquisition

    Science.gov (United States)

    Casey, Laura Baylot; Bicard, David F.

    2009-01-01

    Language development in typically developing children has a very predictable pattern beginning with crying, cooing, babbling, and gestures along with the recognition of spoken words, comprehension of spoken words, and then one word utterances. This predictable pattern breaks down for children with language disorders. This article will discuss…

  14. Immigrants' language skills: the immigrant experience in a longitudinal survey

    OpenAIRE

    Barry CHISWICK; Yew LEE; Paul W. MILLER

    2003-01-01

    This paper is concerned with the determinants of English language proficiency among immigrants. It presents a model based on economic incentives, exposure, and efficiency in language acquisition, which it tests using the Longitudinal Survey of Immigrants to Australia. Probit and bivariate probit analyses are employed. The hypotheses are supported by the data. The bivariate probit analysis across waves indicates a "regression to the mean" in the unobserved components of English language profic...

  15. The language learning experiences of students with dyslexia: lessons from an interview study.

    OpenAIRE

    Kormos, Judit; Csizér, Kata; Sarkadi, Ágnes

    2009-01-01

    Our interview study investigated what experiences Hungarian students with dyslexia have in the language learning group and concerning the general behavior, the instructional methods and assessment techniques of their language teachers. Long qualitative interviews were conducted with 15 students of different ages who studied foreign languages in a variety of educational settings. Our results indicate that the participants generally had negative experiences when studying in groups, especially i...

  16. Multiclausal Utterances Aren't Just for Big Kids: A Framework for Analysis of Complex Syntax Production in Spoken Language of Preschool- and Early School-Age Children

    Science.gov (United States)

    Arndt, Karen Barako; Schuele, C. Melanie

    2013-01-01

    Complex syntax production emerges shortly after the emergence of two-word combinations in oral language and continues to develop through the school-age years. This article defines a framework for the analysis of complex syntax in the spontaneous language of preschool- and early school-age children. The purpose of this article is to provide…

  17. Open Source Software Development with Your Mother Language : Intercultural Collaboration Experiment 2002

    DEFF Research Database (Denmark)

    Nomura, Saeko; Ishida, Saeko; Jensen, Mika Yasuoka

    2002-01-01

    “Open Source Software Development with Your Mother Language: Intercultural Collaboration Experiment 2002,” 10th International Conference on Human–Computer Interaction (HCII2003), June 2003, Crete, Greece.

  18. The influence of bodily experience on children's language processing.

    Science.gov (United States)

    Wellsby, Michele; Pexman, Penny M

    2014-07-01

    The Body-Object Interaction (BOI) variable measures how easily a human body can physically interact with a word's referent (Siakaluk, Pexman, Aguilera, Owen, & Sears, ). A facilitory BOI effect has been observed with adults in language tasks, with faster and more accurate responses for high BOI words (e.g., mask) than for low BOI words (e.g., ship; Wellsby, Siakaluk, Owen, & Pexman, ). We examined the development of this effect in children. Fifty children (aged 6-9 years) and a group of 21 adults completed a word naming task with high and low BOI words. Younger children (aged 6-7 years) did not show a BOI effect, but older children (aged 8-9 years) showed a significant facilitory BOI effect, as did adults. Magnitude of children's BOI effect was related to age as well as reading skills. These results suggest that bodily experience (as measured by the BOI variable) begins to influence visual word recognition behavior by about 8 years of age. Copyright © 2014 Cognitive Science Society, Inc.

  19. Learner Perceptions and Experiences of Pride in Second Language Education

    Science.gov (United States)

    Ross, Andrew S.; Stracke, Elke

    2016-01-01

    Within applied linguistics, understanding of motivation and cognition has benefitted from substantial attention for decades, but the attention received by language learner emotions has not been comparable until recently when interest in emotions and the role they can play in language learning has increased. Emotions are at the core of human…

  20. Multiple Schools, Languages, Experiences and Affiliations: Ideological Becomings and Positionings

    Science.gov (United States)

    Maguire, Mary H.; Curdt-Christiansen, Xiao Lan

    2007-01-01

    This article focuses on the identity accounts of a group of Chinese children who attend a heritage language school. Bakhtin's concepts of ideological becoming, and authoritative and internally persuasive discourse, frame our exploration. Taking a dialogic view of language and learning raises questions about schools as socializing spaces and…

  1. Language Planning and Student Experiences: Intention, Rhetoric and Implementation

    Science.gov (United States)

    Lo Bianco, Joseph; Aliani, Renata

    2013-01-01

    This book is a timely comparison of the divergent worlds of policy implementation and policy ambition, the messy, often contradictory here-and-now reality of languages in schools and the sharp-edged, shiny, future-oriented representation of languages in policy. Two deep rooted tendencies in Australian political and social life, multiculturalism…

  2. Modern Approaches to Foreign Language Teaching: World Experience

    Science.gov (United States)

    Shumskyi, Oleksandr

    2016-01-01

    The problem of applying communicative approach to foreign language teaching of students in non-language departments of higher education institutions in a number of countries has been analyzed in the paper. The brief overview of main historic milestones in the development of communicative approach has been presented. It has been found out that…

  3. Monitoring the Performance of Human and Automated Scores for Spoken Responses

    Science.gov (United States)

    Wang, Zhen; Zechner, Klaus; Sun, Yu

    2018-01-01

    As automated scoring systems for spoken responses are increasingly used in language assessments, testing organizations need to analyze their performance, as compared to human raters, across several dimensions, for example, on individual items or based on subgroups of test takers. In addition, there is a need in testing organizations to establish…

  4. Webster's word power better English grammar improve your written and spoken English

    CERN Document Server

    Kirkpatrick, Betty

    2014-01-01

    With questions and answer sections throughout, this book helps you to improve your written and spoken English through understanding the structure of the English language. This is a thorough and useful book with all parts of speech and grammar explained. Used by ELT self-study students.

  5. Between Syntax and Pragmatics: The Causal Conjunction Protože in Spoken and Written Czech

    Czech Academy of Sciences Publication Activity Database

    Čermáková, Anna; Komrsková, Zuzana; Kopřivová, Marie; Poukarová, Petra

    25.04.2017 (2017), pp. 393-414. ISSN 2509-9507. R&D Projects: GA ČR GA15-01116S. Institutional support: RVO:68378092. Keywords: Causality; Discourse marker; Spoken language; Czech. Subject RIV: AI - Linguistics. OBOR OECD: Linguistics. https://link.springer.com/content/pdf/10.1007%2Fs41701-017-0014-y.pdf

  6. Visual Sonority Modulates Infants' Attraction to Sign Language

    Science.gov (United States)

    Stone, Adam; Petitto, Laura-Ann; Bosworth, Rain

    2018-01-01

    The infant brain may be predisposed to identify perceptually salient cues that are common to both signed and spoken languages. Recent theory based on spoken languages has advanced sonority as one of these potential language acquisition cues. Using a preferential looking paradigm with an infrared eye tracker, we explored visual attention of hearing…

  7. Bimodal Bilingual Language Development of Hearing Children of Deaf Parents

    Science.gov (United States)

    Hofmann, Kristin; Chilla, Solveig

    2015-01-01

    Adopting a bimodal bilingual language acquisition model, this qualitative case study is the first in Germany to investigate the spoken and sign language development of hearing children of deaf adults (codas). The spoken language competence of six codas within the age range of 3;10 to 6;4 is assessed by a series of standardised tests (SETK 3-5,…

  8. 125 The Fading Phase of Igbo Language and Culture: Path to its ...

    African Journals Online (AJOL)

    Tracie1

    favour of foreign language (and culture). They also ... native language, and children are unable to learn a language not spoken ... shielding them off their mother tongue” ... the effect endangered language has on the existence of the owners.

  9. Stability in Chinese and Malay heritage languages as a source of divergence

    NARCIS (Netherlands)

    Aalberse, S.; Moro, F.; Braunmüller, K.; Höder, S.; Kühl, K.

    2014-01-01

    This article discusses Malay and Chinese heritage languages as spoken in the Netherlands. Heritage speakers are dominant in another language and use their heritage language less. Moreover, they have qualitatively and quantitatively different input from monolinguals. Heritage languages are often

  10. Stability in Chinese and Malay heritage languages as a source of divergence

    NARCIS (Netherlands)

    Aalberse, S.; Moro, F.R.; Braunmüller, K.; Höder, S.; Kühl, K.

    2015-01-01

    This article discusses Malay and Chinese heritage languages as spoken in the Netherlands. Heritage speakers are dominant in another language and use their heritage language less. Moreover, they have qualitatively and quantitatively different input from monolinguals. Heritage languages are often

  11. Spoken Narrative Assessment: A Supplementary Measure of Children's Creativity

    Science.gov (United States)

    Wong, Miranda Kit-Yi; So, Wing Chee

    2016-01-01

    This study developed a spoken narrative (i.e., storytelling) assessment as a supplementary measure of children's creativity. Both spoken and gestural contents of children's spoken narratives were coded to assess their verbal and nonverbal creativity. The psychometric properties of the coding system for the spoken narrative assessment were…

  12. Language

    DEFF Research Database (Denmark)

    Sanden, Guro Refsum

    2016-01-01

    Purpose: – The purpose of this paper is to analyse the consequences of globalisation in the area of corporate communication, and investigate how language may be managed as a strategic resource. Design/methodology/approach: – A review of previous studies on the effects of globalisation on corporate communication and the implications of language management initiatives in international business. Findings: – Efficient language management can turn language into a strategic resource. Language needs analyses, i.e. linguistic auditing/language check-ups, can be used to determine the language situation of a company. Language policies and/or strategies can be used to regulate a company’s internal modes of communication. Language management tools can be deployed to address existing and expected language needs. Continuous feedback from the front line ensures strategic learning and reduces the risk of suboptimal...

  13. Basic speech recognition for spoken dialogues

    CSIR Research Space (South Africa)

    Van Heerden, C

    2009-09-01

    Full Text Available Spoken dialogue systems (SDSs) have great potential for information access in the developing world. However, the realisation of that potential requires the solution of several challenging problems, including the development of sufficiently accurate...

  14. Leading the Proverbial Thirsty Horse to Water: ESL Learners’ Experience with Language Learning Contracts

    Directory of Open Access Journals (Sweden)

    Normah Ismail

    2012-12-01

    Full Text Available There is agreement among language educators that the process of language teaching and learning should aim to develop autonomous language learners. While the advantages of autonomy seem to be quite obvious, fostering autonomy in practice can prove to be difficult for some language learners. This paper describes the use of learning contracts as a strategy for enhancing learner autonomy among a group of ESL learners in a Malaysian university. Through learners’ account of their experiences with the contracts, the study concludes that the learning contract has potential use for language learning and that learners’ positive learning experience remains the key to the success of any endeavour seeking to promote learner autonomy. The paper ends with some implications for teachers and learners who wish to use the contracts as a strategy for language teaching and learning.

  15. Discharge experiences of speech-language pathologists working in Cyprus and Greece.

    Science.gov (United States)

    Kambanaros, Maria

    2010-08-01

    Post-termination relationships are complex because the client may need additional services and it may be difficult to determine when the speech-language pathologist-client relationship is truly terminated. In my contribution to this scientific forum, discharge experiences from speech-language pathologists working in Cyprus and Greece will be explored in search of commonalities and differences in the way in which pathologists end therapy from different cultural perspectives. Within this context the personal impact on speech-language pathologists of the discharge process will be highlighted. Inherent in this process is how speech-language pathologists learn to hold their feelings, anxieties and reactions when communicating discharge to clients. Overall speech-language pathologists working in Cyprus and Greece experience similar emotional responses to positive and negative therapy endings as speech-language pathologists working in Australia. The major difference is that Cypriot and Greek therapists face serious limitations in moving their clients on after therapy has ended.

  16. The gender congruency effect during bilingual spoken-word recognition

    Science.gov (United States)

    Morales, Luis; Paolieri, Daniela; Dussias, Paola E.; Valdés Kroff, Jorge R.; Gerfen, Chip; Bajo, María Teresa

    2016-01-01

    We investigate the ‘gender-congruency’ effect during a spoken-word recognition task using the visual world paradigm. Eye movements of Italian–Spanish bilinguals and Spanish monolinguals were monitored while they viewed a pair of objects on a computer screen. Participants listened to instructions in Spanish (encuentra la bufanda / ‘find the scarf’) and clicked on the object named in the instruction. Grammatical gender of the objects’ name was manipulated so that pairs of objects had the same (congruent) or different (incongruent) gender in Italian, but gender in Spanish was always congruent. Results showed that bilinguals, but not monolinguals, looked at target objects less when they were incongruent in gender, suggesting a between-language gender competition effect. In addition, bilinguals looked at target objects more when the definite article in the spoken instructions provided a valid cue to anticipate its selection (different-gender condition). The temporal dynamics of gender processing and cross-language activation in bilinguals are discussed. PMID:28018132

  17. Emotion and language learning: an exploration of experience and motivation in a Mexican university context

    OpenAIRE

    Méndez López, Mariza Guadalupe

    2011-01-01

    Although there have been numerous studies on motivation in foreign language learning and on emotions in general education, little research in foreign language learning has focused on the relation between motivation and learners' emotions (MacIntyre, 2002), as this shift to the affective side of motivation has only recently been suggested. Thus, this study aims to contribute to the body of knowledge on how foreign language learning motivation is shaped by emotional experiences. In order t...

  18. THE ROLE OF LANGUAGE GAME IN THE BUILDING UP OF A POLITICIAN'S IMAGE (PRAGMALINGUISTIC PERLOCUTIONARY EXPERIMENT)

    OpenAIRE

    Khanina E. A.

    2016-01-01

    The article discusses the results of a pragmalinguistic experiment. Since language game is a product of creative speech activity that manifests the individuality of a linguistic personality, a politician can use it intentionally and thereby consciously shape an attractive image. A politician who uses different kinds of language game makes the personal characteristics that build up the portrait aspect of an effective political image more distinctive and thus influences the election ...

  19. Guest Comment: Universal Language Requirement.

    Science.gov (United States)

    Sherwood, Bruce Arne

    1979-01-01

    Explains that reading English among scientists is almost universal; however, there are enormous problems with spoken English. Advocates the use of Esperanto as a viable alternative, and as a language requirement for graduate work. (GA)

  20. The effects of ethnicity, musicianship, and tone language experience on pitch perception.

    Science.gov (United States)

    Zheng, Yi; Samuel, Arthur G

    2018-02-01

    Language and music are intertwined: music training can facilitate language abilities, and language experiences can also help with some music tasks. Possible language-music transfer effects are explored in two experiments in this study. In Experiment 1, we tested native Mandarin, Korean, and English speakers on a pitch discrimination task with two types of sounds: speech sounds and fundamental frequency (F0) patterns derived from speech sounds. To control for factors that might influence participants' performance, we included cognitive ability tasks testing memory and intelligence. In addition, two music skill tasks were used to examine general transfer effects from language to music. Prior studies showing that tone language speakers have an advantage on pitch tasks have been taken as support for three alternative hypotheses: specific transfer effects, general transfer effects, and an ethnicity effect. In Experiment 1, musicians outperformed non-musicians on both speech and F0 sounds, suggesting a music-to-language transfer effect. Korean and Mandarin speakers performed similarly, and they both outperformed English speakers, providing some evidence for an ethnicity effect. Alternatively, this could be due to population selection bias. In Experiment 2, we recruited Chinese Americans approximating the native English speakers' language background to further test the ethnicity effect. Chinese Americans, regardless of their tone language experiences, performed similarly to their non-Asian American counterparts in all tasks. Therefore, although this study provides additional evidence of transfer effects across music and language, it casts doubt on the contribution of ethnicity to differences observed in pitch perception and general music abilities.
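    The mediation model tested in Experiment 2 can be illustrated with a simple regression-based (product-of-coefficients) sketch; the simulated variables and the use of ordinary least squares below are assumptions for illustration, not the study's actual model or data.

```python
# Does pitch perception (mediator) carry the effect of musical training
# (predictor) on talker identification (outcome)? Simulated data only.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 200
musician = rng.integers(0, 2, n)                                    # X
pitch = 0.6 * musician + rng.standard_normal(n)                     # M
talker_id = 0.5 * pitch + 0.1 * musician + rng.standard_normal(n)   # Y

a = sm.OLS(pitch, sm.add_constant(musician)).fit().params[1]        # path a: X -> M
model_y = sm.OLS(talker_id, sm.add_constant(np.column_stack([pitch, musician]))).fit()
b, c_prime = model_y.params[1], model_y.params[2]                   # path b and direct effect

print(f"indirect effect a*b = {a * b:.3f}, direct effect c' = {c_prime:.3f}")
```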

  1. Supporting English Language Arts Standards within the Context of Early Singing Experiences

    Science.gov (United States)

    Nordquist, Alice L.

    2015-01-01

    Music teachers may integrate a variety of English language arts content standards into their curriculum to enhance students' music experiences while also supporting their language development. John M. Feierabend and Melanie Champagne's picture book adaptation of "My Aunt Came Back" lends itself to multiple singing and discussion…

  2. Children's Faithfulness in Imitating Language Use Varies Cross-culturally, Contingent on Prior Experience

    Science.gov (United States)

    Klinger, Jörn; Mayor, Julien; Bannard, Colin

    2016-01-01

    Despite its recognized importance for cultural transmission, little is known about the role imitation plays in language learning. Three experiments examine how rates of imitation vary as a function of qualitative differences in the way language is used in a small indigenous community in Oaxaca, Mexico and three Western comparison groups. Data from…

  3. An Appraisal of the Importance of Graduates' Language Skills and ERASMUS Experiences

    Science.gov (United States)

    Mattern, Delfina

    2016-01-01

    This article discusses the importance of graduates' language skills and their European Regional Action Scheme for the Mobility of University Students (ERASMUS) experiences. The purpose of the research is to establish whether the potential benefits of ERASMUS participation for employability, particularly with regard to language skills, mean that…

  4. Effects of Auditory and Visual Priming on the Identification of Spoken Words.

    Science.gov (United States)

    Shigeno, Sumi

    2017-04-01

    This study examined the effects of preceding contextual stimuli, either auditory or visual, on the identification of spoken target words. Fifty-one participants (29% males, 71% females; mean age = 24.5 years, SD = 8.5) were divided into three groups: no context, auditory context, and visual context. All target stimuli were spoken words masked with white noise. The relationships between the context and target stimuli were as follows: identical word, similar word, and unrelated word. Participants presented with context experienced a sequence of six context stimuli in the form of either spoken words or photographs. Auditory and visual context conditions produced similar results, but the auditory context aided word identification more than the visual context in the similar word relationship. We discuss these results in the light of top-down processing, motor theory, and the phonological system of language.

  5. Language Anxiety: A Case Study of the Perceptions and Experiences of Students of English as a Foreign Language in a Higher Education Institution in the United Arab Emirates

    Science.gov (United States)

    Lababidi, Rola Ahmed

    2016-01-01

    This case study explores and investigates the perceptions and experiences of foreign language anxiety (FLA) among students of English as a Foreign Language in a Higher Education Institution in the United Arab Emirates. The first phase explored the scope and severity of language anxiety among all Foundation level male students at a college in the…

  6. Students' Source Misuse in Language Classrooms: Sharing Experiences

    Science.gov (United States)

    Fazel, Ismaeil; Kowkabi, Nasrin

    2013-01-01

    In this article we first provide a brief discussion of what is generally referred to as "student plagiarism," which we prefer to call "source misuse" or "inappropriate textual borrowing," and then provide some of the factors that may contribute to this problem in language classes. Moreover, we provide our views and…

  7. Elementary Physical Education Teachers' Experiences in Teaching English Language Learners

    Science.gov (United States)

    Sato, Takahiro; Hodge, Samuel R.

    2016-01-01

    The purpose of the current study was to describe and explain the views on teaching English Language Learners (ELLs) held by six elementary physical education (PE) teachers in the Midwest region of the United States. Situated in positioning theory, the research approach was descriptive-qualitative. The primary sources of data were face-to-face…

  8. "Small" languages in the European context: The Danish experience

    DEFF Research Database (Denmark)

    Lauridsen, Karen M.

    The paper briefly describes the training of translators and interpreters in Denmark, taking into consideration the fact that Danish is one of the less widely used and taught languages in the European Union and the implications this has for the training of professional translators/interpreters.

  9. Language core values in a multicultural setting: An Australian experience

    Science.gov (United States)

    Smolicz, Jerzy J.

    1991-03-01

    While it has been agreed by the members of the European Community (except the UK) that all secondary students should study two EC languages in addition to their own, in Australia the recent emphasis has been on teaching languages for external trade, particularly in the Asian region. This policy overlooks the 13 per cent of the Australian population who already speak a language other than English at home (and a greater number who are second generation immigrants), and ignores the view that it is necessary to foster domestic multiculturalism in order to have fruitful links with other cultures abroad. During the 1980s there have been moves to reinforce the cultural identity of Australians of non-English speaking background, but these have sometimes been half-hearted and do not fully recognise that cultural core values, including language, have to achieve a certain critical mass in order to be sustainable. Without this recognition, semi-assimilation will continue to waste the potential cultural and economic contributions of many citizens, and to lead to frustration and eventual violence. The recent National Agenda for a Multicultural Australia addresses this concern.

  10. Seeing conflict and engaging control: Experience with contrastive language benefits executive function in preschoolers.

    Science.gov (United States)

    Doebel, Sabine; Zelazo, Philip David

    2016-12-01

    Engaging executive function often requires overriding a prepotent response in favor of a conflicting but adaptive one. Language may play a key role in this ability by supporting integrated representations of conflicting rules. We tested whether experience with contrastive language that could support such representations benefits executive function in 3-year-old children. Children who received brief experience with language highlighting contrast between objects, attributes, and actions showed greater executive function on two of three 'conflict' executive function tasks than children who received experience with contrasting stimuli only and children who read storybooks with the experimenter, controlling for baseline executive function. Experience with contrasting stimuli did not benefit executive function relative to reading books with the experimenter, indicating experience with contrastive language, rather than experience with contrast generally, was key. Experience with contrastive language also boosted spontaneous attention to contrast, consistent with improvements in representing contrast. These findings indicate a role for language in executive function that is consistent with the Cognitive Complexity and Control theory's key claim that coordinating conflicting rules is critical to overcoming perseveration, and suggest new ideas for testing theories of executive function. Copyright © 2016 Elsevier B.V. All rights reserved.

  11. Language Learners Perceptions and Experiences on the Use of Mobile Applications for Independent Language Learning in Higher Education

    Directory of Open Access Journals (Sweden)

    Ana Niño

    2015-08-01

    Full Text Available With the widespread use of mobile phones and portable devices it is inevitable to think of Mobile Assisted Language Learning as a means of independent learning in Higher Education. Nowadays many learners are keen to explore the wide variety of applications available on their portable and always readily available mobile phones and tablets. The fact that they are keen to take control of their learning and autonomy is thought to lead to greater motivation and engagement, and the link with games-based learning suggests that the fun factor involved should not be overlooked. This paper focuses on the use of mobile applications for independent language learning in higher education. It investigates how learners use mobile apps in line with their classes to enhance their learning experience. We base our analysis on a survey carried out in autumn 2013 in which 286 credited and non-credited language students from various levels of proficiency at The University of Manchester express their perceptions of the advantages and disadvantages of the use of mobile applications for independent language learning, together with examples of useful apps and suggestions of how these could be integrated in the language class.

  12. Use of spoken and written Japanese did not protect Japanese-American men from cognitive decline in late life.

    Science.gov (United States)

    Crane, Paul K; Gruhl, Jonathan C; Erosheva, Elena A; Gibbons, Laura E; McCurry, Susan M; Rhoads, Kristoffer; Nguyen, Viet; Arani, Keerthi; Masaki, Kamal; White, Lon

    2010-11-01

    Spoken bilingualism may be associated with cognitive reserve. Mastering a complicated written language may be associated with additional reserve. We sought to determine if midlife use of spoken and written Japanese was associated with lower rates of late life cognitive decline. Participants were second-generation Japanese-American men from the Hawaiian island of Oahu, born 1900-1919, free of dementia in 1991, and categorized based on midlife self-reported use of spoken and written Japanese (total n included in primary analysis = 2,520). Cognitive functioning was measured with the Cognitive Abilities Screening Instrument scored using item response theory. We used mixed effects models, controlling for age, income, education, smoking status, apolipoprotein E e4 alleles, and number of study visits. Rates of cognitive decline were not related to use of spoken or written Japanese. This finding was consistent across numerous sensitivity analyses. We did not find evidence to support the hypothesis that multilingualism is associated with cognitive reserve.
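    A longitudinal mixed-effects analysis of this general kind can be sketched with statsmodels; the simulated data frame, column names, and model formula below are hypothetical stand-ins (the original analysis used IRT-scored CASI and additional covariates), not the study's code.

```python
# Mixed-effects model of cognitive scores over time with a random intercept and
# slope per subject; tests whether written-Japanese use changes the decline rate.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n_subj, n_visits = 100, 4
df = pd.DataFrame({
    "subject": np.repeat(np.arange(n_subj), n_visits),
    "years": np.tile(np.arange(n_visits), n_subj),
    "write_jpn": np.repeat(rng.integers(0, 2, n_subj), n_visits),
    "age": np.repeat(rng.normal(75, 4, n_subj), n_visits),
})
subj_effect = np.repeat(rng.normal(0, 2, n_subj), n_visits)
df["casi"] = (90 - 0.8 * df["years"] - 0.1 * (df["age"] - 75)
              + subj_effect + rng.normal(0, 1, len(df)))

model = smf.mixedlm("casi ~ years * write_jpn + age", df,
                    groups=df["subject"], re_formula="~years")
print(model.fit().summary())
```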

  13. Interference of the spoken language on children's writing: cancellation processes of the dental occlusive /d/ and final vibrant /r/

    Directory of Open Access Journals (Sweden)

    Socorro Cláudia Tavares de Sousa

    2009-01-01

    Full Text Available The present study investigates the influence of spoken language on children's writing with respect to the cancellation of the dental occlusive /d/ and the final vibrant /r/. We developed and applied a research instrument with primary school pupils in Fortaleza and analyzed the data using the SPSS software. The results showed that male sex and words of three or more syllables are factors that partially influence the realization of the dependent variable /no/, and that verbs and level of schooling are conditioning factors for the cancellation of the final vibrant /r/.

  14. Learning across Languages: Bilingual Experience Supports Dual Language Statistical Word Segmentation

    Science.gov (United States)

    Antovich, Dylan M.; Graf Estes, Katharine

    2018-01-01

    Bilingual acquisition presents learning challenges beyond those found in monolingual environments, including the need to segment speech in two languages. Infants may use statistical cues, such as syllable-level transitional probabilities, to segment words from fluent speech. In the present study we assessed monolingual and bilingual 14-month-olds'…
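    Syllable-level transitional probabilities, the statistical cue referred to here, are simple to compute: TP(x -> y) = count(xy) / count(x) over a syllable stream, with word boundaries posited where TP dips. The toy "words", stream, and boundary rule below are illustrative assumptions, not the study's stimuli or analysis.

```python
# Compute syllable transitional probabilities over a toy stream and flag
# low-TP transitions as candidate word boundaries.
import random
from collections import Counter

random.seed(0)
words = ["golabu", "tupiro", "bidaku"]                   # hypothetical trisyllabic words
syllabify = lambda w: [w[i:i + 2] for i in range(0, len(w), 2)]
stream = [s for w in (random.choice(words) for _ in range(200)) for s in syllabify(w)]

pair_counts = Counter(zip(stream, stream[1:]))
first_counts = Counter(stream[:-1])
tp = {pair: count / first_counts[pair[0]] for pair, count in pair_counts.items()}

mean_tp = sum(tp.values()) / len(tp)
for (x, y), p in sorted(tp.items(), key=lambda kv: kv[1]):
    marker = "<- candidate word boundary" if p < mean_tp else ""
    print(f"TP({x} -> {y}) = {p:.2f} {marker}")
```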

  15. Implications of Hegel's Theories of Language on Second Language Teaching

    Science.gov (United States)

    Wu, Manfred

    2016-01-01

    This article explores the implications of Hegel's theories of language on second language (L2) teaching. Three among the various concepts in Hegel's theories of language are selected. They are the crucial role of intersubjectivity; the primacy of the spoken over the written form; and the importance of the training of form or grammar. Applying…

  16. Inuit Sign Language: a contribution to sign language typology

    NARCIS (Netherlands)

    Schuit, J.; Baker, A.; Pfau, R.

    2011-01-01

    Sign language typology is a fairly new research field and typological classifications have yet to be established. For spoken languages, these classifications are generally based on typological parameters; it would thus be desirable to establish these for sign languages. In this paper, different

  17. Spanish as a Second Language when L1 Is Quechua: Endangered Languages and the SLA Researcher

    Science.gov (United States)

    Kalt, Susan E.

    2012-01-01

    Spanish is one of the most widely spoken languages in the world. Quechua is the largest indigenous language family to constitute the first language (L1) of second language (L2) Spanish speakers. Despite sheer number of speakers and typologically interesting contrasts, Quechua-Spanish second language acquisition is a nearly untapped research area,…

  18. Syllable Frequency and Spoken Word Recognition: An Inhibitory Effect.

    Science.gov (United States)

    González-Alvarez, Julio; Palomar-García, María-Angeles

    2016-08-01

    Research has shown that syllables play a relevant role in lexical access in Spanish, a shallow language with a transparent syllabic structure. Syllable frequency has been shown to have an inhibitory effect on visual word recognition in Spanish. However, no study has examined the syllable frequency effect on spoken word recognition. The present study tested the effect of the frequency of the first syllable on recognition of spoken Spanish words. A sample of 45 young adults (33 women, 12 men; M = 20.4, SD = 2.8; college students) performed an auditory lexical decision on 128 Spanish disyllabic words and 128 disyllabic nonwords. Words were selected so that lexical and first syllable frequency were manipulated in a within-subject 2 × 2 design, and six additional independent variables were controlled: token positional frequency of the second syllable, number of phonemes, position of lexical stress, number of phonological neighbors, number of phonological neighbors that have higher frequencies than the word, and acoustical durations measured in milliseconds. Decision latencies and error rates were submitted to linear mixed models analysis. Results showed a typical facilitatory effect of the lexical frequency and, importantly, an inhibitory effect of the first syllable frequency on reaction times and error rates. © The Author(s) 2016.

  19. The contribution of phonological knowledge, memory, and language background to reading comprehension in deaf populations

    Science.gov (United States)

    Hirshorn, Elizabeth A.; Dye, Matthew W. G.; Hauser, Peter; Supalla, Ted R.; Bavelier, Daphne

    2015-01-01

    While reading is challenging for many deaf individuals, some become proficient readers. Little is known about the component processes that support reading comprehension in these individuals. Speech-based phonological knowledge is one of the strongest predictors of reading comprehension in hearing individuals, yet its role in deaf readers is controversial. This could reflect the highly varied language backgrounds among deaf readers as well as the difficulty of disentangling the relative contribution of phonological versus orthographic knowledge of spoken language, in our case ‘English,’ in this population. Here we assessed the impact of language experience on reading comprehension in deaf readers by recruiting oral deaf individuals, who use spoken English as their primary mode of communication, and deaf native signers of American Sign Language. First, to address the contribution of spoken English phonological knowledge in deaf readers, we present novel tasks that evaluate phonological versus orthographic knowledge. Second, the impact of this knowledge, as well as memory measures that rely differentially on phonological (serial recall) and semantic (free recall) processing, on reading comprehension was evaluated. The best predictor of reading comprehension differed as a function of language experience, with free recall being a better predictor in deaf native signers than in oral deaf. In contrast, the measures of English phonological knowledge, independent of orthographic knowledge, best predicted reading comprehension in oral deaf individuals. These results suggest successful reading strategies differ across deaf readers as a function of their language experience, and highlight a possible alternative route to literacy in deaf native signers. Highlights: 1. Deaf individuals vary in their orthographic and phonological knowledge of English as a function of their language experience. 2. Reading comprehension was best predicted by different factors in oral deaf and

  20. English and Mauritian Creole: A Reflection on How the Vocabulary, Grammar and Syntax of the Two Languages Create Difficulties for Learners

    OpenAIRE

    Kobita Kumari Jugnauth

    2018-01-01

    The purpose of this paper is to reflect on the various linguistic reasons that cause Mauritian students to experience difficulties while learning English. As Mauritius is a former British and French colony, most Mauritians are bilingual. Both English and French are compulsory subjects up to Cambridge O’Level. English is the official language and also the language of instruction, but French is much more widely used and spoken. Also, Mauritian Creole is the mother tongue of the majority of Maurit...

  1. Structural borrowing: The case of Kenyan Sign Language (KSL) and ...

    African Journals Online (AJOL)

    Kenyan Sign Language (KSL) is a visual gestural language used by members of the deaf community in Kenya. Kiswahili on the other hand is a Bantu language that is used as the national language of Kenya. The two are worlds apart, one being a spoken language and the other a signed language, and thus their “… basic ...

  2. SADE: system of acquisition of experimental data. Definition and analysis of an experiment description language

    International Nuclear Information System (INIS)

    Gagniere, Jean-Michel

    1983-01-01

    This research thesis presents a computer system for the acquisition of experimental data, aimed at acquiring, processing and storing information from particle detectors. The acquisition configuration is described in an experiment description language. The system comprises a lexical analyser, a syntactic analyser, a translator, and a data processing module, together with a control language and a statistics management and plotting module. The translator builds up a series of tables which allow different sequences to be executed during an experiment: running the experiment, performing calculations on the acquired data, and building up statistics. Short execution time and ease of use were primary design goals. [fr]
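    The lexer/translator pipeline described above can be illustrated with a toy sketch for a made-up experiment description language; the token set, statement syntax, and resulting acquisition table are all hypothetical, not the system's actual language.

```python
# Tokenize a toy experiment description and translate each statement into a
# table entry, mimicking the lexer -> translator -> tables flow described above.
import re

description = """
detector SI1 channels 4096
sequence ACQ1 duration 60
histogram E1 bins 1024
"""

TOKEN_RE = re.compile(
    r"(?P<KEYWORD>detector|sequence|histogram|channels|duration|bins)"
    r"|(?P<NUMBER>\d+)|(?P<IDENT>[A-Za-z]\w*)|(?P<SKIP>\s+)"
)

def tokenize(text):
    for m in TOKEN_RE.finditer(text):
        if m.lastgroup != "SKIP":
            yield m.lastgroup, m.group()

tables = []
for line in description.strip().splitlines():
    tokens = list(tokenize(line))
    kind, name = tokens[0][1], tokens[1][1]
    params = {tokens[i][1]: int(tokens[i + 1][1]) for i in range(2, len(tokens), 2)}
    tables.append({"kind": kind, "name": name, **params})

for entry in tables:
    print(entry)
```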

  3. In-service English language training for Italian Primary School Teachers An experience in syllabus design

    Directory of Open Access Journals (Sweden)

    Barbara Dawes

    2013-06-01

    Full Text Available The aim of this paper is to report on an in-service English Language Teacher Training Programme devised for the Government project to equip Italian primary school teachers  with the skills to teach English. The paper focuses on the first phase of the project which envisaged research into the best training models and the preparation of appropriate  English Language syllabuses. In  the first three sections of the paper we report on the experience of designing the language syllabus. In the last section we suggest ways of using the syllabus as a tool for self reflective professional development.

  4. Postschool Educational and Employment Experiences of Young People with Specific Language Impairment

    Science.gov (United States)

    Conti-Ramsden, Gina; Durkin, Kevin

    2012-01-01

    Purpose: This study examined the postschool educational and employment experiences of young people with and without specific language impairment (SLI). Method: Nineteen-year-olds with (n = 50) and without (n = 50) SLI were interviewed on their education and employment experiences since finishing compulsory secondary education. Results: On average,…

  5. The M-Learning Experience of Language Learners in Informal Settings

    Science.gov (United States)

    Sendurur, Emine; Efendioglu, Esra; Çaliskan, Neslihan Yondemir; Boldbaatar, Nomin; Kandin, Emine; Namazli, Sevinç

    2017-01-01

    This study is designed to understand the informal language learners' experiences of m-learning applications. The aim is two-folded: (i) to extract the reasons why m-learning applications are preferred and (ii) to explore the user experience of Duolingo m-learning application. We interviewed 18 voluntary Duolingo users. The findings suggest that…

  6. The impact of musical training and tone language experience on talker identification.

    Science.gov (United States)

    Xie, Xin; Myers, Emily

    2015-01-01

    Listeners can use pitch changes in speech to identify talkers. Individuals exhibit large variability in sensitivity to pitch and in accuracy perceiving talker identity. In particular, people who have musical training or long-term tone language use are found to have enhanced pitch perception. In the present study, the influence of pitch experience on talker identification was investigated as listeners identified talkers in native language as well as non-native languages. Experiment 1 was designed to explore the influence of pitch experience on talker identification in two groups of individuals with potential advantages for pitch processing: musicians and tone language speakers. Experiment 2 further investigated individual differences in pitch processing and the contribution to talker identification by testing a mediation model. Cumulatively, the results suggested that (a) musical training confers an advantage for talker identification, supporting a shared resources hypothesis regarding music and language and (b) linguistic use of lexical tones also increases accuracy in hearing talker identity. Importantly, these two types of hearing experience enhance talker identification by sharpening pitch perception skills in a domain-general manner.

  7. Mobile Information Access with Spoken Query Answering

    DEFF Research Database (Denmark)

    Brøndsted, Tom; Larsen, Henrik Legind; Larsen, Lars Bo

    2006-01-01

    window focused over the part which most likely contains an answer to the query. The two systems are integrated into a full spoken query answering system. The prototype can answer queries and questions within the chosen football (soccer) test domain, but the system has the flexibility for being ported...

  8. Spoken Ayacucho Quechua, Units 11-20.

    Science.gov (United States)

    PARKER, GARY J.; SOLA, DONALD F.

    The essentials of Ayacucho grammar were presented in the first volume of this series, Spoken Ayacucho Quechua, Units 1-10. The 10 units in this volume (11-20) are intended for use in an intermediate or advanced course, and present the student with lengthier and more complex dialogs, conversations, "listening-ins," and dictations as well…

  9. SPOKEN CUZCO QUECHUA, UNITS 7-12.

    Science.gov (United States)

    SOLA, DONALD F.; AND OTHERS

    THIS SECOND VOLUME OF AN INTRODUCTORY COURSE IN SPOKEN CUZCO QUECHUA ALSO COMPRISES ENOUGH MATERIAL FOR ONE INTENSIVE SUMMER SESSION COURSE OR ONE SEMESTER OF SEMI-INTENSIVE INSTRUCTION (120 CLASS HOURS). THE METHOD OF PRESENTATION IS ESSENTIALLY THE SAME AS IN THE FIRST VOLUME WITH FURTHER CONTRASTIVE, LINGUISTIC ANALYSIS OF ENGLISH-QUECHUA…

  10. SPOKEN COCHABAMBA QUECHUA, UNITS 13-24.

    Science.gov (United States)

    LASTRA, YOLANDA; SOLA, DONALD F.

    UNITS 13-24 OF THE SPOKEN COCHABAMBA QUECHUA COURSE FOLLOW THE GENERAL FORMAT OF THE FIRST VOLUME (UNITS 1-12). THIS SECOND VOLUME IS INTENDED FOR USE IN AN INTERMEDIATE OR ADVANCED COURSE AND INCLUDES MORE COMPLEX DIALOGS, CONVERSATIONS, "LISTENING-INS," AND DICTATIONS, AS WELL AS GRAMMAR AND EXERCISE SECTIONS COVERING ADDITIONAL…

  11. SPOKEN AYACUCHO QUECHUA. UNITS 1-10.

    Science.gov (United States)

    PARKER, GARY J.; SOLA, DONALD F.

    THIS BEGINNING COURSE IN AYACUCHO QUECHUA, SPOKEN BY ABOUT A MILLION PEOPLE IN SOUTH-CENTRAL PERU, WAS PREPARED TO INTRODUCE THE PHONOLOGY AND GRAMMAR OF THIS DIALECT TO SPEAKERS OF ENGLISH. THE FIRST OF TWO VOLUMES, IT SERVES AS A TEXT FOR A 6-WEEK INTENSIVE COURSE OF 20 CLASS HOURS A WEEK. THE AUTHORS COMPARE AND CONTRAST SIGNIFICANT FEATURES OF…

  12. A Grammar of Spoken Brazilian Portuguese.

    Science.gov (United States)

    Thomas, Earl W.

    This is a first-year text of Portuguese grammar based on the Portuguese of moderately educated Brazilians from the area around Rio de Janeiro. Spoken idiomatic usage is emphasized. An important innovation is found in the presentation of verb tenses; they are presented in the order in which the native speaker learns them. The text is intended to…

  13. Towards Affordable Disclosure of Spoken Word Archives

    NARCIS (Netherlands)

    Ordelman, Roeland J.F.; Heeren, W.F.L.; Huijbregts, M.A.H.; Hiemstra, Djoerd; de Jong, Franciska M.G.; Larson, M; Fernie, K; Oomen, J; Cigarran, J.

    2008-01-01

    This paper presents and discusses ongoing work aiming at affordable disclosure of real-world spoken word archives in general, and in particular of a collection of recorded interviews with Dutch survivors of World War II concentration camp Buchenwald. Given such collections, the least we want to be…

  14. Towards Affordable Disclosure of Spoken Heritage Archives

    NARCIS (Netherlands)

    Larson, M; Ordelman, Roeland J.F.; Heeren, W.F.L.; Fernie, K; de Jong, Franciska M.G.; Huijbregts, M.A.H.; Oomen, J; Hiemstra, Djoerd

    2009-01-01

    This paper presents and discusses ongoing work aiming at affordable disclosure of real-world spoken heritage archives in general, and in particular of a collection of recorded interviews with Dutch survivors of World War II concentration camp Buchenwald. Given such collections, we at least want to…

  15. Mapping Students' Spoken Conceptions of Equality

    Science.gov (United States)

    Anakin, Megan

    2013-01-01

    This study expands contemporary theorising about students' conceptions of equality. A nationally representative sample of New Zealand students was asked to provide a spoken numerical response and an explanation as they solved an arithmetic additive missing-number problem. Students' responses were conceptualised as acts of communication and…

  16. Parental mode of communication is essential for speech and language outcomes in cochlear implanted children

    DEFF Research Database (Denmark)

    Percy-Smith, Lone; Cayé-Thomasen, Per; Breinegaard, Nina

    2010-01-01

    The present study demonstrates a very strong effect of the parental communication mode on the auditory capabilities and speech/language outcome for cochlear implanted children. The children exposed to spoken language had higher odds of scoring high in all tests applied, and the findings suggest a very clear benefit of spoken language communication with a cochlear implanted child.

  17. Business Spoken English Learning Strategies for Chinese Enterprise Staff

    Institute of Scientific and Technical Information of China (English)

    Han Li

    2013-01-01

    This study addresses the issue of promoting effective Business Spoken English among enterprise staff in China. It aims to assess spoken English learning methods and to identify the difficulties of learning oral English expression in the business area. It also provides strategies for enhancing enterprise staff's level of Business Spoken English.

  18. Language and Literacy: The Case of India.

    Science.gov (United States)

    Sridhar, Kamal K.

    Language and literacy issues in India are reviewed in terms of background, steps taken to combat illiteracy, and some problems associated with literacy. The following facts are noted: India has 106 languages spoken by more than 685 million people, there are several minor script systems, a major language has different dialects, a language may use…

  19. Areas Recruited during Action Understanding Are Not Modulated by Auditory or Sign Language Experience.

    Science.gov (United States)

    Fang, Yuxing; Chen, Quanjing; Lingnau, Angelika; Han, Zaizhu; Bi, Yanchao

    2016-01-01

    The observation of other people's actions recruits a network of areas including the inferior frontal gyrus (IFG), the inferior parietal lobule (IPL), and posterior middle temporal gyrus (pMTG). These regions have been shown to be activated through both visual and auditory inputs. Intriguingly, previous studies found no engagement of IFG and IPL for deaf participants during non-linguistic action observation, leading to the proposal that auditory experience or sign language usage might shape the functionality of these areas. To understand which variables induce plastic changes in areas recruited during the processing of other people's actions, we examined the effects of tasks (action understanding and passive viewing) and effectors (arm actions vs. leg actions), as well as sign language experience in a group of 12 congenitally deaf signers and 13 hearing participants. In Experiment 1, we found a stronger activation during an action recognition task in comparison to a low-level visual control task in IFG, IPL and pMTG in both deaf signers and hearing individuals, but no effect of auditory or sign language experience. In Experiment 2, we replicated the results of the first experiment using a passive viewing task. Together, our results provide robust evidence demonstrating that the response obtained in IFG, IPL, and pMTG during action recognition and passive viewing is not affected by auditory or sign language experience, adding further support for the supra-modal nature of these regions.

  20. U.S. Airline Transport Pilot International Flight Language Experiences, Report 3: Language Experiences in Non-Native English-Speaking Airspace/Airports

    Science.gov (United States)

    2010-05-01

    …a speaker is also affected by hearing loss. Some of the symptoms of age-related hearing loss include: (1) Difficulty understanding spoken words… words seem like one gigantic word to me. I can't figure out where the words break apart. It seems to me that we'll ask for repeats from female…

  1. Syllable frequency and word frequency effects in spoken and written word production in a non-alphabetic script

    Directory of Open Access Journals (Sweden)

    Qingfang eZhang

    2014-02-01

    Full Text Available The effects of word frequency and syllable frequency are well-established phenomena in domains such as spoken production in alphabetic languages. Chinese, as a non-alphabetic language, presents unique lexical and phonological properties in speech production. For example, the proximate unit of phonological encoding is the syllable in Chinese but the segment in Dutch, French or English. The present study investigated the effects of word frequency and syllable frequency, and their interaction, in Chinese written and spoken production. Significant facilitatory word frequency and syllable frequency effects were observed in spoken as well as in written production. The syllable frequency effect in writing indicated that phonological properties (i.e., syllable frequency) constrain orthographic output via a lexical route, at least in Chinese written production. However, the syllable frequency effect across repetitions diverged between the two modalities: it was significant in the first two repetitions in spoken production, whereas it was significant only in the second repetition in written production. Given the fragility of the syllable frequency effect in writing, we suggest that the phonological influence in handwritten production is not mandatory or universal, and that it is modulated by experimental manipulations. This provides evidence for the orthographic autonomy hypothesis rather than the phonological mediation hypothesis. The absence of an interaction between word frequency and syllable frequency showed that the syllable frequency effect is independent of the word frequency effect in both spoken and written output modalities. The implications of these results for written production models are discussed.
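To make the interaction logic above concrete, here is a toy sketch (all latencies are invented and purely illustrative, not the study's data) of how a 2x2 word-frequency by syllable-frequency design can be summarized: each frequency effect is a difference of cell means, and the interaction contrast is the difference between the two frequency effects, so a contrast near zero is consistent with independent, additive effects.

```python
# Toy sketch: summarizing a 2x2 frequency design from (invented) mean latencies in ms.
means_ms = {
    ("high_word", "high_syll"): 620, ("high_word", "low_syll"): 650,
    ("low_word",  "high_syll"): 680, ("low_word",  "low_syll"): 710,
}

# Syllable-frequency effect (low minus high) computed separately at each word-frequency level.
syll_effect_high_word = means_ms[("high_word", "low_syll")] - means_ms[("high_word", "high_syll")]
syll_effect_low_word  = means_ms[("low_word",  "low_syll")] - means_ms[("low_word",  "high_syll")]

# Interaction contrast: how much the syllable-frequency effect changes across word-frequency levels.
interaction = syll_effect_low_word - syll_effect_high_word
print(syll_effect_high_word, syll_effect_low_word, interaction)  # 30 30 0 -> additive effects
```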

  2. Teachers’ attitudes, perceptions and experiences in CLIL: A look at content and language

    Directory of Open Access Journals (Sweden)

    Jermaine S. McDougald

    2015-05-01

    Full Text Available This paper is a preliminary report on the "CLIL State-of-the-Art" project in Colombia, drawing on data collected from 140 teachers regarding their attitudes toward, perceptions of, and experiences with CLIL (content and language integrated learning). The term CLIL is used here to refer to teaching contexts in which a foreign language (in these cases, English) is the medium for the teaching and learning of non-language subjects. The data gathered thus far reveal that while teachers presently know very little about CLIL, they are nevertheless actively seeking informal and formal instruction on CLIL. Many of the surveyed teachers are currently teaching content areas through English; approximately half of them reported having had positive experiences teaching content and language together, though the remainder claimed to lack sufficient knowledge in content areas. Almost all of the participants agreed that the CLIL approach can benefit students, helping them develop both language skills and subject knowledge (meaningful communication). However, there is still considerable uncertainty as to the actual state of the art of CLIL in Colombia; greater clarity here will enable educators and decision-makers to make sound decisions for the future of general and language education.

  3. Narrative skills in deaf children who use spoken English: Dissociations between macro and microstructural devices.

    Science.gov (United States)

    Jones, A C; Toscano, E; Botting, N; Marshall, C R; Atkinson, J R; Denmark, T; Herman, R; Morgan, G

    2016-12-01

    Previous research has highlighted that deaf children acquiring spoken English have difficulties in narrative development relative to their hearing peers, both in terms of macro-structure and in the use of micro-structural devices. The majority of previous research focused on narrative tasks designed for hearing children that depend on good receptive language skills. The current study compared narratives of 6- to 11-year-old deaf children who use spoken English (N=59) with those of hearing peers matched for age and non-verbal intelligence. To examine the role of general language abilities, single word vocabulary was also assessed. Narratives were elicited by the retelling of a story presented non-verbally in video format. Results showed that deaf and hearing children had equivalent macro-structure skills, but the deaf group showed poorer performance on micro-structural components. Furthermore, the deaf group gave less detailed responses to inferencing probe questions, indicating poorer understanding of the story's underlying message. For deaf children, micro-level devices correlated most strongly with the vocabulary measure. These findings suggest that deaf children, despite spoken language delays, are able to convey the main elements of content and structure in narrative but have greater difficulty in using grammatical devices that depend more on finer linguistic and pragmatic skills. Crown Copyright © 2016. Published by Elsevier Ltd. All rights reserved.

  4. Spoken sentence production in college students with dyslexia: working memory and vocabulary effects.

    Science.gov (United States)

    Wiseheart, Rebecca; Altmann, Lori J P

    2018-03-01

    Individuals with dyslexia demonstrate syntactic difficulties on tasks of language comprehension, yet little is known about spoken language production in this population. To investigate whether spoken sentence production in college students with dyslexia is less proficient than in typical readers, and to determine whether group differences can be attributed to cognitive differences between groups. Fifty-one college students with and without dyslexia were asked to produce sentences from stimuli comprising a verb and two nouns. Verb types varied in argument structure and morphological form and nouns varied in animacy. Outcome measures were precision (measured by fluency, grammaticality and completeness) and efficiency (measured by response times). Vocabulary and working memory tests were also administered and used as predictors of sentence production performance. Relative to non-dyslexic peers, students with dyslexia responded significantly more slowly and produced sentences that were significantly less precise in terms of fluency, grammaticality and completeness. The primary predictors of precision and efficiency were working memory, which differed between groups, and vocabulary, which did not. College students with dyslexia were significantly less facile and flexible on this spoken sentence-production task than typical readers, which is consistent with previous studies of school-age children with dyslexia. Group differences in performance were traced primarily to limited working memory, and were somewhat mitigated by strong vocabulary. © 2017 Royal College of Speech and Language Therapists.

  5. Experience gained in running the EPRI MMS code with an in-house simulation language

    International Nuclear Information System (INIS)

    Weber, D.S.

    1987-01-01

    The EPRI Modular Modeling System (MMS) code comprises a collection of component models and a steam/water properties package. This code has undergone extensive verification and validation testing. Currently, the code requires a commercially available simulation language to run. The Philadelphia Electric Company (PECO) has been modeling power plant systems for over sixteen years. As a result, an extensive library of models has been developed, along with considerable experience in using an in-house simulation language. The objective of this study was to explore the possibility of developing an MMS pre-processor which would allow the MMS package to be used with other simulation languages, such as the PECO in-house simulation language.

  6. Linguistic Identity Positioning in Facebook Posts During Second Language Study Abroad: One Teen’s Language Use, Experience, and Awareness

    Directory of Open Access Journals (Sweden)

    Roswita Dressler

    2016-12-01

    Full Text Available Abstract Teens who post on the popular social networking site Facebook in their home environment often continue to do so on second language study abroad sojourns. These sojourners use Facebook to document and make sense of their experiences in the host culture and position themselves with respect to language(s) and culture(s). This study examined one teen's identity positioning through her Facebook posts from two separate study abroad experiences in Germany. Data sources included her Facebook posts from both sojourns and a written reflection completed upon return from the second sojourn. Findings revealed that this teen used Facebook posts to position herself as a German-English bilingual and a member of an imagined community of German-English bilinguals by making choices about which language(s) to use, reporting her linguistic successes and challenges, and indicating growing language awareness. This study addresses the call by study abroad researchers (Coleman, 2013; Kinginger, 2009, 2013; Mitchell, Tracy-Ventura, & McManus, 2015) to investigate the effects of social media, such as Facebook, as part of the contemporary culture of study abroad, and sheds light on the role it plays, especially regarding second language identity positioning. Résumé Adolescents who post on the social networking site Facebook in their home environment continue to do so during their stays abroad. These adolescents use Facebook to document and reflect on their experiences in the host country and to position themselves in relation to their language(s) and culture(s). This study examined one adolescent's identity positioning through Facebook posts during two different stays in Germany. The data from these experiences include Facebook posts from both stays and a written reflection completed upon her return from the second stay. The results…

  7. Sentence Repetition in Deaf Children with Specific Language Impairment in British Sign Language

    Science.gov (United States)

    Marshall, Chloë; Mason, Kathryn; Rowley, Katherine; Herman, Rosalind; Atkinson, Joanna; Woll, Bencie; Morgan, Gary

    2015-01-01

    Children with specific language impairment (SLI) perform poorly on sentence repetition tasks across different spoken languages, but until now, this methodology has not been investigated in children who have SLI in a signed language. Users of a natural sign language encode different sentence meanings through their choice of signs and by altering…

  8. Componential Skills in Second Language Development of Bilingual Children with Specific Language Impairment

    Science.gov (United States)

    Verhoeven, Ludo; Steenge, Judit; van Leeuwe, Jan; van Balkom, Hans

    2017-01-01

    In this study, we investigated which componential skills can be distinguished in the second language (L2) development of 140 bilingual children with specific language impairment in the Netherlands, aged 6-11 years, divided into 3 age groups. L2 development was assessed by means of spoken language tasks representing different language skills…

  9. Family Language Policy and School Language Choice: Pathways to Bilingualism and Multilingualism in a Canadian Context

    Science.gov (United States)

    Slavkov, Nikolay

    2017-01-01

    This article reports on a survey with 170 school-age children growing up with two or more languages in the Canadian province of Ontario where English is the majority language, French is a minority language, and numerous other minority languages may be spoken by immigrant or Indigenous residents. Within this context the study focuses on minority…

  10. Language-mediated visual orienting behavior in low and high literates

    Directory of Open Access Journals (Sweden)

    Falk eHuettig

    2011-10-01

    Full Text Available The influence of formal literacy on spoken language-mediated visual orienting was investigated by using a simple look and listen task (cf. Huettig & Altmann, 2005) which resembles everyday behavior. In Experiment 1, high and low literates listened to spoken sentences containing a target word (e.g., 'magar', crocodile) while at the same time looking at a visual display of four objects (a phonological competitor of the target word, e.g., 'matar', peas; a semantic competitor, e.g., 'kachuwa', turtle; and two unrelated distractors). In Experiment 2 the semantic competitor was replaced with another unrelated distractor. Both groups of participants shifted their eye gaze to the semantic competitors (Experiment 1). In both experiments high literates shifted their eye gaze towards phonological competitors as soon as phonological information became available and moved their eyes away as soon as the acoustic information mismatched. Low literates, in contrast, only used phonological information when semantic matches between spoken word and visual referent were impossible (Experiment 2), but in contrast to high literates these phonologically-mediated shifts in eye gaze were not closely time-locked to the speech input. We conclude that in high literates language-mediated shifts in overt attention are co-determined by the type of information in the visual environment, the timing of cascaded processing in the word- and object-recognition systems, and the temporal unfolding of the spoken language. Our findings indicate that low literates exhibit a similar cognitive behavior but instead of participating in a tug-of-war among multiple types of cognitive representations, word-object mapping is achieved primarily at the semantic level. If forced, for instance by a situation in which semantic matches are not present (Experiment 2), low literates may on occasion have to rely on phonological information but do so in a much less proficient manner than their highly literate…
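As a rough illustration of how looking behavior in such a look-and-listen task is commonly quantified (this is not the authors' analysis; the sample data, bin width and object labels below are invented), eye-tracking samples can be binned by time relative to target-word onset and converted into the proportion of fixations on each object type:

```python
# Hypothetical sketch: proportion of fixations per object type in successive time bins.
from collections import defaultdict

# Each sample: (time in ms relative to target-word onset, object currently fixated).
samples = [
    (50, "phonological_competitor"), (50, "unrelated"),
    (250, "phonological_competitor"), (250, "semantic_competitor"),
    (450, "semantic_competitor"), (450, "semantic_competitor"),
]

BIN_MS = 200  # width of each time bin
ROLES = ("target", "phonological_competitor", "semantic_competitor", "unrelated")

def fixation_proportions(samples, bin_ms=BIN_MS):
    """Return {bin_start: {role: proportion of samples fixating that role}}."""
    counts = defaultdict(lambda: defaultdict(int))
    for t, role in samples:
        counts[(t // bin_ms) * bin_ms][role] += 1
    props = {}
    for bin_start, role_counts in sorted(counts.items()):
        total = sum(role_counts.values())
        props[bin_start] = {r: role_counts[r] / total for r in ROLES}
    return props

for bin_start, props in fixation_proportions(samples).items():
    print(bin_start, props)
```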

  11. Supporting Academic Language Development in Elementary Science: A Classroom Teaching Experiment

    Science.gov (United States)

    Jung, Karl Gerhard

    Academic language is the language that students must engage in while participating in the teaching and learning that takes place in school (Schleppegrell, 2012) and science as a content area presents specific challenges and opportunities for students to engage with language (Buxton & Lee, 2014; Gee, 2005). In order for students to engage authentically and fully in the science learning that will take place in their classrooms, it is important that they develop their abilities to use science academic language (National Research Council, 2012). For this to occur, teachers must provide support to their students in developing the science academic language they will encounter in their classrooms. Unfortunately, this type of support remains a challenge for many teachers (Baecher, Farnsworth, & Ediger, 2014; Bigelow, 2010; Fisher & Frey, 2010) and teachers must receive professional development that supports their abilities to provide instruction that supports and scaffolds students' science academic language use and development. This study investigates an elementary science teacher's engagement in an instructional coaching partnership to explore how that teacher planned and implemented scaffolds for science academic language. Using a theoretical framework that combines the literature on scaffolding (Bunch, Walqui, & Kibler, 2015; Gibbons, 2015; Sharpe, 2001/2006) and instructional coaching (Knight, 2007/2009), this study sought to understand how an elementary science teacher plans and implements scaffolds for science academic language, and the resources that assisted the teacher in planning those scaffolds. The overarching goal of this work is to understand how elementary science teachers can scaffold language in their classroom, and how they can be supported in that work. Using a classroom teaching experiment methodology (Cobb, 2000) and constructivist grounded theory methods (Charmaz, 2014) for analysis, this study examined coaching conversations and classroom

  12. Evaluating spoken dialogue systems according to de-facto standards: A case study

    NARCIS (Netherlands)

    Möller, S.; Smeele, P.; Boland, H.; Krebber, J.

    2007-01-01

    In the present paper, we investigate the validity and reliability of de-facto evaluation standards, defined for measuring or predicting the quality of the interaction with spoken dialogue systems. Two experiments have been carried out with a dialogue system for controlling domestic devices. During

  13. Orthographic consistency affects spoken word recognition at different grain-sizes

    DEFF Research Database (Denmark)

    Dich, Nadya

    2014-01-01

    A number of previous studies found that the consistency of sound-to-spelling mappings (feedback consistency) affects spoken word recognition. In auditory lexical decision experiments, words that can only be spelled one way are recognized faster than words with multiple potential spellings. Previous…

  14. The Temporal Dynamics of Spoken Word Recognition in Adverse Listening Conditions

    Science.gov (United States)

    Brouwer, Susanne; Bradlow, Ann R.

    2016-01-01

    This study examined the temporal dynamics of spoken word recognition in noise and background speech. In two visual-world experiments, English participants listened to target words while looking at four pictures on the screen: a target (e.g. "candle"), an onset competitor (e.g. "candy"), a rhyme competitor (e.g.…

  15. Orthographic Consistency Affects Spoken Word Recognition at Different Grain-Sizes

    Science.gov (United States)

    Dich, Nadya

    2014-01-01

    A number of previous studies found that the consistency of sound-to-spelling mappings (feedback consistency) affects spoken word recognition. In auditory lexical decision experiments, words that can only be spelled one way are recognized faster than words with multiple potential spellings. Previous studies demonstrated this by manipulating…

  16. Phonotactics Constraints and the Spoken Word Recognition of Chinese Words in Speech

    Science.gov (United States)

    Yip, Michael C.

    2016-01-01

    Two word-spotting experiments were conducted to examine the question of whether native Cantonese listeners are constrained by phonotactic information in the spoken word recognition of Chinese words in speech. Because no legal consonant clusters occur within an individual Chinese word, this kind of categorical phonotactic information of Chinese…

  17. Fourth International Workshop on Spoken Dialog Systems

    CERN Document Server

    Rosset, Sophie; Garnier-Rizet, Martine; Devillers, Laurence; Natural Interaction with Robots, Knowbots and Smartphones: Putting Spoken Dialog Systems into Practice

    2014-01-01

    These proceedings present the state of the art in spoken dialog systems, with applications in robotics, knowledge access and communication. They specifically address: 1. Dialog for interacting with smartphones; 2. Dialog for open-domain knowledge access; 3. Dialog for robot interaction; 4. Mediated dialog (including cross-lingual dialog involving speech translation); and 5. Dialog quality evaluation. These articles were presented at the IWSDS 2012 workshop.

  18. Long-Term Experience with Chinese Language Shapes the Fusiform Asymmetry of English Reading

    Science.gov (United States)

    Mei, Leilei; Xue, Gui; Lu, Zhong-Lin; Chen, Chuansheng; Wei, Miao; He, Qinghua; Dong, Qi

    2015-01-01

    Previous studies have suggested differential engagement of the bilateral fusiform gyrus in the processing of Chinese and English. The present study tested the possibility that long-term experience with Chinese language affects the fusiform laterality of English reading by comparing three samples: Chinese speakers, English speakers with Chinese experience, and English speakers without Chinese experience. We found that, when reading words in their respective native language, Chinese and English speakers without Chinese experience differed in functional laterality of the posterior fusiform region (right laterality for Chinese speakers, but left laterality for English speakers). More importantly, compared with English speakers without Chinese experience, English speakers with Chinese experience showed more recruitment of the right posterior fusiform cortex for English words and pseudowords, which is similar to how Chinese speakers processed Chinese. These results suggest that long-term experience with Chinese shapes the fusiform laterality of English reading and have important implications for our understanding of the cross-language influences in terms of neural organization and of the functions of different fusiform subregions in reading. PMID:25598049

  19. Micro Language Planning and Cultural Renaissance in Botswana

    Science.gov (United States)

    Alimi, Modupe M.

    2016-01-01

    Many African countries exhibit complex patterns of language use because of linguistic pluralism. The situation is often compounded by the presence of at least one foreign language that is either the official or second language. The language situation in Botswana depicts this complex pattern. Out of the 26 languages spoken in the country, including…

  20. Enhancing Children's Language Learning and Cognition Experience through Interactive Kinetic Typography

    Science.gov (United States)

    Lau, Newman M. L.; Chu, Veni H. T.

    2015-01-01

    This research aimed to investigate the use of kinetic typography and an interactive approach in a design experiment for children to learn vocabulary. Typography is the unique art and technique of arranging type in order to make language visible. By adding animated movement to characters, kinetic typography expresses language…

  1. Language learning experience in school context and metacognitive awareness of multilingual children

    NARCIS (Netherlands)

    Le Pichon Vorstman, E.; de Swart, H.; Ceginskas, V.; van den Bergh, H.

    2009-01-01

    What is the influence of a language learning experience (LLE) in a school context on the metacognitive development of children? To answer that question, we presented 54 multilingual preschoolers with two movie clips and examined their reactions to an exolingual situation of communication. These

  2. What Is the Participant Learning Experience Like Using YouTube to Study a Foreign Language?

    Science.gov (United States)

    Lo, Yuan-Hsiang

    2012-01-01

    This research explores and seeks to understand participants' experiences of using YouTube to learn a foreign language. Learning with YouTube has become more and more popular in recent years. The findings of this research add to the emerging body of knowledge on the YouTube phenomenon. In this research, there are three…

  3. Word Learning in Adults with Second-Language Experience: Effects of Phonological and Referent Familiarity

    Science.gov (United States)

    Kaushanskaya, Margarita; Yoo, Jeewon; Van Hecke, Stephanie

    2013-01-01

    Purpose: The goal of this research was to examine whether phonological familiarity exerts different effects on novel word learning for familiar versus unfamiliar referents and whether successful word learning is associated with increased second-language experience. Method: Eighty-one adult native English speakers with various levels of Spanish…

  4. Word learning in adults with second language experience: Effects of phonological and referent familiarity

    Science.gov (United States)

    Kaushanskaya, Margarita; Yoo, Jeewon; Van Hecke, Stephanie

    2014-01-01

    Purpose The goal of this research was to examine whether phonological familiarity exerts different effects on novel word learning for familiar vs. unfamiliar referents, and whether successful word-learning is associated with increased second-language experience. Method Eighty-one adult native English speakers with various levels of Spanish knowledge learned phonologically-familiar novel words (constructed using English sounds) or phonologically-unfamiliar novel words (constructed using non-English and non-Spanish sounds) in association with either familiar or unfamiliar referents. Retention was tested via a forced-choice recognition-task. A median-split procedure identified high-ability and low-ability word-learners in each condition, and the two groups were compared on measures of second-language experience. Results Findings suggest that the ability to accurately match newly-learned novel names to their appropriate referents is facilitated by phonological familiarity only for familiar referents but not for unfamiliar referents. Moreover, more extensive second-language learning experience characterized superior learners primarily in one word-learning condition: Where phonologically-unfamiliar novel words were paired with familiar referents. Conclusions Together, these findings indicate that phonological familiarity facilitates novel word learning only for familiar referents, and that experience with learning a second language may have a specific impact on novel vocabulary learning in adults. PMID:22992709
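The median-split step described above is simple to express in code. The sketch below is hypothetical (numbers invented, not the study's data): learners are split at the median of their recognition accuracy within a condition, and the two halves are compared on a second-language experience measure.

```python
# Hypothetical sketch of a median split on word-learning accuracy.
from statistics import median, mean

# (accuracy on the forced-choice recognition test, years of L2 study) -- invented values.
learners = [(0.55, 1.0), (0.62, 2.5), (0.71, 3.0), (0.80, 6.0), (0.90, 5.5), (0.85, 4.0)]

cut = median(acc for acc, _ in learners)
high = [l2 for acc, l2 in learners if acc > cut]   # high-ability word-learners
low  = [l2 for acc, l2 in learners if acc <= cut]  # low-ability word-learners

print(f"median accuracy cut-off: {cut:.2f}")
print(f"mean L2 experience, high-ability learners: {mean(high):.2f} years")
print(f"mean L2 experience, low-ability learners:  {mean(low):.2f} years")
```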

  5. Experiences of Student Speech-Language Pathology Clinicians in the Initial Clinical Practicum: A Phenomenological Study

    Science.gov (United States)

    Nelson, Lori A.

    2011-01-01

    Speech-language pathology literature is limited in describing the clinical practicum process from the student perspective. Much of the supervision literature in this field focuses on quantitative research and/or the point of view of the supervisor. Understanding the student experience serves to enhance the quality of clinical supervision. Of…

  6. Language, Institutional Identity and Integration: Lived Experiences of ESL Teachers in Australia

    Science.gov (United States)

    Fotovatian, Sepideh

    2015-01-01

    Globalisation and increased patterns of immigration have turned workplace interactions to arenas for intercultural communication entailing negotiation of identity, membership and "social capital". For many newcomer immigrants, this happens in an additional language and culture--English. This paper presents interaction experiences of four…

  7. Language development of internationally adopted children: Adverse early experiences outweigh the age of acquisition effect.

    Science.gov (United States)

    Rakhlin, Natalia; Hein, Sascha; Doyle, Niamh; Hart, Lesley; Macomber, Donna; Ruchkin, Vladislav; Tan, Mei; Grigorenko, Elena L

    2015-01-01

    We compared English language and cognitive skills between internationally adopted children (IA; mean age at adoption=2.24, SD=1.8) and their non-adopted peers from the US reared in biological families (BF) at two time points. We also examined the relationships between outcome measures and age at initial institutionalization, length of institutionalization, and age at adoption. On measures of general language, early literacy, and non-verbal IQ, the IA group performed significantly below their age-peers reared in biological families at both time points, but the group differences disappeared on receptive vocabulary and kindergarten concept knowledge at the second time point. Furthermore, the majority of children reached normative age expectations between 1 and 2 years post-adoption on all standardized measures. Although the age at adoption, age of institutionalization, length of institutionalization, and time in the adoptive family all demonstrated significant correlations with one or more outcome measures, the negative relationship between length of institutionalization and child outcomes remained most robust after controlling for the other variables. Results point to much flexibility and resilience in children's capacity for language acquisition as well as the potential primacy of length of institutionalization in explaining individual variation in IA children's outcomes. (1) Readers will be able to understand the importance of the pre-adoption environment on language and early literacy development in internationally adopted children. (2) Readers will be able to compare the strength of the association between the length of institutionalization and language outcomes with the strength of the association between the latter and the age at adoption. (3) Readers will be able to understand that internationally adopted children are able to reach age expectations on expressive and receptive language measures despite adverse early experiences and a replacement of their first language…

  8. THE RECOGNITION OF SPOKEN MONO-MORPHEMIC COMPOUNDS IN CHINESE

    Directory of Open Access Journals (Sweden)

    Yu-da Lai

    2012-12-01

    Full Text Available This paper explores the auditory lexical access of mono-morphemic compounds in Chinese as a way of understanding the role of orthography in the recognition of spoken words. In traditional Chinese linguistics, a compound is a word written with two or more characters whether or not they are morphemic. A monomorphemic compound may either be a binding word, written with characters that only appear in this one word, or a non-binding word, written with characters that are chosen for their pronunciation but that also appear in other words. Our goal was to determine if this purely orthographic difference affects auditory lexical access by conducting a series of four experiments with materials matched by whole-word frequency, syllable frequency, cross-syllable predictability, cohort size, and acoustic duration, but differing in binding. An auditory lexical decision task (LDT) found an orthographic effect: binding words were recognized more quickly than non-binding words. However, this effect disappeared in an auditory repetition and in a visual LDT with the same materials, implying that the orthographic effect during auditory lexical access was localized to the decision component and involved the influence of cross-character predictability without the activation of orthographic representations. This claim was further confirmed by overall faster recognition of spoken binding words in a cross-modal LDT with different types of visual interference. The theoretical and practical consequences of these findings are discussed.

  9. Schools and Languages in India.

    Science.gov (United States)

    Harrison, Brian

    1968-01-01

    A brief review of Indian education focuses on special problems caused by overcrowded schools, insufficient funding, and the status of education itself in the Indian social structure. Language instruction in India, a complex issue due largely to the numerous official languages currently spoken, is commented on with special reference to the problem…

  10. Phonological reduplication in sign language: rules rule

    Directory of Open Access Journals (Sweden)

    Iris eBerent

    2014-06-01

    Full Text Available Productivity—the hallmark of linguistic competence—is typically attributed to algebraic rules that support broad generalizations. Past research on spoken language has documented such generalizations in both adults and infants. But whether algebraic rules form part of the linguistic competence of signers remains unknown. To address this question, here we gauge the generalization afforded by American Sign Language (ASL). As a case study, we examine reduplication (X→XX), a rule that, inter alia, generates ASL nouns from verbs. If signers encode this rule, then they should freely extend it to novel syllables, including ones with features that are unattested in ASL. And since reduplicated disyllables are preferred in ASL, such a rule should favor novel reduplicated signs. Novel reduplicated signs should thus be preferred to nonreduplicative controls (in rating), and consequently, such stimuli should also be harder to classify as nonsigns (in the lexical decision task). The results of four experiments support this prediction. These findings suggest that the phonological knowledge of signers includes powerful algebraic rules. The convergence between these conclusions and previous evidence for phonological rules in spoken language suggests that the architecture of the phonological mind is partly amodal.

  11. Estimating Spoken Dialog System Quality with User Models

    CERN Document Server

    Engelbrecht, Klaus-Peter

    2013-01-01

    Spoken dialog systems have the potential to offer highly intuitive user interfaces, as they allow systems to be controlled using natural language. However, the complexity inherent in natural language dialogs means that careful testing of the system must be carried out from the very beginning of the design process. This book examines how user models can be used to support such early evaluations in two ways: by running simulations of dialogs, and by estimating the quality judgments of users. First, a design environment supporting the creation of dialog flows, the simulation of dialogs, and the analysis of the simulated data is proposed. How the quality of user simulations may be quantified with respect to their suitability for both formative and summative evaluation is then discussed. The remainder of the book is dedicated to the problem of predicting quality judgments of users based on interaction data. New modeling approaches are presented, which process the dialogs as sequences, and which allow knowledge…
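As a purely illustrative sketch of the general idea of estimating a user's quality judgment from interaction data (not taken from the book; the feature names, weights and rating scale below are invented), a dialog can be treated as a sequence of turn-level events, reduced to a few aggregate features, and mapped onto a predicted judgment by a simple linear model:

```python
# Hypothetical sketch: predicting a 1-5 user quality judgment from dialog interaction data.

def dialog_features(turns):
    """turns: list of dicts with (invented) keys 'asr_error' and 'reprompt'."""
    return {
        "n_turns": len(turns),
        "n_asr_errors": sum(t["asr_error"] for t in turns),
        "n_reprompts": sum(t["reprompt"] for t in turns),
    }

def predict_judgment(features, weights, bias=4.5):
    """Linear predictor of a 1-5 judgment, clipped to the scale."""
    score = bias + sum(weights[name] * value for name, value in features.items())
    return max(1.0, min(5.0, score))

# Invented weights: longer dialogs, recognition errors and re-prompts lower the judgment.
weights = {"n_turns": -0.05, "n_asr_errors": -0.4, "n_reprompts": -0.3}

dialog = [
    {"asr_error": 0, "reprompt": 0},
    {"asr_error": 1, "reprompt": 1},
    {"asr_error": 0, "reprompt": 0},
]
print(predict_judgment(dialog_features(dialog), weights))  # 3.65
```

In practice such weights would be fitted on logged dialogs paired with actual user ratings; sequence models go further by exploiting the order of events rather than aggregate counts alone.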

  12. Assessing Language Attitudes through a Matched-Guise Experiment: The Case of Consonantal Deletion in Venezuelan Spanish

    Science.gov (United States)

    Diaz-Campos, Manuel; Killam, Jason

    2012-01-01

    This investigation contributes to the understanding of language attitudes toward consonantal deletion by examining its perception using a matched-guise experiment (Casesnoves and Sankoff 2004; Lambert, Hodgson, Gardner, and Fillenbaum 1960) with fifteen listeners. Two experiments were designed for testing language attitudes, one toward…

  13. Korean speech-language pathologists' attitudes toward stuttering according to clinical experiences.

    Science.gov (United States)

    Lee, Kyungjae

    2014-11-01

    Negative attitudes toward stuttering and people who stutter (PWS) are found in various groups of people in many regions. However the results of previous studies examining the influence of fluency coursework and clinical certification on the attitudes of speech-language pathologists (SLPs) toward PWS are equivocal. Furthermore, there have been few empirical studies on the attitudes of Korean SLPs toward stuttering. To determine whether the attitudes of Korean SLPs and speech-language pathology students toward stuttering would be different according to the status of clinical certification, stuttering coursework completion and clinical practicum in stuttering. Survey data from 37 certified Korean SLPs and 70 undergraduate students majoring in speech-language pathology were analysed. All the participants completed the modified Clinician Attitudes Toward Stuttering (CATS) Inventory. Results showed that the diagnosogenic view was still accepted by many participants. Significant differences were found in seven out of 46 CATS Inventory items according to the certification status. In addition significant differences were also found in three items and one item according to stuttering coursework completion and clinical practicum experience in stuttering, respectively. Clinical and educational experience appears to have mixed influences on SLPs' and students' attitudes toward stuttering. While SLPs and students may demonstrate more appropriate understanding and knowledge in certain areas of stuttering, they may feel difficulty in their clinical experience, possibly resulting in low self-efficacy. © 2014 Royal College of Speech and Language Therapists.

  14. Language experience differentiates prefrontal and subcortical activation of the cognitive control network in novel word learning.

    Science.gov (United States)

    Bradley, Kailyn A L; King, Kelly E; Hernandez, Arturo E

    2013-02-15

    The purpose of this study was to examine the cognitive control mechanisms in adult English speaking monolinguals compared to early sequential Spanish-English bilinguals during the initial stages of novel word learning. Functional magnetic resonance imaging during a lexico-semantic task after only 2h of exposure to novel German vocabulary flashcards showed that monolinguals activated a broader set of cortical control regions associated with higher-level cognitive processes, including the supplementary motor area (SMA), anterior cingulate (ACC), and dorsolateral prefrontal cortex (DLPFC), as well as the caudate, implicated in cognitive control of language. However, bilinguals recruited a more localized subcortical network that included the putamen, associated more with motor control of language. These results suggest that experience managing multiple languages may differentiate the learning strategy and subsequent neural mechanisms of cognitive control used by bilinguals compared to monolinguals in the early stages of novel word learning. Copyright © 2012 Elsevier Inc. All rights reserved.

  15. Pronoun forms and courtesy in spoken language in Tunja, Colombia

    Directory of Open Access Journals (Sweden)

    Gloria Avendaño de Barón

    2014-05-01

    Full Text Available This article presents the results of a research project whose aims were the following: to determine the frequency of use of the polite address pronoun forms sumercé, usted and tú among speakers in Tunja, according to differences in gender, age and level of education; to describe the sociodiscursive variations; and to explain the relationship between usage and courtesy. The methodology of the Project for the Sociolinguistic Study of Spanish in Spain and in Latin America (PRESEEA) was used, and a sample of 54 speakers was taken. The results indicate that the most frequently used pronoun in Tunja to express friendliness and affection is sumercé, followed by usted and tú; women and men of different generations and levels of education alternate the use of these three forms in the context of narrative, descriptive, argumentative and explanatory speech.

  16. Porting a spoken language identification system to a new environment.

    CSIR Research Space (South Africa)

    Peche, M

    2008-11-01

    Full Text Available A speech processing system is often required to perform in a different environment than the one for which it was initially developed. In such a case, data from the new environment may be more limited in quantity and of poorer quality than...

  17. Assessing spoken-language educational interpreting: Measuring up ...

    African Journals Online (AJOL)

    Kate H

    [Fragment of the article's assessment rubric: criteria include content (25), conveying interaction between lecturer and students, managing equipment, breath control and volume, intonation and voice quality, use of coping techniques, pronunciation and general clarity, error correction, fluency of the interpreting product (hesitations, silences, etc.), and interpreting competency (15).]

  18. Spoken language identification system adaptation in under-resourced environments

    CSIR Research Space (South Africa)

    Kleynhans, N

    2013-12-01

    Full Text Available The development of Automatic Speech Recognition (ASR) systems in the developing world is severely inhibited. Given that few task-specific corpora exist and speech technology systems perform poorly when deployed in a new environment, we investigate the use of acoustic model adaptation...

  19. Endowing Spoken Language Dialogue System with Emotional Intelligence

    DEFF Research Database (Denmark)

    André, Elisabeth; Rehm, Matthias; Minker, Wolfgang

    2004-01-01

    While most dialogue systems restrict themselves to the adjustment of the propositional contents, our work concentrates on the generation of stylistic variations in order to improve the user’s perception of the interaction. To accomplish this goal, our approach integrates a social theory of politeness with a cognitive theory of emotions. We propose a hierarchical selection process for politeness behaviors in order to enable the refinement of decisions in case additional context information becomes available.
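One way to picture such a hierarchical selection process (a hypothetical sketch, not the authors' implementation; the categories, thresholds and rules are invented) is as a coarse choice of politeness strategy that is refined into a concrete behavior only once additional context, such as an estimate of the user's emotional state, becomes available:

```python
# Hypothetical sketch of hierarchical politeness selection with later refinement.

BROAD = {"high_threat": "negative_politeness", "low_threat": "directness"}

REFINE = {
    ("negative_politeness", "frustrated"): "apologize_and_offer_options",
    ("negative_politeness", "neutral"):    "indirect_request",
    ("directness", "frustrated"):          "soften_with_hedge",
    ("directness", "neutral"):             "plain_request",
}

def select_behavior(face_threat, emotion=None):
    """Pick a broad strategy first; refine it once an emotion estimate is known."""
    broad = BROAD["high_threat" if face_threat > 0.5 else "low_threat"]
    if emotion is None:
        return broad                       # only the coarse decision so far
    return REFINE[(broad, emotion)]        # refined decision with extra context

print(select_behavior(0.8))                # negative_politeness
print(select_behavior(0.8, "frustrated"))  # apologize_and_offer_options
```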

  20. Usable, Real-Time, Interactive Spoken Language Systems

    Science.gov (United States)

    1994-09-01

    Similarly, we included derivations (mostly plurals and possessives) of many open-class words in the domain. We also added about 400 concatenated word… using a system of realization rules, which map the grammatical relation an argument bears to the head onto the semantic relation… syntactic categories as well. Representations of this form contain significantly more internal structure than specialized sublanguage models. This can be…

  1. "Connecting to My Roots": Filipino American Students' Language Experiences in the U.S. and in the Heritage Language Class

    OpenAIRE

    Angeles, Bianca C.

    2015-01-01

    Filipinos are one of the biggest minority populations in California, yet there are limited opportunities to learn the Filipino language in public schools. Further, schools are not able to nurture students’ heritage languages because of increased emphasis on English-only proficiency. The availability of heritage language classes at the university level – while scarce – therefore becomes an important space for Filipino American students to (re)learn and (re)discover their language and identity....

  2. The growth of language: Universal Grammar, experience, and principles of computation.

    Science.gov (United States)

    Yang, Charles; Crain, Stephen; Berwick, Robert C; Chomsky, Noam; Bolhuis, Johan J

    2017-10-01

    Human infants develop language remarkably rapidly and without overt instruction. We argue that the distinctive ontogenesis of child language arises from the interplay of three factors: domain-specific principles of language (Universal Grammar), external experience, and properties of non-linguistic domains of cognition including general learning mechanisms and principles of efficient computation. We review developmental evidence that children make use of hierarchically composed structures ('Merge') from the earliest stages and at all levels of linguistic organization. At the same time, longitudinal trajectories of development show sensitivity to the quantity of specific patterns in the input, which suggests the use of probabilistic processes as well as inductive learning mechanisms that are suitable for the psychological constraints on language acquisition. By considering the place of language in human biology and evolution, we propose an approach that integrates principles from Universal Grammar and constraints from other domains of cognition. We outline some initial results of this approach as well as challenges for future research. Copyright © 2017 Elsevier Ltd. All rights reserved.

  3. Language processing abnormalities in adolescents with psychotic-like experiences: An event related potential study.

    LENUS (Irish Health Repository)

    Murphy, Jennifer

    2012-05-01

    Language impairments are a well established finding in patients with schizophrenia and in individuals at-risk for psychosis. A growing body of research has revealed shared risk factors between individuals with psychotic-like experiences (PLEs) from the general population and patients with schizophrenia. In particular, adolescents with PLEs have been shown to be at an increased risk for later psychosis. However, to date there has been little information published on electrophysiological correlates of language comprehension in this at-risk group. A 64 channel EEG recorded electrical activity while 37 (16 At-Risk; 21 Controls) participants completed the British Picture Vocabulary Scale (BPVS-II) receptive vocabulary task. The P300 component was examined as a function of language comprehension. The at-risk group were impaired behaviourally on receptive language and were characterised by a reduction in P300 amplitude relative to the control group. The results of this study reveal electrophysiological evidence for receptive language deficits in adolescents with PLEs, suggesting that the earliest neurobiological changes underlying psychosis may be apparent in the adolescent period.

  4. Response to dynamic language tasks among typically developing Latino preschool children with bilingual experience.

    Science.gov (United States)

    Patterson, Janet L; Rodríguez, Barbara L; Dale, Philip S

    2013-02-01

    The purpose of this study was to determine whether typically developing preschool children with bilingual experience show evidence of learning within brief dynamic assessment language tasks administered in a graduated prompting framework. Dynamic assessment has shown promise for accurate identification of language impairment in bilingual children, and a graduated prompting approach may be well-suited to screening for language impairment. Three dynamic language tasks with graduated prompting were presented to 32 typically developing 4-year-olds in the language to which the child had the most exposure (16 Spanish, 16 English). The tasks were a novel word learning task, a semantic task, and a phonological awareness task. Children's performance was significantly higher on the last 2 items compared with the first 2 items for the semantic and the novel word learning tasks among children who required a prompt on the 1st item. There was no significant difference between the 1st and last items on the phonological awareness task. Within-task improvements in children's performance for some tasks administered within a brief, graduated prompting framework were observed. Thus, children's responses to graduated prompting may be an indicator of modifiability, depending on the task type and level of difficulty.

  5. MINORITY LANGUAGES IN ESTONIAN SEGREGATIVE LANGUAGE ENVIRONMENTS

    Directory of Open Access Journals (Sweden)

    Elvira Küün

    2011-01-01

    Full Text Available The goal of this project in Estonia was to determine what languages are spoken at home by students from the 2nd to the 5th year of basic school in Tallinn, the capital of Estonia. At the same time, this problem was also studied in other segregated regions of Estonia: Kohtla-Järve and Maardu. According to the database of the population census from the year 2000 (Estonian Statistics Executive Office's census 2000), there are representatives of 142 ethnic groups living in Estonia, speaking a total of 109 native languages. At the same time, the database does not state which languages are spoken at home. The material presented in this article belongs to the research topic “Home Language of Basic School Students in Tallinn” from the years 2007–2008, specifically financed and commissioned by the Estonian Ministry of Education and Research (grant No. ETF 7065) in the framework of an international study called “Multilingual Project”. It was determined which language dominates in everyday use, what the factors for choosing the language of communication are, and what the preferred languages and language skills are. This study reflects the actual trends of the language situation in these cities.

  6. Strangers in Stranger Lands: Language, Learning, Culture

    Directory of Open Access Journals (Sweden)

    Hong Li

    2007-02-01

    Full Text Available This study investigates international students’ perceptions of the issues they face using English as a second language while attending American higher education institutions. In order to fully understand those challenges involved in learning English as a Second Language, it is necessary to know the extent to which international students have mastered the English language before they start their study in America. Most international students experience an overload of English language input upon arrival in the United States. Cultural differences influence international students’ learning of English in other ways, including international students’ isolation within their communities and America’s lack of teaching listening skills to its own students. Other factors also affect international students’ learning of English, such as the many forms of informal English spoken in the USA, as well as a variety of dialects. Moreover, since most international students have learned English in an environment that precluded much contact with spoken English, they often speak English with an accent that reveals their own language. This study offers informed insight into the complicated process of simultaneously learning the language and culture of another country. Readers will find three main voices in addition to the international students who “speak” (in quotation marks) throughout this article. Hong Li, a Chinese doctoral student in English Education at the University of Missouri-Columbia, authored the “regular” text. Second, Roy F. Fox’s voice appears in italics. Fox is Professor of English Education and Chair of the Department of Learning, Teaching, and Curriculum at the University of Missouri-Columbia. Third, Dario J. Almarza’s voice appears in boldface. Almarza, a native of Venezuela, is an Assistant Professor of Social Studies Education at the same institution.

  7. Experience with a second language affects the use of fundamental frequency in speech segmentation

    Science.gov (United States)

    Broersma, Mirjam; Cho, Taehong; Kim, Sahyang; Martínez-García, Maria Teresa; Connell, Katrina

    2017-01-01

    This study investigates whether listeners’ experience with a second language learned later in life affects their use of fundamental frequency (F0) as a cue to word boundaries in the segmentation of an artificial language (AL), particularly when the cues to word boundaries conflict between the first language (L1) and second language (L2). F0 signals phrase-final (and thus word-final) boundaries in French but word-initial boundaries in English. Participants were functionally monolingual French listeners, functionally monolingual English listeners, bilingual L1-English L2-French listeners, and bilingual L1-French L2-English listeners. They completed the AL-segmentation task with F0 signaling word-final boundaries or without prosodic cues to word boundaries (monolingual groups only). After listening to the AL, participants completed a forced-choice word-identification task in which the foils were either non-words or part-words. The results show that the monolingual French listeners, but not the monolingual English listeners, performed better in the presence of F0 cues than in the absence of such cues. Moreover, bilingual status modulated listeners’ use of F0 cues to word-final boundaries, with bilingual French listeners performing less accurately than monolingual French listeners on both word types but with bilingual English listeners performing more accurately than monolingual English listeners on non-words. These findings not only confirm that speech segmentation is modulated by the L1, but also newly demonstrate that listeners’ experience with the L2 (French or English) affects their use of F0 cues in speech segmentation. This suggests that listeners’ use of prosodic cues to word boundaries is adaptive and non-selective, and can change as a function of language experience. PMID:28738093

  8. Learning a generative probabilistic grammar of experience: a process-level model of language acquisition.

    Science.gov (United States)

    Kolodny, Oren; Lotem, Arnon; Edelman, Shimon

    2015-03-01

    We introduce a set of biologically and computationally motivated design choices for modeling the learning of language, or of other types of sequential, hierarchically structured experience and behavior, and describe an implemented system that conforms to these choices and is capable of unsupervised learning from raw natural-language corpora. Given a stream of linguistic input, our model incrementally learns a grammar that captures its statistical patterns, which can then be used to parse or generate new data. The grammar constructed in this manner takes the form of a directed weighted graph, whose nodes are recursively (hierarchically) defined patterns over the elements of the input stream. We evaluated the model in seventeen experiments, grouped into five studies, which examined, respectively, (a) the generative ability of grammar learned from a corpus of natural language, (b) the characteristics of the learned representation, (c) sequence segmentation and chunking, (d) artificial grammar learning, and (e) certain types of structure dependence. The model's performance largely vindicates our design choices, suggesting that progress in modeling language acquisition can be made on a broad front, ranging from issues of generativity to the replication of human experimental findings, by bringing biological and computational considerations, as well as lessons from prior efforts, to bear on the modeling approach. Copyright © 2014 Cognitive Science Society, Inc.
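    As a very rough illustration of the directed weighted graph idea described above (and nothing close to the recursively defined pattern nodes of the actual model), the sketch below builds a transition graph over word tokens from a toy corpus and then generates new sequences by a weighted random walk; the corpus and all names are illustrative, not taken from the paper.

      # Toy sketch: learn a directed weighted graph from a token stream and
      # use it generatively. This drastically simplifies the model described
      # above, whose nodes are recursively defined patterns, not single words.
      import random
      from collections import defaultdict

      corpus = "the dog chased the cat . the cat chased the mouse .".split()

      # Edge weights are transition counts between adjacent tokens.
      graph = defaultdict(lambda: defaultdict(int))
      for prev, cur in zip(corpus, corpus[1:]):
          graph[prev][cur] += 1

      def generate(start="the", max_len=10):
          """Weighted random walk over the learned graph."""
          out = [start]
          for _ in range(max_len):
              successors = graph[out[-1]]
              if not successors:
                  break
              words, weights = zip(*successors.items())
              out.append(random.choices(words, weights=weights)[0])
          return " ".join(out)

      print(generate())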

  9. Vietnamese American Experiences of English Language Learning: Ethnic Acceptance and Prejudice

    Directory of Open Access Journals (Sweden)

    Jeffrey LaBelle

    2007-01-01

    Full Text Available This article investigates the effects of ethnic acceptance and prejudice on English language learning among immigrant nonnative speakers. During 2004 and 2005, the author conducted participatory dialogues among six Vietnamese and Mexican adult immigrant English language learners. The researcher sought to answer five questions: (1) What are some nonnative English speakers’ experiences regarding the way native speakers treat them? (2) How have nonnative English speakers’ experiences of ethnic acceptance or ethnic prejudice affected their learning of English? (3) What do nonnative English speakers think they need in order to lower their anxiety as they learn a new language? (4) What can native English speakers do to lower nonnative speakers’ anxiety? (5) What can nonnative English speakers do to lower their anxiety with native English speakers? Even though many of the adult immigrant participants experienced ethnic prejudice, they developed strategies to overcome anxiety, frustration, and fear. The dialogues generated themes of acceptance, prejudice, power, motivation, belonging, and perseverance, all factors essential to consider when developing English language learning programs for adult immigrants.

  10. When words fail us: insights into language processing from developmental and acquired disorders.

    Science.gov (United States)

    Bishop, Dorothy V M; Nation, Kate; Patterson, Karalyn

    2014-01-01

    Acquired disorders of language represent loss of previously acquired skills, usually with relatively specific impairments. In children with developmental disorders of language, we may also see selective impairment in some skills; but in this case, the acquisition of language or literacy is affected from the outset. Because systems for processing spoken and written language change as they develop, we should beware of drawing too close a parallel between developmental and acquired disorders. Nevertheless, comparisons between the two may yield new insights. A key feature of connectionist models simulating acquired disorders is the interaction of components of language processing with each other and with other cognitive domains. This kind of model might help make sense of patterns of comorbidity in developmental disorders. Meanwhile, the study of developmental disorders emphasizes learning and change in underlying representations, allowing us to study how heterogeneity in cognitive profile may relate not just to neurobiology but also to experience. Children with persistent language difficulties pose challenges both to our efforts at intervention and to theories of learning of written and spoken language. Future attention to learning in individuals with developmental and acquired disorders could be of both theoretical and applied value.

  11. Recording voiceover the spoken word in media

    CERN Document Server

    Blakemore, Tom

    2015-01-01

    The only book on the market to specifically address its audience, Recording Voiceover is the comprehensive guide for engineers looking to understand the aspects of capturing the spoken word. Discussing all phases of the recording session, Recording Voiceover addresses everything from microphone recommendations for voice recording to pre-production considerations, including setting up the studio, working with and directing the voice talent, and strategies for reducing or eliminating distracting noise elements found in human speech. Recording Voiceover features in-depth, specific recommendations…

  12. On the Usability of Spoken Dialogue Systems

    DEFF Research Database (Denmark)

    Larsen, Lars Bo

     This work is centred on the methods and problems associated with defining and measuring the usability of Spoken Dialogue Systems (SDS). The starting point is the fact that speech-based interfaces have several times during the last 20 years fallen short of the high expectations and predictions held … by industry, researchers and analysts. Several studies in the SDS literature indicate that this can be ascribed to a lack of attention from the speech technology community towards the usability of such systems. The experimental results presented in this work are based on a field trial with the OVID home…

  13. The Functional Organisation of the Fronto-Temporal Language System: Evidence from Syntactic and Semantic Ambiguity

    Science.gov (United States)

    Rodd, Jennifer M.; Longe, Olivia A.; Randall, Billi; Tyler, Lorraine K.

    2010-01-01

    Spoken language comprehension is known to involve a large left-dominant network of fronto-temporal brain regions, but there is still little consensus about how the syntactic and semantic aspects of language are processed within this network. In an fMRI study, volunteers heard spoken sentences that contained either syntactic or semantic ambiguities…

  14. Notes from the Field: Lolak--Another Moribund Language of Indonesia, with Supporting Audio

    Science.gov (United States)

    Lobel, Jason William; Paputungan, Ade Tatak

    2017-01-01

    This paper consists of a short multimedia introduction to Lolak, a near-extinct Greater Central Philippine language traditionally spoken in three small communities on the island of Sulawesi in Indonesia. In addition to being one of the most underdocumented languages in the area, it is also spoken by one of the smallest native speaker populations…

  15. The contribution of phonological knowledge, memory, and language background to reading comprehension in deaf populations

    Directory of Open Access Journals (Sweden)

    Elizabeth Ann Hirshorn

    2015-08-01

    Full Text Available While reading is challenging for many deaf individuals, some become proficient readers. Yet we do not know the component processes that support reading comprehension in these individuals. Speech-based phonological knowledge is one of the strongest predictors of reading comprehension in hearing individuals, yet its role in deaf readers is controversial. This could reflect the highly varied language backgrounds among deaf readers as well as the difficulty of disentangling the relative contribution of phonological versus orthographic knowledge of spoken language, in our case ‘English’, in this population. Here we assessed the impact of language experience on reading comprehension in deaf readers by recruiting oral deaf individuals, who use spoken English as their primary mode of communication, and deaf native signers of American Sign Language. First, to address the contribution of spoken English phonological knowledge in deaf readers, we present novel tasks that evaluate phonological versus orthographic knowledge. Second, the impact of this knowledge, as well as verbal short-term memory and long-term memory skills, on reading comprehension was evaluated. The best predictor of reading comprehension differed as a function of language experience, with long-term memory, as measured by free recall, being a better predictor in deaf native signers than in oral deaf. In contrast, the measures of English phonological knowledge, independent of orthographic knowledge, best predicted reading comprehension in oral deaf individuals. These results suggest successful reading strategies differ across deaf readers as a function of their language experience, and highlight a possible alternative route to literacy in deaf native signers.

  16. Teaching English as a "Second Language" in Kenya and the United States: Convergences and Divergences

    Science.gov (United States)

    Roy-Campbell, Zaline M.

    2015-01-01

    English is spoken in five countries as the native language and in numerous other countries as an official language and the language of instruction. In countries where English is the native language, it is taught to speakers of other languages as an additional language to enable them to participate in all domains of life of that country. In many…

  17. Bridging the Gap: The Development of Appropriate Educational Strategies for Minority Language Communities in the Philippines

    Science.gov (United States)

    Dekker, Diane; Young, Catherine

    2005-01-01

    There are more than 6000 languages spoken by the 6 billion people in the world today; however, those languages are not evenly divided among the world's population. Over 90% of people globally speak only about 300 majority languages, and the remaining 5700 languages are termed "minority languages". These languages represent the…

  18. Phonological Sketch of the Sida Language of Luang Namtha, Laos

    Directory of Open Access Journals (Sweden)

    Nathan Badenoch

    2017-07-01

    Full Text Available This paper describes the phonology of the Sida language, a Tibeto-Burman language spoken by approximately 3,900 people in Laos and Vietnam. The data presented here are from the variety spoken in Luang Namtha province of northwestern Laos, and the description focuses on a synchronic account of the fundamentals of the Sida phonological system. Several issues of diachronic interest are also discussed in the context of the diversity of the Southern Loloish group of languages, many of which are spoken in Laos and have not yet been described in detail.

  19. Semantic and phonological schema influence spoken word learning and overnight consolidation.

    Science.gov (United States)

    Havas, Viktória; Taylor, Jsh; Vaquero, Lucía; de Diego-Balaguer, Ruth; Rodríguez-Fornells, Antoni; Davis, Matthew H

    2018-06-01

    We studied the initial acquisition and overnight consolidation of new spoken words that resemble words in the native language (L1) or in an unfamiliar, non-native language (L2). Spanish-speaking participants learned the spoken forms of novel words in their native language (Spanish) or in a different language (Hungarian), which were paired with pictures of familiar or unfamiliar objects, or no picture. We thereby assessed, in a factorial way, the impact of existing knowledge (schema) on word learning by manipulating both semantic (familiar vs unfamiliar objects) and phonological (L1- vs L2-like novel words) familiarity. Participants were trained and tested with a 12-hr intervening period that included overnight sleep or daytime wakefulness. Our results showed (1) benefits of sleep to recognition memory that were greater for words with L2-like phonology and (2) that learned associations with familiar but not unfamiliar pictures enhanced recognition memory for novel words. Implications for complementary systems accounts of word learning are discussed.

  20. Infant sensitivity to speaker and language in learning a second label.

    Science.gov (United States)

    Bhagwat, Jui; Casasola, Marianella

    2014-02-01

    Two experiments examined when monolingual, English-learning 19-month-old infants learn a second object label. Two experimenters sat together. One labeled a novel object with one novel label, whereas the other labeled the same object with a different label in either the same or a different language. Infants were tested on their comprehension of each label immediately following its presentation. Infants mapped the first label at above chance levels, but they did so with the second label only when requested by the speaker who provided it (Experiment 1) or when the second experimenter labeled the object in a different language (Experiment 2). These results show that 19-month-olds learn second object labels but do not readily generalize them across speakers of the same language. The results highlight how speaker and language spoken guide infants' acceptance of second labels, supporting sociopragmatic views of word learning. Copyright © 2013 Elsevier Inc. All rights reserved.

  1. A grammar of Abui : A Papuan language of Alor

    NARCIS (Netherlands)

    Kratochvil, František

    2007-01-01

    This work contains the first comprehensive description of Abui, a language of the Trans New Guinea family spoken by approximately 16,000 speakers in the central part of Alor Island in Eastern Indonesia. The description focuses on the northern dialect of Abui as spoken in the village

  2. What can a geography as dancing body? language-experience 'gesture-movement-affection' (fragments)

    Directory of Open Access Journals (Sweden)

    Antonio Carlos Queiroz Filho

    2016-12-01

    Full Text Available Composed of fragments, this paper proposes to think about the relations and possible repercussions between language and experience from the perspective of some post-structuralist authors. I sought, in a reflection on body and dance, a way to discuss this issue and, at the same time, to make a geography as something that produces affections in us. “What can a Geography as dancing body?” is, beyond a question, an invitation, a proposition: a ballerina geography.

  3. Children and adolescents with migratory experience at risk in language learning and psychosocial adaptation contexts.

    OpenAIRE

    Figueiredo, Sandra; Silva, Carlos Fernandes da; Monteiro, Sara

    2007-01-01

    A compelling body of evidence shows a strong association between psychological, affective and learning variables, related also to age and gender factors, which are involved in the process of language learning development. Children and adolescents with migratory experience (direct/indirect) can develop at-risk behaviours in their academic learning and psychosocial adaptation, owing to several stressors such as anxiety, low motivation and negative attitudes, within a stressed internal l...

  4. Simulation Experiment Description Markup Language (SED-ML) Level 1 Version 3 (L1V3)

    Directory of Open Access Journals (Sweden)

    Bergmann Frank T.

    2018-03-01

    Full Text Available The creation of computational simulation experiments to inform modern biological research poses challenges to reproduce, annotate, archive, and share such experiments. Efforts such as SBML or CellML standardize the formal representation of computational models in various areas of biology. The Simulation Experiment Description Markup Language (SED-ML) describes what procedures the models are subjected to, and the details of those procedures. These standards, together with further COMBINE standards, describe models sufficiently well for the reproduction of simulation studies among users and software tools. The Simulation Experiment Description Markup Language (SED-ML) is an XML-based format that encodes, for a given simulation experiment, (i) which models to use; (ii) which modifications to apply to models before simulation; (iii) which simulation procedures to run on each model; (iv) how to post-process the data; and (v) how these results should be plotted and reported. SED-ML Level 1 Version 1 (L1V1) implemented support for the encoding of basic time course simulations. SED-ML L1V2 added support for more complex types of simulations, specifically repeated tasks and chained simulation procedures. SED-ML L1V3 extends L1V2 by means to describe which datasets and subsets thereof to use within a simulation experiment.

  5. Simulation Experiment Description Markup Language (SED-ML) Level 1 Version 3 (L1V3).

    Science.gov (United States)

    Bergmann, Frank T; Cooper, Jonathan; König, Matthias; Moraru, Ion; Nickerson, David; Le Novère, Nicolas; Olivier, Brett G; Sahle, Sven; Smith, Lucian; Waltemath, Dagmar

    2018-03-19

    The creation of computational simulation experiments to inform modern biological research poses challenges to reproduce, annotate, archive, and share such experiments. Efforts such as SBML or CellML standardize the formal representation of computational models in various areas of biology. The Simulation Experiment Description Markup Language (SED-ML) describes what procedures the models are subjected to, and the details of those procedures. These standards, together with further COMBINE standards, describe models sufficiently well for the reproduction of simulation studies among users and software tools. The Simulation Experiment Description Markup Language (SED-ML) is an XML-based format that encodes, for a given simulation experiment, (i) which models to use; (ii) which modifications to apply to models before simulation; (iii) which simulation procedures to run on each model; (iv) how to post-process the data; and (v) how these results should be plotted and reported. SED-ML Level 1 Version 1 (L1V1) implemented support for the encoding of basic time course simulations. SED-ML L1V2 added support for more complex types of simulations, specifically repeated tasks and chained simulation procedures. SED-ML L1V3 extends L1V2 by means to describe which datasets and subsets thereof to use within a simulation experiment.
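    Since the five-part structure above is what a SED-ML file actually encodes, a minimal skeleton can be sketched with the Python standard library. This is only a schematic: the element and attribute names follow the SED-ML specification as commonly documented, but they are assumptions here, and any real document should be validated against the official schema (model changes, the simulation algorithm, and the data-generator math are omitted for brevity).

      # Schematic sketch of the five-part SED-ML structure described above.
      import xml.etree.ElementTree as ET

      NS = "http://sed-ml.org/sed-ml/level1/version3"  # assumed L1V3 namespace
      ET.register_namespace("", NS)

      root = ET.Element(f"{{{NS}}}sedML", {"level": "1", "version": "3"})

      # (i) which models to use (an SBML model in this example)
      models = ET.SubElement(root, f"{{{NS}}}listOfModels")
      ET.SubElement(models, f"{{{NS}}}model",
                    {"id": "model1", "language": "urn:sedml:language:sbml",
                     "source": "model.xml"})
      # (ii) modifications would be nested under the model as change elements.

      # (iii) which simulation procedures to run on each model
      sims = ET.SubElement(root, f"{{{NS}}}listOfSimulations")
      ET.SubElement(sims, f"{{{NS}}}uniformTimeCourse",
                    {"id": "sim1", "initialTime": "0", "outputStartTime": "0",
                     "outputEndTime": "100", "numberOfPoints": "1000"})
      tasks = ET.SubElement(root, f"{{{NS}}}listOfTasks")
      ET.SubElement(tasks, f"{{{NS}}}task",
                    {"id": "task1", "modelReference": "model1",
                     "simulationReference": "sim1"})

      # (iv) how to post-process the data
      gens = ET.SubElement(root, f"{{{NS}}}listOfDataGenerators")
      ET.SubElement(gens, f"{{{NS}}}dataGenerator", {"id": "dg1", "name": "time"})

      # (v) how the results should be plotted and reported
      outputs = ET.SubElement(root, f"{{{NS}}}listOfOutputs")
      ET.SubElement(outputs, f"{{{NS}}}plot2D", {"id": "plot1"})

      ET.ElementTree(root).write("experiment.sedml", encoding="UTF-8",
                                 xml_declaration=True)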

  6. Speaking two languages with different number naming systems: What implications for magnitude judgments in bilinguals at different stages of language acquisition?

    Science.gov (United States)

    Van Rinsveld, Amandine; Schiltz, Christine; Landerl, Karin; Brunner, Martin; Ugen, Sonja

    2016-08-01

    Differences between languages in terms of number naming systems may lead to performance differences in number processing. The current study focused on differences concerning the order of decades and units in two-digit number words (i.e., unit-decade order in German but decade-unit order in French) and how they affect number magnitude judgments. Participants performed basic numerical tasks, namely two-digit number magnitude judgments, and we used the compatibility effect (Nuerk et al. in Cognition 82(1):B25-B33, 2001) as a hallmark of language influence on numbers. In the first part we aimed to understand the influence of language on compatibility effects in adults coming from German or French monolingual and German-French bilingual groups (Experiment 1). The second part examined how this language influence develops at different stages of language acquisition in individuals with increasing bilingual proficiency (Experiment 2). Language systematically influenced magnitude judgments such that: (a) The spoken language(s) modulated magnitude judgments presented as Arabic digits, and (b) bilinguals' progressive language mastery impacted magnitude judgments presented as number words. Taken together, the current results suggest that the order of decades and units in verbal numbers may qualitatively influence magnitude judgments in bilinguals and monolinguals, providing new insights into how number processing can be influenced by language(s).

  7. Bilinguals' Plausibility Judgments for Phrases with a Literal vs. Non-literal Meaning: The Influence of Language Brokering Experience

    Directory of Open Access Journals (Sweden)

    Belem G. López

    2017-09-01

    Full Text Available Previous work has shown that prior experience in language brokering (informal translation) may facilitate the processing of meaning within and across language boundaries. The present investigation examined the influence of brokering on bilinguals' processing of two-word collocations with either a literal or a figurative meaning in each language. Proficient Spanish-English bilinguals classified as brokers or non-brokers were asked to judge whether adjective+noun phrases presented in each language made sense or not. Phrases with a literal meaning (e.g., stinging insect) were interspersed with phrases with a figurative meaning (e.g., stinging insult) and non-sensical phrases (e.g., stinging picnic). It was hypothesized that plausibility judgments would be facilitated for literal relative to figurative meanings in each language but that experience in language brokering would be associated with a more equivalent pattern of responding across languages. These predictions were confirmed. The findings add to the body of empirical work on individual differences in language processing in bilinguals associated with prior language brokering experience.

  8. Beyond Languages, beyond Modalities: Transforming the Study of Semiotic Repertoires

    Science.gov (United States)

    Kusters, Annelies; Spotti, Massimiliano; Swanwick, Ruth; Tapio, Elina

    2017-01-01

    This paper presents a critical examination of key concepts in the study of (signed and spoken) language and multimodality. It shows how shifts in conceptual understandings of language use, moving from bilingualism to multilingualism and (trans)languaging, have resulted in the revitalisation of the concept of language repertoires. We discuss key…

  9. Interference of spoken word recognition through phonological priming from visual objects and printed words.

    Science.gov (United States)

    McQueen, James M; Huettig, Falk

    2014-01-01

    Three cross-modal priming experiments examined the influence of preexposure to pictures and printed words on the speed of spoken word recognition. Targets for auditory lexical decision were spoken Dutch words and nonwords, presented in isolation (Experiments 1 and 2) or after a short phrase (Experiment 3). Auditory stimuli were preceded by primes, which were pictures (Experiments 1 and 3) or those pictures' printed names (Experiment 2). Prime-target pairs were phonologically onset related (e.g., pijl-pijn, arrow-pain), were from the same semantic category (e.g., pijl-zwaard, arrow-sword), or were unrelated on both dimensions. Phonological interference and semantic facilitation were observed in all experiments. Priming magnitude was similar for pictures and printed words and did not vary with picture viewing time or number of pictures in the display (either one or four). These effects arose even though participants were not explicitly instructed to name the pictures, and in a task where strategic naming would have interfered with lexical decision making. This suggests that, by default, processing of related pictures and printed words influences how quickly we recognize spoken words.

  10. Teaching and Learning Sign Language as a “Foreign” Language ...

    African Journals Online (AJOL)

    In recent years, there has been a growing debate in the United States, Europe, and Australia about the nature of the Deaf community as a cultural community, and the recognition of signed languages as “real” or “legitimate” languages comparable in all meaningful ways to spoken languages. An important element of this ...

  11. The Impact of Biculturalism on Language and Literacy Development: Teaching Chinese English Language Learners

    Science.gov (United States)

    Palmer, Barbara C.; Chen, Chia-I; Chang, Sara; Leclere, Judith T.

    2006-01-01

    According to the 2000 United States Census, Americans age five and older who speak a language other than English at home grew 47 percent over the preceding decade. This group accounts for slightly less than one in five Americans (17.9%). Among the minority languages spoken in the United States, Asian-language speakers, including Chinese and other…

  12. Spoken Grammar: Where Are We and Where Are We Going?

    Science.gov (United States)

    Carter, Ronald; McCarthy, Michael

    2017-01-01

    This article synthesises progress made in the description of spoken (especially conversational) grammar over the 20 years since the authors published a paper in this journal arguing for a re-thinking of grammatical description and pedagogy based on spoken corpus evidence. We begin with a glance back at the 16th century and the teaching of Latin…

  13. Attention to spoken word planning: Chronometric and neuroimaging evidence

    NARCIS (Netherlands)

    Roelofs, A.P.A.

    2008-01-01

    This article reviews chronometric and neuroimaging evidence on attention to spoken word planning, using the WEAVER++ model as theoretical framework. First, chronometric studies on the time to initiate vocal responding and gaze shifting suggest that spoken word planning may require some attention,

  14. Self-Assessment of Japanese as a Second Language: The Role of Experiences in the Naturalistic Acquisition

    Science.gov (United States)

    Suzuki, Yuichi

    2015-01-01

    Self-assessment has been used to assess second language proficiency; however, as sources of measurement errors vary, they may threaten the validity and reliability of the tools. The present paper investigated the role of experiences in using Japanese as a second language in the naturalistic acquisition context on the accuracy of the…

  15. Teaching natural language to computers

    OpenAIRE

    Corneli, Joseph; Corneli, Miriam

    2016-01-01

    "Natural Language," whether spoken and attended to by humans, or processed and generated by computers, requires networked structures that reflect creative processes in semantic, syntactic, phonetic, linguistic, social, emotional, and cultural modules. Being able to produce novel and useful behavior following repeated practice gets to the root of both artificial intelligence and human language. This paper investigates the modalities involved in language-like applications that computers -- and ...

  16. Individual classroom experiences: a sociocultural comparison for understanding EFL classroom language learning

    Directory of Open Access Journals (Sweden)

    Laura Miccoli

    2008-04-01

    Full Text Available This paper compares the classroom experiences (CEs) of two university students in their process of learning English as a foreign language (EFL). The CEs emerged from individual interviews, where classroom videos promoted reflection. The analysis revealed that cognitive, social and affective experiences directly influence the learning process, and that those which refer to setting and to the learners' personal backgrounds, beliefs and goals influence it indirectly. The analysis also revealed the singularity of some of these CEs, which led to their categorization as individual CEs (ICEs). When comparing the ICEs of the two participants, the importance of a sociocultural analysis of the classroom learning process becomes evident. We conclude with a discussion of the value of sociocultural theory in the study of classroom EFL learning and with the implications of this study for teachers and researchers.

  17. Functional connectivity in task-negative network of the Deaf: effects of sign language experience

    Directory of Open Access Journals (Sweden)

    Evie Malaia

    2014-06-01

    Full Text Available Prior studies investigating cortical processing in Deaf signers suggest that life-long experience with sign language and/or auditory deprivation may alter the brain’s anatomical structure and the function of brain regions typically recruited for auditory processing (Emmorey et al., 2010; Pénicaud et al., 2013, inter alia). We report the first investigation of the task-negative network in Deaf signers and its functional connectivity—the temporal correlations among spatially remote neurophysiological events. We show that Deaf signers manifest increased functional connectivity between posterior cingulate/precuneus and left medial temporal gyrus (MTG), but also between inferior parietal lobe and medial temporal gyrus in the right hemisphere, areas that have been found to show functional recruitment specifically during sign language processing. These findings suggest that the organization of the brain at the level of inter-network connectivity is likely affected by experience with processing visual language, although sensory deprivation could be another source of the difference. We hypothesize that connectivity alterations in the task-negative network reflect predictive/automatized processing of the visual signal.

  18. I Feel You: The Design and Evaluation of a Domotic Affect-Sensitive Spoken Conversational Agent

    Directory of Open Access Journals (Sweden)

    Juan Manuel Montero

    2013-08-01

    Full Text Available We describe the work on infusion of emotion into a limited-task autonomous spoken conversational agent situated in the domestic environment, using a need-inspired, task-independent emotion model (NEMO). In order to demonstrate the generation of affect through the use of the model, we describe the work of integrating it with a natural-language, mixed-initiative HiFi-control spoken conversational agent (SCA). NEMO and the host system communicate externally, removing the need for the Dialog Manager to be modified in order to be adaptive, as is done in most existing dialog systems. The first part of the paper concerns the integration between NEMO and the host agent. The second part summarizes the work on automatic affect prediction, namely frustration and contentment, from dialog features, a non-conventional source, in an attempt to move towards a more user-centric approach. The final part reports the evaluation results obtained from a user study in which both versions of the agent (non-adaptive and emotionally adaptive) were compared. The results provide substantial evidence of the benefits of adding emotion to a spoken conversational agent, especially in mitigating users’ frustration and, ultimately, improving their satisfaction.

  19. The socially weighted encoding of spoken words: a dual-route approach to speech perception.

    Science.gov (United States)

    Sumner, Meghan; Kim, Seung Kyung; King, Ed; McGowan, Kevin B

    2013-01-01

    Spoken words are highly variable. A single word may never be uttered the same way twice. As listeners, we regularly encounter speakers of different ages, genders, and accents, increasing the amount of variation we face. How listeners understand spoken words as quickly and adeptly as they do despite this variation remains an issue central to linguistic theory. We propose that learned acoustic patterns are mapped simultaneously to linguistic representations and to social representations. In doing so, we illuminate a paradox that results in the literature from, we argue, the focus on representations and the peripheral treatment of word-level phonetic variation. We consider phonetic variation more fully and highlight a growing body of work that is problematic for current theory: words with different pronunciation variants are recognized equally well in immediate processing tasks, while an atypical, infrequent, but socially idealized form is remembered better in the long-term. We suggest that the perception of spoken words is socially weighted, resulting in sparse, but high-resolution clusters of socially idealized episodes that are robust in immediate processing and are more strongly encoded, predicting memory inequality. Our proposal includes a dual-route approach to speech perception in which listeners map acoustic patterns in speech to linguistic and social representations in tandem. This approach makes novel predictions about the extraction of information from the speech signal, and provides a framework with which we can ask new questions. We propose that language comprehension, broadly, results from the integration of both linguistic and social information.

  20. Use of the PASKAL' language for programming in experiment automation systems

    International Nuclear Information System (INIS)

    Ostrovnoj, A.I.

    1985-01-01

    A complex of standard solutions is suggested for realizing the main functions that any experiment automation system must provide. These include: recording and accumulation of experimental data; visualization and preliminary processing of incoming data; interaction with the operator and system control; and data filing. It is advisable to use standard software, to represent data-processing algorithms as parallel processes, and to use the PASCAL language for programming. Programming with CAMAC equipment is supported by a complex of procedures similar to the set of subprograms in the FORTRAN language. Use of a simple, uniform data file in the accumulation and processing programs ensures a unified representation of experimental data and uniform access to them for a large number of programs operating in both on-line and off-line regimes. The suggested approach is realized in developing systems based on the SM-3, SM-4 and MERA-60 computers with the RAFOS operating system
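    A present-day schematic of the architecture described above, with acquisition and processing running as parallel processes that exchange events in one uniform record format, might look as follows. This is emphatically not the original PASCAL/CAMAC software; the record fields and all names are illustrative assumptions.

      # Schematic only: parallel acquisition and processing sharing one
      # uniform event-record format, so on-line and off-line programs can
      # use the same data representation. Not the original PASCAL/CAMAC code.
      import multiprocessing as mp
      import random
      import time

      def acquire(queue, n_events):
          """Stand-in for hardware readout: emit uniform event records."""
          for i in range(n_events):
              queue.put({"id": i, "timestamp": time.time(),
                         "adc": [random.randint(0, 4095) for _ in range(4)]})
          queue.put(None)  # end-of-run marker

      def process(queue):
          """Preliminary processing/accumulation on the same record format."""
          total = 0
          while (event := queue.get()) is not None:
              total += sum(event["adc"])
          print("accumulated ADC sum:", total)

      if __name__ == "__main__":
          q = mp.Queue()
          producer = mp.Process(target=acquire, args=(q, 5))
          consumer = mp.Process(target=process, args=(q,))
          producer.start(); consumer.start()
          producer.join(); consumer.join()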

  1. Speaker Input Variability Does Not Explain Why Larger Populations Have Simpler Languages.

    Science.gov (United States)

    Atkinson, Mark; Kirby, Simon; Smith, Kenny

    2015-01-01

    A learner's linguistic input is more variable if it comes from a greater number of speakers. Higher speaker input variability has been shown to facilitate the acquisition of phonemic boundaries, since data drawn from multiple speakers provides more information about the distribution of phonemes in a speech community. It has also been proposed that speaker input variability may have a systematic influence on individual-level learning of morphology, which can in turn influence the group-level characteristics of a language. Languages spoken by larger groups of people have less complex morphology than those spoken in smaller communities. While a mechanism by which the number of speakers could have such an effect is yet to be convincingly identified, differences in speaker input variability, which is thought to be larger in larger groups, may provide an explanation. By hindering the acquisition, and hence faithful cross-generational transfer, of complex morphology, higher speaker input variability may result in structural simplification. We assess this claim in two experiments which investigate the effect of such variability on language learning, considering its influence on a learner's ability to segment a continuous speech stream and acquire a morphologically complex miniature language. We ultimately find no evidence to support the proposal that speaker input variability influences language learning and so cannot support the hypothesis that it explains how population size determines the structural properties of language.

  2. The abstract geometry modeling language (AgML): experience and road map toward eRHIC

    International Nuclear Information System (INIS)

    Webb, Jason; Lauret, Jerome; Perevoztchikov, Victor

    2014-01-01

    The STAR experiment has adopted an Abstract Geometry Modeling Language (AgML) as the primary description of our geometry model. AgML establishes a level of abstraction, decoupling the definition of the detector from the software libraries used to create the concrete geometry model. Thus, AgML allows us to support both our legacy GEANT 3 simulation application and our ROOT/TGeo-based reconstruction software from a single source, which is demonstrably self-consistent. While AgML was developed primarily as a tool to migrate away from our legacy FORTRAN-era geometry codes, it also provides a rich syntax geared towards the rapid development of detector models. AgML has been successfully employed by users to quickly develop and integrate the descriptions of several new detectors in the RHIC/STAR experiment, including the Forward GEM Tracker (FGT) and Heavy Flavor Tracker (HFT) upgrades installed in STAR for the 2012 and 2013 runs. AgML has furthermore been heavily utilized to study future upgrades to the STAR detector as it prepares for the eRHIC era. With its track record of practical use in a live experiment in mind, we present the status, lessons learned and future of the AgML language, as well as our experience in bringing the code into our production and development environments. We will discuss the path toward eRHIC and the push to extend the current model to accommodate detector misalignment and high-precision physics.

  3. Improving Language Models in Speech-Based Human-Machine Interaction

    Directory of Open Access Journals (Sweden)

    Raquel Justo

    2013-02-01

    Full Text Available This work focuses on speech-based human-machine interaction. Specifically, a Spoken Dialogue System (SDS) that could be integrated into a robot is considered. Since Automatic Speech Recognition is one of the most sensitive tasks that must be confronted in such systems, the goal of this work is to improve the results obtained by this specific module. To do so, a hierarchical Language Model (LM) is considered. Different series of experiments were carried out using the proposed models over different corpora and tasks. The results obtained show that these models provide greater accuracy in the recognition task. Additionally, the influence of the Acoustic Modelling (AM) on the improvement achieved by the Language Models has also been explored. Finally, hierarchical Language Models have been successfully employed in a language understanding task, as shown in an additional series of experiments.
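    The abstract does not reproduce the hierarchical models themselves, but the role a language model plays in a speech recognizer can be illustrated with a deliberately simple word-bigram model; the toy corpus, add-one smoothing, and scoring function below are generic illustrative choices, not the authors' setup.

      # A minimal word-bigram language model with add-one smoothing, shown
      # only to illustrate what an LM contributes to recognition: it scores
      # word sequences so the decoder can prefer plausible hypotheses.
      import math
      from collections import Counter, defaultdict

      corpus = [
          "turn on the kitchen light",
          "turn off the light",
          "play some music in the kitchen",
      ]

      unigrams, bigrams = Counter(), defaultdict(Counter)
      for sentence in corpus:
          words = ["<s>"] + sentence.split() + ["</s>"]
          unigrams.update(words)
          for prev, cur in zip(words, words[1:]):
              bigrams[prev][cur] += 1

      vocab_size = len(unigrams)

      def log_prob(sentence):
          """Add-one smoothed log-probability of a word sequence."""
          words = ["<s>"] + sentence.split() + ["</s>"]
          total = 0.0
          for prev, cur in zip(words, words[1:]):
              num = bigrams[prev][cur] + 1
              den = sum(bigrams[prev].values()) + vocab_size
              total += math.log(num / den)
          return total

      # A recognizer would prefer the hypothesis the LM scores higher:
      print(log_prob("turn on the light"), log_prob("light the on turn"))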

  4. Looking at anything that is green when hearing ‘frog’: How object surface colour and stored object colour knowledge influence language-mediated overt attention

    OpenAIRE

    Huettig, F.; Altmann, G.

    2011-01-01

    Three eye-tracking experiments investigated the influence of stored colour knowledge, perceived surface colour, and conceptual category of visual objects on language-mediated overt attention. Participants heard spoken target words whose concepts are associated with a diagnostic colour (e.g., "spinach"; spinach is typically green) while their eye movements were monitored to (a) objects associated with a diagnostic colour but presented in black and white (e.g., a black-and-white line drawing of...

  5. The Peculiarities of the Adverbs Functioning of the Dialect Spoken in the v. Shevchenkove, Kiliya district, Odessa Region

    Directory of Open Access Journals (Sweden)

    Maryna Delyusto

    2013-08-01

    Full Text Available The article gives new evidence about the adverb as a part of the grammatical system of the Ukrainian steppe dialect spread in the area between the Danube and the Dniester rivers. The author proves that the grammatical system of the dialect spoken in the v. Shevchenkove, Kiliya district, Odessa region is determined by the historical development of the Ukrainian language rather than the influence of neighboring dialects.

  6. language choice, code-switching and code- mixing in biase

    African Journals Online (AJOL)

    Ada

    Finance and Economic Planning, Cross River and Akwa ... See Table 1. Table 1: Indigenous Languages Spoken in Biase ... used in education, in business, in religion, in the media ... far back as the seventeenth (17th) century (King. 1844).

  7. Simulation Experiment Description Markup Language (SED-ML) Level 1 Version 2.

    Science.gov (United States)

    Bergmann, Frank T; Cooper, Jonathan; Le Novère, Nicolas; Nickerson, David; Waltemath, Dagmar

    2015-09-04

    The number, size and complexity of computational models of biological systems are growing at an ever increasing pace. It is imperative to build on existing studies by reusing and adapting existing models and parts thereof. The description of the structure of models is not sufficient to enable the reproduction of simulation results. One also needs to describe the procedures the models are subjected to, as recommended by the Minimum Information About a Simulation Experiment (MIASE) guidelines. This document presents Level 1 Version 2 of the Simulation Experiment Description Markup Language (SED-ML), a computer-readable format for encoding simulation and analysis experiments to apply to computational models. SED-ML files are encoded in the Extensible Markup Language (XML) and can be used in conjunction with any XML-based model encoding format, such as CellML or SBML. A SED-ML file includes details of which models to use, how to modify them prior to executing a simulation, which simulation and analysis procedures to apply, which results to extract and how to present them. Level 1 Version 2 extends the format by allowing the encoding of repeated and chained procedures.
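    The repeated and chained procedures that Version 2 introduces are expressed through a repeated-task construct that re-runs a referenced task over a range of values. The self-contained sketch below illustrates that construct; element and attribute names are assumptions based on the specification and should be checked against the official schema, and the referenced task "task1" is hypothetical.

      # Minimal sketch of the SED-ML L1V2 repeated-task construct; names are
      # assumed from the specification, and "task1" is a hypothetical
      # reference to a basic task defined elsewhere in the document.
      import xml.etree.ElementTree as ET

      NS = "http://sed-ml.org/sed-ml/level1/version2"  # assumed L1V2 namespace
      sed = ET.Element(f"{{{NS}}}sedML", {"level": "1", "version": "2"})
      tasks = ET.SubElement(sed, f"{{{NS}}}listOfTasks")

      # A repeatedTask re-runs its sub-task once per value in the range r1.
      rep = ET.SubElement(tasks, f"{{{NS}}}repeatedTask",
                          {"id": "repeat1", "range": "r1", "resetModel": "true"})
      ranges = ET.SubElement(rep, f"{{{NS}}}listOfRanges")
      ET.SubElement(ranges, f"{{{NS}}}uniformRange",
                    {"id": "r1", "start": "0", "end": "10",
                     "numberOfPoints": "10", "type": "linear"})
      subs = ET.SubElement(rep, f"{{{NS}}}listOfSubTasks")
      ET.SubElement(subs, f"{{{NS}}}subTask", {"task": "task1", "order": "1"})

      print(ET.tostring(sed, encoding="unicode"))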

  8. Effects of Music and Tonal Language Experience on Relative Pitch Performance.

    Science.gov (United States)

    Ngo, Mary Kim; Vu, Kim-Phuong L; Strybel, Thomas Z

    2016-01-01

    We examined the interaction between music and tone language experience as related to relative pitch processing by having participants judge the direction and magnitude of pitch changes in a relative pitch task. Participants' performance on this relative pitch task was assessed using the Cochran-Weiss-Shanteau (CWS) index of expertise, based on a ratio of discrimination over consistency in participants' relative pitch judgments. Testing took place in 2 separate sessions on different days to assess the effects of practice on participants' performance. Participants also completed the Montreal Battery of Evaluation of Amusia (MBEA), an existing measure comprising subtests aimed at evaluating relative pitch processing abilities. Musicians outperformed nonmusicians on both the relative pitch task, as measured by the CWS index, and the MBEA, but tonal language speakers outperformed non-tonal language speakers only on the MBEA. A closer look at the discrimination and consistency component scores of the CWS index revealed that musicians were better at discriminating different pitches and more consistent in their assessments of the direction and magnitude of relative pitch change.

  9. The language of football

    DEFF Research Database (Denmark)

    Rossing, Niels Nygaard; Skrubbeltrang, Lotte Stausgaard

    2014-01-01

    The language of football: A cultural analysis of selected World Cup nations. This essay describes how actions on the football field relate to the nations’ different cultural understanding of football and how these actions become spoken dialects within a language of football. Saussure reasoned … language to have two components: a language system and language users (Danesi, 2003). Consequently, football can be characterized as a language containing a system with specific rules of the game and users with actual choices and actions within the game. All football players can be considered language … levels (Schein, 2004), in which each player and his actions can be considered an artefact, a concrete symbol in motion embedded in espoused values and basic assumptions. Therefore, the actions of each dialect are strongly connected to the underlying understanding of football. By document and video…

  10. Discourse context and the recognition of reduced and canonical spoken words

    OpenAIRE

    Brouwer, S.; Mitterer, H.; Huettig, F.

    2013-01-01

    In two eye-tracking experiments we examined whether wider discourse information helps the recognition of reduced pronunciations (e.g., 'puter') more than the recognition of canonical pronunciations of spoken words (e.g., 'computer'). Dutch participants listened to sentences from a casual speech corpus containing canonical and reduced target words. Target word recognition was assessed by measuring eye fixation proportions to four printed words on a visual display: the target, a "reduced form" ...

  11. Suprasegmental lexical stress cues in visual speech can guide spoken-word recognition

    OpenAIRE

    Jesse, A.; McQueen, J.

    2014-01-01

    Visual cues to the individual segments of speech and to sentence prosody guide speech recognition. The present study tested whether visual suprasegmental cues to the stress patterns of words can also constrain recognition. Dutch listeners use acoustic suprasegmental cues to lexical stress (changes in duration, amplitude, and pitch) in spoken-word recognition. We asked here whether they can also use visual suprasegmental cues. In two categorization experiments, Dutch participants saw a speaker...

  12. The Road to Language Learning Is Not Entirely Iconic: Iconicity, Neighborhood Density, and Frequency Facilitate Acquisition of Sign Language.

    Science.gov (United States)

    Caselli, Naomi K; Pyers, Jennie E

    2017-07-01

    Iconic mappings between words and their meanings are far more prevalent than once estimated and seem to support children's acquisition of new words, spoken or signed. We asked whether iconicity's prevalence in sign language overshadows two other factors known to support the acquisition of spoken vocabulary: neighborhood density (the number of lexical items phonologically similar to the target) and lexical frequency. Using mixed-effects logistic regressions, we reanalyzed 58 parental reports of native-signing deaf children's productive acquisition of 332 signs in American Sign Language (ASL; Anderson & Reilly, 2002) and found that iconicity, neighborhood density, and lexical frequency independently facilitated vocabulary acquisition. Despite differences in iconicity and phonological structure between signed and spoken language, signing children, like children learning a spoken language, track statistical information about lexical items and their phonological properties and leverage this information to expand their vocabulary.

  13. "We communicated that way for a reason": language practices and language ideologies among hearing adults whose parents are deaf.

    Science.gov (United States)

    Pizer, Ginger; Walters, Keith; Meier, Richard P

    2013-01-01

    Families with deaf parents and hearing children are often bilingual and bimodal, with both a spoken language and a signed one in regular use among family members. When interviewed, 13 American hearing adults with deaf parents reported widely varying language practices, sign language abilities, and social affiliations with Deaf and Hearing communities. Despite this variation, the interviewees' moral judgments of their own and others' communicative behavior suggest that these adults share a language ideology concerning the obligation of all family members to expend effort to overcome potential communication barriers. To our knowledge, such a language ideology is not similarly pervasive among spoken-language bilingual families, raising the question of whether there is something unique about family bimodal bilingualism that imposes different rights and responsibilities on family members than spoken-language family bilingualism does. This ideology unites an otherwise diverse group of interviewees, where each one preemptively denied being a "typical CODA [children of deaf adult]."

  14. Comparison of Word Intelligibility in Spoken and Sung Phrases

    Directory of Open Access Journals (Sweden)

    Lauren B. Collister

    2008-09-01

    Full Text Available Twenty listeners were exposed to spoken and sung passages in English produced by three trained vocalists. Passages included representative words extracted from a large database of vocal lyrics, including both popular and classical repertoires. Target words were set within spoken or sung carrier phrases. Sung carrier phrases were selected from classical vocal melodies. Roughly a quarter of all words sung by an unaccompanied soloist were misheard. Sung passages showed a seven-fold decrease in intelligibility compared with their spoken counterparts. The perceptual mistakes occurring with vowels replicate previous studies showing the centralization of vowels. Significant confusions are also evident for consonants, especially voiced stops and nasals.

  15. The time course of spoken word recognition in Mandarin Chinese: a unimodal ERP study.

    Science.gov (United States)

    Huang, Xianjun; Yang, Jin-Chen; Zhang, Qin; Guo, Chunyan

    2014-10-01

    In the present study, two experiments were carried out to investigate the time course of spoken word recognition in Mandarin Chinese using both event-related potentials (ERPs) and behavioral measures. To address the hypothesis that there is an early phonological processing stage independent of semantics during spoken word recognition, a unimodal word-matching paradigm was employed, in which both prime and target words were presented auditorily. Experiment 1 manipulated the phonological relations between disyllabic primes and targets, and found an enhanced P2 (200-270 ms post-target onset) as well as a smaller early N400 to word-initial phonological mismatches over fronto-central scalp sites. Experiment 2 manipulated both phonological and semantic relations between monosyllabic primes and targets, and replicated the phonological mismatch-associated P2, which was not modulated by semantic relations. Overall, these results suggest that P2 is a sensitive electrophysiological index of early phonological processing independent of semantics in Mandarin Chinese spoken word recognition. Copyright © 2014 Elsevier Ltd. All rights reserved.

  16. Germanic heritage languages in North America: Acquisition, attrition and change

    OpenAIRE

    Johannessen, Janne Bondi; Salmons, Joseph C.; Westergaard, Marit; Anderssen, Merete; Arnbjörnsdóttir, Birna; Allen, Brent; Pierce, Marc; Boas, Hans C.; Roesch, Karen; Brown, Joshua R.; Putnam, Michael; Åfarli, Tor A.; Newman, Zelda Kahan; Annear, Lucas; Speth, Kristin

    2015-01-01

    This book presents new empirical findings about Germanic heritage varieties spoken in North America: Dutch, German, Pennsylvania Dutch, Icelandic, Norwegian, Swedish, West Frisian and Yiddish, and varieties of English spoken both by heritage speakers and in communities after language shift. The volume focuses on three critical issues underlying the notion of ‘heritage language’: acquisition, attrition and change. The book offers theoretically-informed discussions of heritage language processe...

  17. Moving conceptualizations of language and literacy in SLA

    DEFF Research Database (Denmark)

    Laursen, Helle Pia

    … in various technological environments, we see an increase in scholarship that highlights the mixing and chaining of spoken, written and visual modalities and how written and visual modes often precede or overrule spoken language. There seems to be a mismatch between current-day language practices … in language education and in language practices. As a consequence of this, and in the light of the increasing mobility and linguistic diversity in Europe, in this colloquium we address the need for a (re)conceptualization of the relation between language and literacy. Drawing on data from different settings…

  18. Use of Spoken and Written Japanese Did Not Protect Japanese-American Men From Cognitive Decline in Late Life

    Science.gov (United States)

    Gruhl, Jonathan C.; Erosheva, Elena A.; Gibbons, Laura E.; McCurry, Susan M.; Rhoads, Kristoffer; Nguyen, Viet; Arani, Keerthi; Masaki, Kamal; White, Lon

    2010-01-01

    Objectives. Spoken bilingualism may be associated with cognitive reserve. Mastering a complicated written language may be associated with additional reserve. We sought to determine if midlife use of spoken and written Japanese was associated with lower rates of late life cognitive decline. Methods. Participants were second-generation Japanese-American men from the Hawaiian island of Oahu, born 1900–1919, free of dementia in 1991, and categorized based on midlife self-reported use of spoken and written Japanese (total n included in primary analysis = 2,520). Cognitive functioning was measured with the Cognitive Abilities Screening Instrument scored using item response theory. We used mixed effects models, controlling for age, income, education, smoking status, apolipoprotein E e4 alleles, and number of study visits. Results. Rates of cognitive decline were not related to use of spoken or written Japanese. This finding was consistent across numerous sensitivity analyses. Discussion. We did not find evidence to support the hypothesis that multilingualism is associated with cognitive reserve. PMID:20639282

  19. The foreign language effect on the self-serving bias: A field experiment in the high school classroom.

    Science.gov (United States)

    van Hugten, Joeri; van Witteloostuijn, Arjen

    2018-01-01

    The rise of bilingual education triggers an important question: which language is preferred for a particular school activity? Our field experiment (n = 120) shows that students (aged 13-15) who process feedback in non-native English have greater self-serving bias than students who process feedback in their native Dutch. By contrast, literature on the foreign-language emotionality effect suggests a weaker self-serving bias in the non-native language, so our result adds nuance to that literature. The result is important to schools as it suggests that teachers may be able to reduce students' defensiveness and demotivation by communicating negative feedback in the native language, and teachers may be able to increase students' confidence and motivation by communicating positive feedback in the foreign language.

  20. Speech, gesture and the origins of language

    NARCIS (Netherlands)

    Levelt, W.J.M.

    2004-01-01

    During the second half of the 19th century, the psychology of language was invented as a discipline for the sole purpose of explaining the evolution of spoken language. These efforts culminated in Wilhelm Wundt’s monumental Die Sprache of 1900, which outlined the psychological mechanisms involved in

  1. Iconic Factors and Language Word Order

    Science.gov (United States)

    Moeser, Shannon Dawn

    1975-01-01

    College students were presented with an artificial language in which spoken nonsense words were correlated with visual references. Inferences regarding vocabulary acquisition were drawn, and it was suggested that the processing of the language was mediated through a semantic memory system. (CK)

  2. Language & Culture in English as a Foreign Language Teaching: a socio-cultural experience of some exchange students from Piauí Federal Institute

    Directory of Open Access Journals (Sweden)

    Giselda dos Santos Costa

    2018-04-01

    The internationalization of higher education has intensified dramatically over the last fifteen years in Brazil, creating wide-ranging opportunities as well as threats and limitations for foreign language teaching practices and the teaching of culture. Many linguists and anthropologists (BYRAM, 1997; KRAMSCH, 1993; MCKAY, 2003; JENKINS, 2005) have stated that for communication to be successful, the use of language must be associated with other culturally appropriate behavior, not just linguistic rules in the strict sense. In this article we discuss the problems related to internationalization; more specifically, the discussion revolves around the sociocultural challenges faced by some students of the Federal Institute of Piauí (IFPI) regarding their experiences in the Science without Borders program, spread across five countries. Qualitative interviews revealed that students had sociocultural problems which could have been avoided if English teachers had worked on them in the language classroom before the exchange program took place.

  3. ASSESSING THE SO CALLED MARKED INFLECTIONAL FEATURES OF NIGERIAN ENGLISH: A SECOND LANGUAGE ACQUISITION THEORY ACCOUNT

    OpenAIRE

    Boluwaji Oshodi

    2014-01-01

    There are conflicting claims among scholars on whether the structural outputs of the types of English spoken in countries where English is used as a second language give such speech forms the status of varieties of English. This study examined those morphological features considered to be marked features of the variety spoken in Nigeria according to Kirkpatrick (2011) and the variety spoken in Malaysia by considering the claims of the Missing Surface Inflection Hypothesis (MSIH), a Second Lan...

  4. Language-driven anticipatory eye movements in virtual reality.

    Science.gov (United States)

    Eichert, Nicole; Peeters, David; Hagoort, Peter

    2018-06-01

    Predictive language processing is often studied by measuring eye movements as participants look at objects on a computer screen while they listen to spoken sentences. This variant of the visual-world paradigm has revealed that information encountered by a listener at a spoken verb can give rise to anticipatory eye movements to a target object, which is taken to indicate that people predict upcoming words. The ecological validity of such findings remains questionable, however, because these computer experiments used two-dimensional stimuli that were mere abstractions of real-world objects. Here we present a visual-world paradigm study in a three-dimensional (3-D) immersive virtual reality environment. Despite significant changes in the stimulus materials and the different mode of stimulus presentation, language-mediated anticipatory eye movements were still observed. These findings thus indicate that people do predict upcoming words during language comprehension in a more naturalistic setting where natural depth cues are preserved. Moreover, the results confirm the feasibility of using eyetracking in rich and multimodal 3-D virtual environments.

  5. Processing spoken lectures in resource-scarce environments

    CSIR Research Space (South Africa)

    Van Heerden, CJ

    2011-11-01

    ... and then adapting or training new models using the segmented spoken lectures. The eventual systems perform quite well, aligning more than 90% of a selected set of target words successfully....

  6. Autosegmental Representation of Epenthesis in the Spoken French ...

    African Journals Online (AJOL)

    Nneka Umera-Okeke

    ... spoken French of IUFLs. Key words: IUFLs, Epenthesis, Ijebu dialect, Autosegmental phonology .... Ambiguities may result: salmi "strait" vs. salami. (An exception is that in .... tiers of segments. In the picture given us by classical generative.

  7. Aphasia, an acquired language disorder

    African Journals Online (AJOL)

    2009-10-11

    Oct 11, 2009 ... In this article we will review the types of aphasia, an approach to its diagnosis, aphasia subtypes, rehabilitation and prognosis. ... language processing in both the written and spoken forms.6 ... The angular gyrus (Brodmann area 39) is located at the .... of his or her quality of life, emotional state, sense of well-.

  8. Prediction of Audience Response from Spoken Sequences, Speech Pauses and Co-speech Gestures in Humorous Discourse by Barack Obama

    DEFF Research Database (Denmark)

    Navarretta, Costanza

    2017-01-01

    president mocks himself, his collaborators, political adversary and the press corps making the audience react with cheers, laughter and/or applause. The results of the prediction experiment demonstrate that information about spoken sequences, pauses and co-speech gestures by Obama can be used to predict...

  9. Creative Realization of the Interdisciplinary Approach to the Study of Foreign Languages in the Experience of the Danube Basin

    Directory of Open Access Journals (Sweden)

    Olga Demchenko

    2013-08-01

    The article provides a brief overview of the experience of the Danube Basin universities, with an emphasis on the importance of studying Maritime English as a language of international communication. The phenomenon of “interdisciplinarity” and the dominant features of the interdisciplinary approach to the study of a foreign language are considered. Given the need to train competent maritime professionals with a command of a professionally oriented foreign language, and the lack of a theoretically and practically grounded technique for purposefully developing communicative competence through an interdisciplinary approach to foreign language learning, some practical experience of using the interdisciplinary approach as an innovative technology in teaching Maritime English is presented.

  10. Cohesion as interaction in ELF spoken discourse

    Directory of Open Access Journals (Sweden)

    T. Christiansen

    2013-10-01

    Hitherto, most research into cohesion has concentrated on texts (usually written) only in standard Native Speaker English – e.g. Halliday and Hasan (1976). By contrast, following on the work in anaphora of such scholars as Reinhart (1983) and Cornish (1999), Christiansen (2011) describes cohesion as an interactive process focusing on the link between text cohesion and discourse coherence. Such a consideration of cohesion from the perspective of discourse (i.e. the process of which text is the product -- Widdowson 1984, p. 100) is especially relevant within a lingua franca context, as the issue of different variations of ELF and inter-cultural concerns (Guido 2008) add extra dimensions to the complex multi-code interaction. In this case study, six extracts of transcripts (approximately 1000 words each), taken from the VOICE corpus (2011) of conference question and answer sessions (spoken interaction) set in multicultural university contexts, are analysed in depth by means of a qualitative method.

  11. Talker and background noise specificity in spoken word recognition memory

    OpenAIRE

    Cooper, Angela; Bradlow, Ann R.

    2017-01-01

    Prior research has demonstrated that listeners are sensitive to changes in the indexical (talker-specific) characteristics of speech input, suggesting that these signal-intrinsic features are integrally encoded in memory for spoken words. Given that listeners frequently must contend with concurrent environmental noise, to what extent do they also encode signal-extrinsic details? Native English listeners’ explicit memory for spoken English monosyllabic and disyllabic words was assessed as a fu...

  12. Spectrotemporal processing drives fast access to memory traces for spoken words.

    Science.gov (United States)

    Tavano, A; Grimm, S; Costa-Faidella, J; Slabu, L; Schröger, E; Escera, C

    2012-05-01

    The Mismatch Negativity (MMN) component of the event-related potentials is generated when a detectable spectrotemporal feature of the incoming sound does not match the sensory model set up by preceding repeated stimuli. MMN is enhanced at frontocentral scalp sites for deviant words when compared to acoustically similar deviant pseudowords, suggesting that automatic access to long-term memory traces for spoken words contributes to MMN generation. Does spectrotemporal feature matching also drive automatic lexical access? To test this, we recorded human auditory event-related potentials (ERPs) to disyllabic spoken words and pseudowords within a passive oddball paradigm. We first aimed at replicating the word-related MMN enhancement effect for Spanish, thereby adding to the available cross-linguistic evidence (e.g., Finnish, English). We then probed its resilience to spectrotemporal perturbation by inserting short (20 ms) and long (120 ms) silent gaps between first and second syllables of deviant and standard stimuli. A significantly enhanced, frontocentrally distributed MMN to deviant words was found for stimuli with no gap. The long gap yielded no deviant word MMN, showing that prior expectations of word form limits in a given language influence deviance detection processes. Crucially, the insertion of a short gap suppressed deviant word MMN enhancement at frontocentral sites. We propose that spectrotemporal point-wise matching constitutes a core mechanism for fast serial computations in audition and language, bridging sensory and long-term memory systems. Copyright © 2012 Elsevier Inc. All rights reserved.
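
    As a rough illustration of the deviant-minus-standard logic behind the MMN measure discussed above, the sketch below computes a difference wave and its mean amplitude in a typical analysis window. The sampling rate, channel, window and data are assumptions for demonstration only, not the parameters used in the study.

```python
# Illustrative sketch: MMN difference wave from averaged ERPs in an oddball design.
# Sampling rate, epoch limits and the 100-250 ms window are assumed, not the study's settings.
import numpy as np

fs = 500                                  # sampling rate in Hz (assumed)
times = np.arange(-0.1, 0.6, 1 / fs)      # epoch from -100 ms to 600 ms

# Averaged ERPs at a frontocentral channel (e.g., Fz), shape (n_times,).
# Here they are fabricated noise so the example runs standalone.
rng = np.random.default_rng(0)
erp_standard = rng.normal(0.0, 1e-6, times.size)
erp_deviant = rng.normal(0.0, 1e-6, times.size)

mmn_wave = erp_deviant - erp_standard     # deviant minus standard

window = (times >= 0.100) & (times <= 0.250)   # typical MMN latency window
mmn_amplitude = mmn_wave[window].mean()
print(f"Mean MMN amplitude, 100-250 ms: {mmn_amplitude:.2e} V")
```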

  13. Experiences, Perceptions and Attitudes on ICT Integration: A Case Study among Novice and Experienced Language Teachers in the Philippines

    Science.gov (United States)

    Dela Rosa, John Paul Obillos

    2016-01-01

    The influence of Information and Communication Technology (ICT) in developing ways on how to better deliver instruction has been regarded as beneficial in education. In language teaching, the use of ICT is an impactful experience. It is therefore the purpose of this study to delve into the experiences, perceptions and attitudes of a novice and an…

  14. Adult English Language Learners Constructing and Sharing Their Stories and Experiences: The Cultural and Linguistic Autobiography Writing Project

    Science.gov (United States)

    Park, Gloria

    2011-01-01

    This article is the culmination of the Cultural and Linguistic Autobiography (CLA) writing project, which details narrative descriptions of adult English language learners' (ELLs') cultural and linguistic experiences and how those experiences may have influenced the ways in which these learners constructed and reconstructed their identities.…

  15. Language Learners Perceptions and Experiences on the Use of Mobile Applications for Independent Language Learning in Higher Education

    Science.gov (United States)

    Niño, Ana

    2015-01-01

    With the widespread use of mobile phones and portable devices it is inevitable to think of Mobile Assisted Language Learning as a means of independent learning in Higher Education. Nowadays many learners are keen to explore the wide variety of applications available in their portable and always readily available mobile phones and tablets. The fact…

  16. Speech-language pathologists' assessment and intervention practices with multilingual children.

    Science.gov (United States)

    Williams, Corinne J; McLeod, Sharynne

    2012-06-01

    Within predominantly English-speaking countries such as the US, UK, Canada, New Zealand, and Australia, there are a significant number of people who speak languages other than English. This study aimed to examine Australian speech-language pathologists' (SLPs) perspectives and experiences of multilingualism, including their assessment and intervention practices, and service delivery methods when working with children who speak languages other than English. A questionnaire was completed by 128 SLPs who attended an SLP seminar about cultural and linguistic diversity. Approximately one half of the SLPs (48.4%) reported that they had at least minimal competence in a language(s) other than English; but only 12 (9.4%) reported that they were proficient in another language. The SLPs spoke a total of 28 languages other than English, the most common being French, Italian, German, Spanish, Mandarin, and Auslan (Australian sign language). Participants reported that they had, in the past 12 months, worked with a mean of 59.2 (range 1-100) children from multilingual backgrounds. These children were reported to speak between two and five languages each; the most common being: Vietnamese, Arabic, Cantonese, Mandarin, Australian Indigenous languages, Tagalog, Greek, and other Chinese languages. There was limited overlap between the languages spoken by the SLPs and the children on the SLPs' caseloads. Many of the SLPs assessed children's speech (50.5%) and/or language (34.2%) without assistance from others (including interpreters). English was the primary language used during assessments and intervention. The majority of SLPs always used informal speech (76.7%) and language (78.2%) assessments and, if standardized tests were used, typically they were in English. The SLPs sought additional information about the children's languages and cultural backgrounds, but indicated that they had limited resources to discriminate between speech and language difference vs disorder.

  17. The role of grammatical category information in spoken word retrieval.

    Science.gov (United States)

    Duràn, Carolina Palma; Pillon, Agnesa

    2011-01-01

    We investigated the role of lexical syntactic information such as grammatical gender and category in spoken word retrieval processes by using a blocking paradigm in picture and written word naming experiments. In Experiments 1, 3, and 4, we found that the naming of target words (nouns) from pictures or written words was faster when these target words were named within a list where only words from the same grammatical category had to be produced (homogeneous category list: all nouns) than when they had to be produced within a list comprising also words from another grammatical category (heterogeneous category list: nouns and verbs). On the other hand, we detected no significant facilitation effect when the target words had to be named within a homogeneous gender list (all masculine nouns) compared to a heterogeneous gender list (both masculine and feminine nouns). In Experiment 2, using the same blocking paradigm by manipulating the semantic category of the items, we found that naming latencies were significantly slower in the semantic category homogeneous in comparison with the semantic category heterogeneous condition. Thus semantic category homogeneity caused an interference, not a facilitation effect like grammatical category homogeneity. Finally, in Experiment 5, nouns in the heterogeneous category condition had to be named just after a verb (category-switching position) or a noun (same-category position). We found a facilitation effect of category homogeneity but no significant effect of position, which showed that the effect of category homogeneity found in Experiments 1, 3, and 4 was not due to a cost of switching between grammatical categories in the heterogeneous grammatical category list. These findings supported the hypothesis that grammatical category information impacts word retrieval processes in speech production, even when words are to be produced in isolation. They are discussed within the context of extant theories of lexical production.

  18. Making a Difference: Language Teaching for Intercultural and International Dialogue

    Science.gov (United States)

    Byram, Michael; Wagner, Manuela

    2018-01-01

    Language teaching has long been associated with teaching in a country or countries where a target language is spoken, but this approach is inadequate. In the contemporary world, language teaching has a responsibility to prepare learners for interaction with people of other cultural backgrounds, teaching them skills and attitudes as well as…

  19. Australian Aboriginal Deaf People and Aboriginal Sign Language

    Science.gov (United States)

    Power, Des

    2013-01-01

    Many Australian Aboriginal people use a sign language ("hand talk") that mirrors their local spoken language and is used both in culturally appropriate settings when speech is taboo or counterindicated and for community communication. The characteristics of these languages are described, and early European settlers' reports of deaf…

  20. Regional Sign Language Varieties in Contact: Investigating Patterns of Accommodation

    Science.gov (United States)

    Stamp, Rose; Schembri, Adam; Evans, Bronwen G.; Cormier, Kearsy

    2016-01-01

    Short-term linguistic accommodation has been observed in a number of spoken language studies. The first of its kind in sign language research, this study aims to investigate the effects of regional varieties in contact and lexical accommodation in British Sign Language (BSL). Twenty-five participants were recruited from Belfast, Glasgow,…

  1. How Facebook Can Revitalise Local Languages: Lessons from Bali

    Science.gov (United States)

    Stern, Alissa Joy

    2017-01-01

    For a language to survive, it must be spoken and passed down to the next generation. But how can we engage teenagers--so crucial for language transmission--to use and value their local tongue when they are bombarded by pressures from outside and from within their society to only speak national and international languages? This paper analyses the…

  2. El Espanol como Idioma Universal (Spanish as a Universal Language)

    Science.gov (United States)

    Mijares, Jose

    1977-01-01

    A proposal to transform Spanish into a universal language because it possesses the prerequisites: it is a living language, spoken in several countries; it is a natural language; and it uses the ordinary alphabet. Details on simplification and standardization are given. (Text is in Spanish.) (AMH)

  3. Language Planning for Venezuela: The Role of English.

    Science.gov (United States)

    Kelsey, Irving; Serrano, Jose

    A rationale for teaching foreign languages in Venezuelan schools is discussed. An included sociolinguistic profile of Venezuela indicates that Spanish is the sole language of internal communication needs. Other languages spoken in Venezuela serve primarily a group function among the immigrant and indigenous communities. However, the teaching of…

  4. Dilemmatic Aspects of Language Policies in a Trilingual Preschool Group

    Science.gov (United States)

    Puskás, Tünde; Björk-Willén, Polly

    2017-01-01

    This article explores dilemmatic aspects of language policies in a preschool group in which three languages (Swedish, Romani and Arabic) are spoken on an everyday basis. The article highlights the interplay between policy decisions on the societal level, the teachers' interpretations of these policies, as well as language practices on the micro…

  5. Perceived Conventionality in Co-speech Gestures Involves the Fronto-Temporal Language Network

    Directory of Open Access Journals (Sweden)

    Dhana Wolf

    2017-11-01

    Face-to-face communication is multimodal; it encompasses spoken words, facial expressions, gaze, and co-speech gestures. In contrast to linguistic symbols (e.g., spoken words or signs in sign language) relying on mostly explicit conventions, gestures vary in their degree of conventionality. Bodily signs may have a generally accepted or conventionalized meaning (e.g., a head shake) or less so (e.g., self-grooming). We hypothesized that the subjective perception of conventionality in co-speech gestures relies on the classical language network, i.e., the left-hemispheric inferior frontal gyrus (IFG, Broca's area) and the posterior superior temporal gyrus (pSTG, Wernicke's area), and studied 36 subjects watching video-recorded story retellings during a behavioral and a functional magnetic resonance imaging (fMRI) experiment. It is well documented that neural correlates of such naturalistic videos emerge as intersubject covariance (ISC) in fMRI even without involving a stimulus (model-free analysis). The subjects attended either to perceived conventionality or to a control condition (any hand movements or gesture-speech relations). Such tasks modulate ISC in contributing neural structures, and thus we studied ISC changes with task demands in language networks. Indeed, the conventionality task significantly increased covariance of the button-press time series and neuronal synchronization in the left IFG in comparison with the other tasks. In the left IFG, synchronous activity was observed during the conventionality task only. In contrast, the left pSTG exhibited correlated activation patterns during all conditions, with an increase in the conventionality task at the trend level only. Conceivably, the left IFG can be considered a core region for the processing of perceived conventionality in co-speech gestures, similar to spoken language. In general, the interpretation of conventionalized signs may rely on neural mechanisms that engage during language comprehension.
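
    Because the abstract leans on intersubject covariance, a leave-one-out intersubject correlation sketch may help make the measure concrete. The data shapes and numbers below are fabricated; this is not the authors' pipeline.

```python
# Leave-one-out intersubject correlation (ISC) for one region of interest.
# Data are fabricated; the shape (n_subjects, n_timepoints) is an assumption.
import numpy as np

def intersubject_correlation(data: np.ndarray) -> np.ndarray:
    """Correlate each subject's time series with the mean of all other subjects."""
    n_subjects = data.shape[0]
    iscs = np.empty(n_subjects)
    for s in range(n_subjects):
        others = np.delete(data, s, axis=0).mean(axis=0)  # leave-one-out average
        iscs[s] = np.corrcoef(data[s], others)[0, 1]
    return iscs

rng = np.random.default_rng(1)
ifg_timeseries = rng.normal(size=(36, 240))  # 36 subjects, 240 fMRI volumes (made up)
print("Mean ISC:", intersubject_correlation(ifg_timeseries).mean())
```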

  6. Sign Language Interpreting in Theatre: Using the Human Body to Create Pictures of the Human Soul

    Directory of Open Access Journals (Sweden)

    Michael Richardson

    2017-06-01

    This paper explores theatrical interpreting for Deaf spectators, a specialism that both blurs the separation between translation and interpreting, and replaces these potentials with a paradigm in which the translator's body is central to the production of the target text. Meaningful written translations of dramatic texts into sign language are not currently possible. For Deaf people to access Shakespeare or Molière in their own language usually means attending a sign language interpreted performance, a typically disappointing experience that fails to provide accessibility or to fulfil the potential of a dynamically equivalent theatrical translation. I argue that when such interpreting events fail, significant contributory factors are the challenges involved in producing such a target text and the insufficient embodiment of that text. The second of these factors suggests that the existing conference and community models of interpreting are insufficient in describing theatrical interpreting. I propose that a model drawn from Theatre Studies, namely psychophysical acting, might be more effective for conceptualising theatrical interpreting. I also draw on theories from neurological research into the Mirror Neuron System to suggest that a highly visual and physical approach to performance (be that by actors or interpreters) is more effective in building a strong actor-spectator interaction than a performance in which meaning is conveyed by spoken words. Arguably this difference in language impact between signed and spoken is irrelevant to hearing audiences attending spoken language plays, but I suggest that for all theatre translators the implications are significant: it is not enough to create a literary translation as the target text; it is also essential to produce a text that suggests physicality. The aim should be the creation of a text which demands full expression through the body, the best picture of the human soul and the fundamental medium

  7. Uncovering young children's emerging identities related to their literacy experiences: Suggestions to strengthen language education

    Directory of Open Access Journals (Sweden)

    Moen, Melanie Carmen

    2015-12-01

    The study explored how young children’s identities emerged from their drawings and accounts of their favourite stories, as we argue the importance of understanding children in the context of school and language education. Sixty-six (n=66) children from two urban schools in Pretoria, South Africa, were asked to write about and draw their favourite story. The participants were between the ages of six and seven years. Vygotsky’s socio-cultural theory and Chen’s theory of the construction of identity in a social context were used as the conceptual framework. This conceptual framework could be linked to the findings, which suggested that the children related their drawings and versions of their favourite stories to their interpretations of their life worlds. The prominent themes from the data could be associated with the self, the family, familiar objects and known animals. Their literacy experiences and the socio-cultural influences on the children’s construction of their identities were apparent in their work. We argue that teachers need to better understand how children understand themselves in relation to the world around them when making decisions about effective language education.

  8. The acculturation, language and learning experiences of international nursing students: Implications for nursing education.

    Science.gov (United States)

    Mitchell, Creina; Del Fabbro, Letitia; Shaw, Julie

    2017-09-01

    International or foreign students are those who enrol in universities outside their country of citizenship. They face many challenges acculturating to and learning in a new country and education system, particularly if they study in an additional language. This qualitative inquiry aimed to explore the learning and acculturating experiences of international nursing students to identify opportunities for teaching innovation to optimise the experiences and learning of international nursing students. Undergraduate and postgraduate international nursing students were recruited from one campus of an Australian university to take part in semi-structured interviews. A purposive and theoretically saturated sample of 17 students was obtained. Interviews were audio-recorded and field notes and interview data were thematically analysed. Expressing myself and Finding my place were the two major themes identified from the international student data. International nursing students identified that it took them longer to study in comparison with domestic students and that stress negatively influenced communication, particularly in the clinical setting. Additionally international nursing students identified the need to find supportive opportunities to speak English to develop proficiency. Clinical placement presented the opportunity to speak English and raised the risk of being identified as lacking language proficiency or being clinically unsafe. Initially, international nursing students felt isolated and it was some time before they found their feet. In this time, they experienced otherness and discrimination. International nursing students need a safe place to learn so they can adjust and thrive in the university learning community. Faculty and clinical educators must be culturally competent; they need to understand international nursing students' needs and be willing and able to advocate for and create an equitable environment that is appropriate for international nursing

  9. Effects of language experience and expectations on attention to consonants and tones in English and Mandarin Chinese.

    Science.gov (United States)

    Lin, Mengxi; Francis, Alexander L

    2014-11-01

    Both long-term native language experience and immediate linguistic expectations can affect listeners' use of acoustic information when making a phonetic decision. In this study, a Garner selective attention task was used to investigate differences in attention to consonants and tones by American English-speaking listeners (N = 20) and Mandarin Chinese-speaking listeners hearing speech in either American English (N = 17) or Mandarin Chinese (N = 20). To minimize the effects of lexical differences and differences in the linguistic status of pitch across the two languages, stimuli and response conditions were selected such that all tokens constitute legitimate words in both languages and all responses required listeners to make decisions that were linguistically meaningful in their native language. Results showed that regardless of ambient language, Chinese listeners processed consonant and tone in a combined manner, consistent with previous research. In contrast, English listeners treated tones and consonants as perceptually separable. Results are discussed in terms of the role of sub-phonemic differences in acoustic cues across language, and the linguistic status of consonants and pitch contours in the two languages.

  10. Textese and use of texting by children with typical language development and Specific Language Impairment

    NARCIS (Netherlands)

    Blom, E.; van Dijk, C.; Vasić, N.; van Witteloostuijn, M.; Avrutin, S.

    The purpose of this study was to investigate texting and textese, which is the special register used for sending brief text messages, across children with typical development (TD) and children with Specific Language Impairment (SLI). Using elicitation techniques, texting and spoken language messages

  11. Textese and use of texting by children with typical language development and Specific Language Impairment

    NARCIS (Netherlands)

    Blom, W.B.T.; van Dijk, Chantal; Vasic, Nada; van Witteloostuijn, Merel; Avrutin, S.

    2017-01-01

    The purpose of this study was to investigate texting and textese, which is the special register used for sending brief text messages, across children with typical development (TD) and children with Specific Language Impairment (SLI). Using elicitation techniques, texting and spoken language messages

  12. First Steps to Endangered Language Documentation: The Kalasha Language, a Case Study

    Science.gov (United States)

    Mela-Athanasopoulou, Elizabeth

    2011-01-01

    The present paper, based on extensive fieldwork conducted on Kalasha, an endangered language spoken in three small valleys in the Chitral District of Northwestern Pakistan, presents a spontaneous dialogue-based elicitation of linguistic material used for the description and documentation of the language. After a brief display of the basic typology…

  13. The experiences of students with English as a second language in a baccalaureate nursing program.

    Science.gov (United States)

    Sanner, Susan; Wilson, Astrid

    2008-10-01

    Teaching nursing students with English as a second language (ESL) can be a challenge for nursing faculty in many English-speaking countries. This qualitative study sought to answer the research question, "How do students with ESL describe their experiences in a nursing program?" in order to develop a better understanding of the reasons for their course failure. Seidman's Model of in-depth interviewing (1998), consisting of three successive interviews with the same participant, was used. The first interview focused on the students' life histories, the second allowed the participants to reconstruct the details of their experiences, and the third encouraged the students to reflect on the meaning of their experiences. Three themes emerged: "walking the straight and narrow", "an outsider looking in", and "doing whatever it takes to be successful." Although each participant shared instances where ESL may have contributed to his/her academic difficulty, the participants did not perceive that ESL was the primary reason for course failure, but attributed it to the discrimination and stereotyping they experienced. In spite of the discrimination and stereotyping, participants reported a strong desire to persist in the nursing program. Findings from this study provided an in-depth understanding of the perceptions of three nursing students with ESL. Also, the findings are applicable to nursing faculty in that a better understanding of students with ESL can enhance their learning.

  14. Grammar of Kove: An Austronesian Language of the West New Britain Province, Papua New Guinea

    Science.gov (United States)

    Sato, Hiroko

    2013-01-01

    This dissertation is a descriptive grammar of Kove, an Austronesian language spoken in the West New Britain Province of Papua New Guinea. Kove is primarily spoken in 18 villages, including some on the small islands north of New Britain. There are about 9,000 people living in the area, but many are not fluent speakers of Kove. The dissertation…

  15. On the Conventionalization of Mouth Actions in Australian Sign Language.

    Science.gov (United States)

    Johnston, Trevor; van Roekel, Jane; Schembri, Adam

    2016-03-01

    This study investigates the conventionalization of mouth actions in Australian Sign Language. Signed languages were once thought of as simply manual languages because the hands produce the signs which individually and in groups are the symbolic units most easily equated with the words, phrases and clauses of spoken languages. However, it has long been acknowledged that non-manual activity, such as movements of the body, head and face, plays a very important role. In this context, mouth actions that occur while communicating in signed languages have posed a number of questions for linguists: are the silent mouthings of spoken language words simply borrowings from the respective majority community spoken language(s)? Are those mouth actions that are not silent mouthings of spoken words conventionalized linguistic units proper to each signed language, culturally linked semi-conventional gestural units shared by signers with members of the majority speaking community, or even gestures and expressions common to all humans? We use a corpus-based approach to gather evidence of the extent of the use of mouth actions in naturalistic Australian Sign Language--making comparisons with other signed languages where data is available--and the form/meaning pairings that these mouth actions instantiate.

  16. Advances in natural language processing.

    Science.gov (United States)

    Hirschberg, Julia; Manning, Christopher D

    2015-07-17

    Natural language processing employs computational techniques for the purpose of learning, understanding, and producing human language content. Early computational approaches to language research focused on automating the analysis of the linguistic structure of language and developing basic technologies such as machine translation, speech recognition, and speech synthesis. Today's researchers refine and make use of such tools in real-world applications, creating spoken dialogue systems and speech-to-speech translation engines, mining social media for information about health or finance, and identifying sentiment and emotion toward products and services. We describe successes and challenges in this rapidly advancing area. Copyright © 2015, American Association for the Advancement of Science.

  17. The comprehension skills of children learning English as an additional language.

    Science.gov (United States)

    Burgoyne, K; Kelly, J M; Whiteley, H E; Spooner, A

    2009-12-01

    Data from national test results suggests that children who are learning English as an additional language (EAL) experience relatively lower levels of educational attainment in comparison to their monolingual, English-speaking peers. The relative underachievement of children who are learning EAL demands that the literacy needs of this group are identified. To this end, this study aimed to explore the reading- and comprehension-related skills of a group of EAL learners. Data are reported from 92 Year 3 pupils, of whom 46 children are learning EAL. Children completed standardized measures of reading accuracy and comprehension, listening comprehension, and receptive and expressive vocabulary. Results indicate that many EAL learners experience difficulties in understanding written and spoken text. These comprehension difficulties are not related to decoding problems but are related to significantly lower levels of vocabulary knowledge experienced by this group. Many EAL learners experience significantly lower levels of English vocabulary knowledge which has a significant impact on their ability to understand written and spoken text. Greater emphasis on language development is therefore needed in the school curriculum to attempt to address the limited language skills of children learning EAL.

  18. Functional changes in people with different hearing status and experiences of using Chinese sign language: an fMRI study.

    Science.gov (United States)

    Li, Qiang; Xia, Shuang; Zhao, Fei; Qi, Ji

    2014-01-01

    The purpose of this study was to assess functional changes in the cerebral cortex in people with different sign language experience and hearing status whilst observing and imitating Chinese Sign Language (CSL) using functional magnetic resonance imaging (fMRI). 50 participants took part in the study, and were divided into four groups according to their hearing status and experience of using sign language: prelingual deafness signer group (PDS), normal hearing non-signer group (HnS), native signer group with normal hearing (HNS), and acquired signer group with normal hearing (HLS). fMRI images were scanned from all subjects when they performed block-designed tasks that involved observing and imitating sign language stimuli. Nine activation areas were found in response to undertaking either observation or imitation CSL tasks and three activated areas were found only when undertaking the imitation task. Of those, the PDS group had significantly greater activation areas in terms of the cluster size of the activated voxels in the bilateral superior parietal lobule, cuneate lobe and lingual gyrus in response to undertaking either the observation or the imitation CSL task than the HnS, HNS and HLS groups. The PDS group also showed significantly greater activation in the bilateral inferior frontal gyrus which was also found in the HNS or the HLS groups but not in the HnS group. This indicates that deaf signers have better sign language proficiency, because they engage more actively with the phonetic and semantic elements. In addition, the activations of the bilateral superior temporal gyrus and inferior parietal lobule were only found in the PDS group and HNS group, and not in the other two groups, which indicates that the area for sign language processing appears to be sensitive to the age of language acquisition. After reading this article, readers will be able to: discuss the relationship between sign language and its neural mechanisms. Copyright © 2014 Elsevier Inc

  19. Effects of Early Bilingual Experience with a Tone and a Non-Tone Language on Speech-Music Integration.

    Directory of Open Access Journals (Sweden)

    Salomi S Asaridou

    We investigated music and language processing in a group of early bilinguals who spoke a tone language and a non-tone language (Cantonese and Dutch). We assessed online speech-music processing interactions, that is, interactions that occur when speech and music are processed simultaneously in songs, with a speeded classification task. In this task, participants judged sung pseudowords either musically (based on the direction of the musical interval) or phonologically (based on the identity of the sung vowel). We also assessed longer-term effects of linguistic experience on musical ability, that is, the influence of extensive prior experience with language when processing music. These effects were assessed with a task in which participants had to learn to identify musical intervals and with four pitch-perception tasks. Our hypothesis was that due to their experience in two different languages using lexical versus intonational tone, the early Cantonese-Dutch bilinguals would outperform the Dutch control participants. In online processing, the Cantonese-Dutch bilinguals processed speech and music more holistically than controls. This effect seems to be driven by experience with a tone language, in which integration of segmental and pitch information is fundamental. Regarding longer-term effects of linguistic experience, we found no evidence for a bilingual advantage in either the music-interval learning task or the pitch-perception tasks. Together, these results suggest that being a Cantonese-Dutch bilingual does not have any measurable longer-term effects on pitch and music processing, but does have consequences for how speech and music are processed jointly.

  20. An experience in Language Teaching Seminar of Primary Education Degree through the Seventh Art

    Directory of Open Access Journals (Sweden)

    Ángela GARCÍA-MANSO

    2018-01-01

    This study describes the seminar «Language Skills and Seventh Art» developed at the University of Extremadura in the 2015-2016 academic year. Through the analysis of ten films, we address the professional competences of future Primary teachers through unique situations: for example, disabilities such as blindness and deafness, autism or dyslexia; questions about the origin of language and artificial languages; or cultural issues such as the wild child and situations of isolation or loneliness. In addition to the specific considerations of each film, the active use of cinema in different areas of learning foreign languages and ELE (Spanish as a Foreign Language) is postulated.

  1. Alpha and theta brain oscillations index dissociable processes in spoken word recognition.

    Science.gov (United States)

    Strauß, Antje; Kotz, Sonja A; Scharinger, Mathias; Obleser, Jonas

    2014-08-15

    Slow neural oscillations (~1-15 Hz) are thought to orchestrate the neural processes of spoken language comprehension. However, functional subdivisions within this broad range of frequencies are disputed, with most studies hypothesizing only about single frequency bands. The present study utilizes an established paradigm of spoken word recognition (lexical decision) to test the hypothesis that within the slow neural oscillatory frequency range, distinct functional signatures and cortical networks can be identified at least for theta- (~3-7 Hz) and alpha-frequencies (~8-12 Hz). Listeners performed an auditory lexical decision task on a set of items that formed a word-pseudoword continuum: ranging from (1) real words over (2) ambiguous pseudowords (deviating from real words only in one vowel; comparable to natural mispronunciations in speech) to (3) pseudowords (clearly deviating from real words by randomized syllables). By means of time-frequency analysis and spatial filtering, we observed a dissociation into distinct but simultaneous patterns of alpha power suppression and theta power enhancement. Alpha exhibited a parametric suppression as items increasingly matched real words, in line with lowered functional inhibition in a left-dominant lexical processing network for more word-like input. Simultaneously, theta power in a bilateral fronto-temporal network was selectively enhanced for ambiguous pseudowords only. Thus, enhanced alpha power can neurally 'gate' lexical integration, while enhanced theta power might index functionally more specific ambiguity-resolution processes. To this end, a joint analysis of both frequency bands provides neural evidence for parallel processes in achieving spoken word recognition. Copyright © 2014 Elsevier Inc. All rights reserved.
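
    To make the frequency-band logic above concrete, here is a minimal band-power sketch using band-pass filtering and the Hilbert envelope, rather than the study's time-frequency and spatial-filtering pipeline; the sampling rate, filter order and data are assumptions.

```python
# Minimal sketch: theta (3-7 Hz) and alpha (8-12 Hz) power envelopes from one EEG channel.
# Uses band-pass filtering plus the Hilbert transform, not the study's actual pipeline.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def band_power(signal: np.ndarray, fs: float, low: float, high: float) -> np.ndarray:
    """Instantaneous power envelope of `signal` in the [low, high] Hz band."""
    b, a = butter(4, [low, high], btype="bandpass", fs=fs)
    return np.abs(hilbert(filtfilt(b, a, signal))) ** 2

fs = 250.0
rng = np.random.default_rng(2)
eeg = rng.normal(size=int(fs * 10))       # 10 s of fabricated single-channel EEG

theta_power = band_power(eeg, fs, 3.0, 7.0)
alpha_power = band_power(eeg, fs, 8.0, 12.0)
print("Mean theta power:", theta_power.mean(), "Mean alpha power:", alpha_power.mean())
```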

  2. The socially-weighted encoding of spoken words: A dual-route approach to speech perception

    Directory of Open Access Journals (Sweden)

    Meghan eSumner

    2014-01-01

    Full Text Available Spoken words are highly variable. A single word may never be uttered the same way twice. As listeners, we regularly encounter speakers of different ages, genders, and accents, increasing the amount of variation we face. How listeners understand spoken words as quickly and adeptly as they do despite this variation remains an issue central to linguistic theory. We propose that learned acoustic patterns are mapped simultaneously to linguistic representations and to social representations. In doing so, we illuminate a paradox that results in the literature from, we argue, the focus on representations and the peripheral treatment of word-level phonetic variation. We consider phonetic variation more fully and highlight a growing body of work that is problematic for current theory: Words with different pronunciation variants are recognized equally well in immediate processing tasks, while an atypical, infrequent, but socially-idealized form is remembered better in the long-term. We suggest that the perception of spoken words is socially-weighted, resulting in sparse, but high-resolution clusters of socially-idealized episodes that are robust in immediate processing and are more strongly encoded, predicting memory inequality. Our proposal includes a dual-route approach to speech perception in which listeners map acoustic patterns in speech to linguistic and social representations in tandem. This approach makes novel predictions about the extraction of information from the speech signal, and provides a framework with which we can ask new questions. We propose that language comprehension, broadly, results from the integration of both linguistic and social information.

  3. Language Shift or Increased Bilingualism in South Africa: Evidence from Census Data

    Science.gov (United States)

    Posel, Dorrit; Zeller, Jochen

    2016-01-01

    In the post-apartheid era, South Africa has adopted a language policy that gives official status to 11 languages (English, Afrikaans, and nine Bantu languages). However, English has remained the dominant language of business, public office, and education, and some research suggests that English is increasingly being spoken in domestic settings.…

  4. Key Data on Teaching Languages at School in Europe. 2017 Edition. Eurydice Report

    Science.gov (United States)

    Baïdak, Nathalie; Balcon, Marie-Pascale; Motiejunaite, Akvile

    2017-01-01

    Linguistic diversity is part of Europe's DNA. It embraces not only the official languages of Member States, but also the regional and/or minority languages spoken for centuries on European territory, as well as the languages brought by the various waves of migrants. The coexistence of this variety of languages constitutes an asset, but it is also…

  5. Mapudungun According to Its Speakers: Mapuche Intellectuals and the Influence of Standard Language Ideology

    Science.gov (United States)

    Lagos, Cristián; Espinoza, Marco; Rojas, Darío

    2013-01-01

    In this paper, we analyse the cultural models (or folk theory of language) that the Mapuche intellectual elite have about Mapudungun, the native language of the Mapuche people still spoken today in Chile as the major minority language. Our theoretical frame is folk linguistics and studies of language ideology, but we have also taken an applied…

  6. Working with the Bilingual Child Who Has a Language Delay. Meeting Learning Challenges

    Science.gov (United States)

    Greenspan, Stanley I.

    2005-01-01

    It is very important to determine if a bilingual child's language delay is simply in English or also in the child's native language. Understandably, many children have higher levels of language development in the language spoken at home. To discover if this is the case, observe the child talking with his parents. Sometimes, even without…

  7. Language, games and the role of interpreters in psychiatric diagnosis: a Wittgensteinian thought experiment.

    Science.gov (United States)

    Thomas, P; Shah, A; Thornton, T

    2009-06-01

    British society is becoming increasingly culturally and linguistically diverse. This poses a major challenge to mental health services charged with the responsibility to work in ways that respect cultural and linguistic difference. In this paper we investigate the problems of interpretation in the diagnosis of depression using a thought experiment to demonstrate important features of language-games, an idea introduced by Ludwig Wittgenstein in his late work, Philosophical investigations. The thought experiment draws attention to the importance of culture and contexts in understanding the meaning of particular utterances. This has implications not only for how we understand the role of interpreters in clinical settings, and who might best be suited to function in such a role, but more generally it draws attention to the importance of involving members of black minority ethnic (BME) communities in working alongside mainstream mental health services. We conclude that the involvement of BME community development workers inside, alongside and outside statutory services can potentially improve the quality of care for people from BME communities who use these services.

  8. Language and the origin of numerical concepts.

    Science.gov (United States)

    Gelman, Rochel; Gallistel, C R

    2004-10-15

    Reports of research with the Pirahã and Mundurukú Amazonian Indians of Brazil lend themselves to discussions of the role of language in the origin of numerical concepts. The research findings indicate that, whether or not humans have an extensive counting list, they share with nonverbal animals a language-independent representation of number, with limited, scale-invariant precision. What causal role, then, does knowledge of the language of counting serve? We consider the strong Whorfian proposal, that of linguistic determinism; the weak Whorfian hypothesis, that language influences how we think; and that the "language of thought" maps to spoken language or symbol systems.

  9. Development of brain networks involved in spoken word processing of Mandarin Chinese.

    Science.gov (United States)

    Cao, Fan; Khalid, Kainat; Lee, Rebecca; Brennan, Christine; Yang, Yanhui; Li, Kuncheng; Bolger, Donald J; Booth, James R

    2011-08-01

    Developmental differences in phonological and orthographic processing of Chinese spoken words were examined in 9-year-olds, 11-year-olds and adults using functional magnetic resonance imaging (fMRI). Rhyming and spelling judgments were made to two-character words presented sequentially in the auditory modality. Developmental comparisons between adults and both groups of children combined showed that age-related changes in activation in visuo-orthographic regions depended on a task. There were developmental increases in the left inferior temporal gyrus and the right inferior occipital gyrus in the spelling task, suggesting more extensive visuo-orthographic processing in a task that required access to these representations. Conversely, there were developmental decreases in activation in the left fusiform gyrus and left middle occipital gyrus in the rhyming task, suggesting that the development of reading is marked by reduced involvement of orthography in a spoken language task that does not require access to these orthographic representations. Developmental decreases may arise from the existence of extensive homophony (auditory words that have multiple spellings) in Chinese. In addition, we found that 11-year-olds and adults showed similar activation in the left superior temporal gyrus across tasks, with both groups showing greater activation than 9-year-olds. This pattern suggests early development of perceptual representations of phonology. In contrast, 11-year-olds and 9-year-olds showed similar activation in the left inferior frontal gyrus across tasks, with both groups showing weaker activation than adults. This pattern suggests late development of controlled retrieval and selection of lexical representations. Altogether, this study suggests differential effects of character acquisition on development of components of the language network in Chinese as compared to previous reports on alphabetic languages. Published by Elsevier Inc.

  10. Early Sign Language Exposure and Cochlear Implantation Benefits.

    Science.gov (United States)

    Geers, Ann E; Mitchell, Christine M; Warner-Czyz, Andrea; Wang, Nae-Yuh; Eisenberg, Laurie S

    2017-07-01

    Most children with hearing loss who receive cochlear implants (CI) learn spoken language, and parents must choose early on whether to use sign language to accompany speech at home. We address whether parents' use of sign language before and after CI positively influences auditory-only speech recognition, speech intelligibility, spoken language, and reading outcomes. Three groups of children with CIs from a nationwide database who differed in the duration of early sign language exposure provided in their homes were compared in their progress through elementary grades. The groups did not differ in demographic, auditory, or linguistic characteristics before implantation. Children without early sign language exposure achieved better speech recognition skills over the first 3 years postimplant and exhibited a statistically significant advantage in spoken language and reading near the end of elementary grades over children exposed to sign language. Over 70% of children without sign language exposure achieved age-appropriate spoken language compared with only 39% of those exposed for 3 or more years. Early speech perception predicted speech intelligibility in middle elementary grades. Children without sign language exposure produced speech that was more intelligible (mean = 70%) than those exposed to sign language (mean = 51%). This study provides the most compelling support yet available in CI literature for the benefits of spoken language input for promoting verbal development in children implanted by 3 years of age. Contrary to earlier published assertions, there was no advantage to parents' use of sign language either before or after CI. Copyright © 2017 by the American Academy of Pediatrics.

  11. Language Brokering and Self-Concept: An Exploratory Study of Latino Students' Experiences in Middle and High School

    Science.gov (United States)

    Niehaus, Kate; Kumpiene, Gerda

    2014-01-01

    This exploratory study examined the relationships among individual characteristics, language brokering experiences and attitudes, and multiple dimensions of self-concept among a sample of Latino adolescents. The sample was comprised of 66 Latino students in 6th through 11th grades who were proficient in both Spanish and English. Results from…

  12. Pre-Service and In-Service Teachers' Experiences of Learning to Program in an Object-Oriented Language

    Science.gov (United States)

    Govender, I.; Grayson, D. J.

    2008-01-01

    This paper presents the results of an investigation into the various ways in which pre-service and in-service teachers experience learning to program in an object-oriented language. Both groups of teachers were enrolled in university courses. In most cases, the pre-service teachers were learning to program for the first time, while the in-service…

  13. Modeling Longitudinal Changes in Older Adults’ Memory for Spoken Discourse: Findings from the ACTIVE Cohort

    Science.gov (United States)

    Payne, Brennan R.; Gross, Alden L.; Parisi, Jeanine M.; Sisco, Shannon M.; Stine-Morrow, Elizabeth A. L.; Marsiske, Michael; Rebok, George W.

    2014-01-01

    Episodic memory shows substantial declines with advancing age, but research on longitudinal trajectories of spoken discourse memory (SDM) in older adulthood is limited. Using parallel process latent growth curve models, we examined 10 years of longitudinal data from the no-contact control group (N = 698) of the Advanced Cognitive Training for Independent and Vital Elderly (ACTIVE) randomized controlled trial in order to test (a) the degree to which SDM declines with advancing age, (b) predictors of these age-related declines, and (c) the within-person relationship between longitudinal changes in SDM and longitudinal changes in fluid reasoning and verbal ability over 10 years, independent of age. Individuals who were younger, White, had more years of formal education, were male, and had better global cognitive function and episodic memory performance at baseline demonstrated greater levels of SDM on average. However, only age at baseline uniquely predicted longitudinal changes in SDM, such that declines accelerated with greater age. Independent of age, within-person decline in reasoning ability over the 10-year study period was substantially correlated with decline in SDM (r = .87). An analogous association with SDM did not hold for verbal ability. The findings suggest that longitudinal declines in fluid cognition are associated with reduced spoken language comprehension. Unlike findings from memory for written prose, preserved verbal ability may not protect against developmental declines in memory for speech. PMID:24304364

  14. METONYMY BASED ON CULTURAL BACKGROUND KNOWLEDGE AND PRAGMATIC INFERENCING: EVIDENCE FROM SPOKEN DISCOURSE

    Directory of Open Access Journals (Sweden)

    Arijana Krišković

    2009-01-01

    Full Text Available The characterization of metonymy as a conceptual tool for guiding inferencing in language has opened a new field of study in cognitive linguistics and pragmatics. To appreciate the value of metonymy for pragmatic inferencing, metonymy should not be viewed as performing only its prototypical referential function. Metonymic mappings are operative in speech acts at the level of reference, predication, proposition and illocution. The aim of this paper is to study the role of metonymy in pragmatic inferencing in spoken discourse in television interviews. Case analyses of authentic utterances classified as illocutionary metonymies, following the pragmatic typology of metonymic functions, are presented. The inferencing processes are facilitated by metonymic connections existing between domains or subdomains in the same functional domain. It has been widely accepted by cognitive linguists that universal human knowledge and embodiment are essential for the interpretation of metonymy. This analysis points to the role of cultural background knowledge in understanding target meanings. All these aspects of metonymic connections are exploited in complex inferential processes in spoken discourse. In most cases, metaphoric mappings are also a part of utterance interpretation.

  15. Brain-to-text: Decoding spoken phrases from phone representations in the brain

    Directory of Open Access Journals (Sweden)

    Christian eHerff

    2015-06-01

    Full Text Available It has long been speculated whether communication between humans and machines based on natural speech related cortical activity is possible. Over the past decade, studies have suggested that it is feasible to recognize isolated aspects of speech from neural signals, such as auditory features, phones or one of a few isolated words. However, until now it remained an unsolved challenge to decode continuously spoken speech from the neural substrate associated with speech and language processing. Here, we show for the first time that continuously spoken speech can be decoded into the expressed words from intracranial electrocorticographic (ECoG) recordings. Specifically, we implemented a system, which we call Brain-To-Text, that models single phones, employs techniques from automatic speech recognition (ASR), and thereby transforms brain activity while speaking into the corresponding textual representation. Our results demonstrate that our system achieved word error rates as low as 25% and phone error rates below 50%. Additionally, our approach contributes to the current understanding of the neural basis of continuous speech production by identifying those cortical regions that hold substantial information about individual phones. In conclusion, the Brain-To-Text system described in this paper represents an important step towards human-machine communication based on imagined speech.
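
    The word and phone error rates reported above are standard edit-distance metrics from ASR evaluation. The snippet below is an illustrative, self-contained way to compute a word error rate between a reference transcript and a decoded hypothesis; it is not part of the Brain-To-Text system itself.

        # Illustrative only: word error rate (WER) as used in ASR evaluation,
        # computed as (substitutions + deletions + insertions) / reference length.
        def word_error_rate(reference: str, hypothesis: str) -> float:
            ref, hyp = reference.split(), hypothesis.split()
            # Dynamic-programming edit distance between the two word sequences.
            d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
            for i in range(len(ref) + 1):
                d[i][0] = i
            for j in range(len(hyp) + 1):
                d[0][j] = j
            for i in range(1, len(ref) + 1):
                for j in range(1, len(hyp) + 1):
                    cost = 0 if ref[i - 1] == hyp[j - 1] else 1
                    d[i][j] = min(d[i - 1][j] + 1,        # deletion
                                  d[i][j - 1] + 1,        # insertion
                                  d[i - 1][j - 1] + cost) # substitution / match
            return d[len(ref)][len(hyp)] / max(len(ref), 1)

        print(word_error_rate("the brain decodes spoken words", "the brain decoded words"))  # 0.4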

  16. Directionality effects in simultaneous language interpreting: the case of sign language interpreters in The Netherlands.

    Science.gov (United States)

    Van Dijk, Rick; Boers, Eveline; Christoffels, Ingrid; Hermans, Daan

    2011-01-01

    The quality of interpretations produced by sign language interpreters was investigated. Twenty-five experienced interpreters were instructed to interpret narratives from (a) spoken Dutch to Sign Language of The Netherlands (SLN), (b) spoken Dutch to Sign Supported Dutch (SSD), and (c) SLN to spoken Dutch. The quality of the interpreted narratives was assessed by 5 certified sign language interpreters who did not participate in the study. Two measures were used to assess interpreting quality: the propositional accuracy of the interpreters' interpretations and a subjective quality measure. The results showed that the interpreted narratives in the SLN-to-Dutch interpreting direction were of lower quality (on both measures) than the interpreted narratives in the Dutch-to-SLN and Dutch-to-SSD directions. Furthermore, interpreters who had begun acquiring SLN when they entered the interpreter training program performed as well in all 3 interpreting directions as interpreters who had acquired SLN from birth.

  17. Parental Involvement and English Language Teaching to Young Learners: Parents' Experience in Aceh

    OpenAIRE

    Wati, Shafrida

    2015-01-01

    Interest in teaching English to young learners has increased rapidly, since the language has significant influence in the modern world. English is strongly associated with social and economic power in the context of globalization. Introducing English earlier offers opportunities to awaken the learners' enthusiasm and curiosity about the language, to achieve a native-like accent, and to enable them to learn the language easily at further levels. However, there are controversies, particularly about th...

  18. The determinants of spoken and written picture naming latencies.

    Science.gov (United States)

    Bonin, Patrick; Chalard, Marylène; Méot, Alain; Fayol, Michel

    2002-02-01

    The influence of nine variables on the latencies to write down or to speak aloud the names of pictures taken from Snodgrass and Vanderwart (1980) was investigated in French adults. The major determinants of both written and spoken picture naming latencies were image variability, image agreement and age of acquisition. To a lesser extent, name agreement was also found to have an impact in both production modes. The implications of the findings for theoretical views of both spoken and written picture naming are discussed.
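
    As a hedged illustration of how the influence of such item-level predictors on naming latencies is typically quantified, the snippet below fits an ordinary multiple regression; the file and variable names (picture_norms.csv, latency_ms, image_variability, image_agreement, age_of_acquisition, name_agreement) are hypothetical stand-ins for the norms analysed in the study.

        # Illustrative sketch (not the authors' analysis): regress naming latencies
        # on a few item-level predictors with ordinary least squares.
        import pandas as pd
        import statsmodels.formula.api as smf

        items = pd.read_csv("picture_norms.csv")  # hypothetical file of item-level norms

        ols = smf.ols(
            "latency_ms ~ image_variability + image_agreement + age_of_acquisition + name_agreement",
            data=items,
        ).fit()
        print(ols.summary())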

  19. Rhythm in language acquisition.

    Science.gov (United States)

    Langus, Alan; Mehler, Jacques; Nespor, Marina

    2017-10-01

    Spoken language is governed by rhythm. Linguistic rhythm is hierarchical and the rhythmic hierarchy partially mimics the prosodic as well as the morpho-syntactic hierarchy of spoken language. It can thus provide learners with cues about the structure of the language they are acquiring. We identify three universal levels of linguistic rhythm - the segmental level, the level of the metrical feet and the phonological phrase level - and discuss why primary lexical stress is not rhythmic. We survey experimental evidence on rhythm perception in young infants and native speakers of various languages to determine the properties of linguistic rhythm that are present at birth, those that mature during the first year of life and those that are shaped by the linguistic environment of language learners. We conclude with a discussion of the major gaps in current knowledge on linguistic rhythm and highlight areas of interest for future research that are most likely to yield significant insights into the nature, the perception, and the usefulness of linguistic rhythm. Copyright © 2016 Elsevier Ltd. All rights reserved.

  20. Experience in the design, implementation and use of PL-11, a programming language for the PDP-11

    CERN Document Server

    Russell, R D

    1976-01-01

    PL-11 is a programming language for the PDP-11 family of computers designed and implemented as part of the OMEGA Project at CERN (the European Organization for Nuclear Research). Its purpose is to provide an effective tool for both physicists and systems programmers to use in building real time data acquisition systems that are online to high-energy physics experiments. It is a fairly typical member of the PL-class of programming languages which are based on the initial design of PL360. (44 refs).