WorldWideScience

Sample records for spoken language interaction

  1. A real-time spoken-language system for interactive problem-solving, combining linguistic and statistical technology for improved spoken language understanding

    Science.gov (United States)

    Moore, Robert C.; Cohen, Michael H.

    1993-09-01

    Under this effort, SRI has developed spoken-language technology for interactive problem solving, featuring real-time performance for up to several thousand word vocabularies, high semantic accuracy, habitability within the domain, and robustness to many sources of variability. Although the technology is suitable for many applications, efforts to date have focused on developing an Air Travel Information System (ATIS) prototype application. SRI's ATIS system has been evaluated in four ARPA benchmark evaluations, and has consistently been at or near the top in performance. These achievements are the result of SRI's technical progress in speech recognition, natural-language processing, and speech and natural-language integration.

  2. ELSIE: The Quick Reaction Spoken Language Translation (QRSLT)

    National Research Council Canada - National Science Library

    Montgomery, Christine

    2000-01-01

    The objective of this effort was to develop a prototype, hand-held or body-mounted spoken language translator to assist military and law enforcement personnel in interacting with non-English-speaking people...

  3. Web-based mini-games for language learning that support spoken interaction

    CSIR Research Space (South Africa)

    Strik, H

    2015-09-01

    The European ‘Lifelong Learning Programme’ (LLP) project ‘Games Online for Basic Language learning’ (GOBL) aimed to provide youths and adults wishing to improve their basic language skills with access to materials for the development of communicative...

  4. Professionals' Guidance about Spoken Language Multilingualism and Spoken Language Choice for Children with Hearing Loss

    Science.gov (United States)

    Crowe, Kathryn; McLeod, Sharynne

    2016-01-01

    The purpose of this research was to investigate factors that influence professionals' guidance of parents of children with hearing loss regarding spoken language multilingualism and spoken language choice. Sixteen professionals who provide services to children and young people with hearing loss completed an online survey, rating the importance of…

  5. Spoken Grammar and Its Role in the English Language Classroom

    Science.gov (United States)

    Hilliard, Amanda

    2014-01-01

    This article addresses key issues and considerations for teachers wanting to incorporate spoken grammar activities into their own teaching and also focuses on six common features of spoken grammar, with practical activities and suggestions for teaching them in the language classroom. The hope is that this discussion of spoken grammar and its place…

  6. Speech-Language Pathologists: Vital Listening and Spoken Language Professionals

    Science.gov (United States)

    Houston, K. Todd; Perigoe, Christina B.

    2010-01-01

    Determining the most effective methods and techniques to facilitate the spoken language development of individuals with hearing loss has been a focus of practitioners for centuries. Due to modern advances in hearing technology, earlier identification of hearing loss, and immediate enrollment in early intervention, children with hearing loss are…

  7. Deep bottleneck features for spoken language identification.

    Directory of Open Access Journals (Sweden)

    Bing Jiang

    A key problem in spoken language identification (LID) is to design effective representations which are specific to language information. For example, in recent years, representations based on both phonotactic and acoustic features have proven their effectiveness for LID. Although advances in machine learning have led to significant improvements, LID performance is still lacking, especially for short duration speech utterances. With the hypothesis that language information is weak and represented only latently in speech, and is largely dependent on the statistical properties of the speech content, existing representations may be insufficient. Furthermore, they may be susceptible to the variations caused by different speakers, specific content of the speech segments, and background noise. To address this, we propose using Deep Bottleneck Features (DBF) for spoken LID, motivated by the success of Deep Neural Networks (DNN) in speech recognition. We show that DBFs can form a low-dimensional compact representation of the original inputs with a powerful descriptive and discriminative capability. To evaluate the effectiveness of this, we design two acoustic models, termed DBF-TV and parallel DBF-TV (PDBF-TV), using a DBF-based i-vector representation for each speech utterance. Results on the NIST language recognition evaluation 2009 (LRE09) show significant improvements over state-of-the-art systems. By fusing the output of phonotactic and acoustic approaches, we achieve an EER of 1.08%, 1.89% and 7.01% for 30 s, 10 s and 3 s test utterances respectively. Furthermore, various DBF configurations have been extensively evaluated, and an optimal system is proposed.
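
The EER figures quoted in this abstract are equal error rates: the operating point where the false-acceptance rate equals the false-rejection rate. As a hedged, minimal sketch (not the authors' evaluation code), the computation over pooled target and non-target scores can be illustrated as:

```python
import numpy as np

def compute_eer(target_scores, nontarget_scores):
    # Sweep every observed score as a candidate threshold; the EER is
    # where the false-acceptance rate (non-targets scoring at or above
    # the threshold) meets the false-rejection rate (targets below it).
    thresholds = np.sort(np.concatenate([target_scores, nontarget_scores]))
    far = np.array([(nontarget_scores >= t).mean() for t in thresholds])
    frr = np.array([(target_scores < t).mean() for t in thresholds])
    i = np.argmin(np.abs(far - frr))
    return (far[i] + frr[i]) / 2.0

# Well-separated score distributions yield an EER of zero.
eer = compute_eer(np.array([2.0, 3.0, 4.0]), np.array([-1.0, 0.0, 1.0]))
```

Real LRE-style scoring averages error rates over language pairs, but the per-pair computation follows this pattern.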

  8. Spoken Indian language identification: a review of features and ...

    Indian Academy of Sciences (India)

    BAKSHI AARTI

    2018-04-12

  9. Using Spoken Language to Facilitate Military Transportation Planning

    National Research Council Canada - National Science Library

    Bates, Madeleine; Ellard, Dan; Peterson, Pat; Shaked, Varda

    1991-01-01

    … In an effort to demonstrate the relevance of SIS technology to real-world military applications, BBN has undertaken the task of providing a spoken language interface to DART, a system for military...

  10. CROATIAN ADULT SPOKEN LANGUAGE CORPUS (HrAL)

    Directory of Open Access Journals (Sweden)

    Jelena Kuvač Kraljević

    2016-01-01

    Interest in spoken-language corpora has increased over the past two decades, leading to the development of new corpora and the discovery of new facets of spoken language. These types of corpora represent the most comprehensive data source about the language of ordinary speakers. Such corpora are based on spontaneous, unscripted speech defined by a variety of styles, registers and dialects. The aim of this paper is to present the Croatian Adult Spoken Language Corpus (HrAL), its structure and its possible applications in different linguistic subfields. HrAL was built by sampling spontaneous conversations among 617 speakers from all Croatian counties, and it comprises more than 250,000 tokens and more than 100,000 types. Data were collected during three time slots: from 2010 to 2012, from 2014 to 2015 and during 2016. HrAL is today available within TalkBank, a large database of spoken-language corpora covering different languages (https://talkbank.org), in the Conversational Analyses corpora within the subsection titled Conversational Banks. Data were transcribed, coded and segmented using the transcription format Codes for Human Analysis of Transcripts (CHAT) and the Computerised Language Analysis (CLAN) suite of programmes within the TalkBank toolkit. Speech streams were segmented into communication units (C-units) based on syntactic criteria. Most transcripts were linked to their source audio recordings. TalkBank is public and free; all data stored in it can be shared by the wider community in accordance with its basic rules. HrAL provides information about spoken grammar and lexicon, discourse skills, error production and productivity in general. It may be useful for sociolinguistic research and studies of synchronic language changes in Croatian.
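
The CHAT transcription format mentioned above follows simple line conventions: @ lines carry metadata, * lines carry main speaker utterances, and % lines carry dependent coding tiers. A minimal, hypothetical reader (not the CLAN toolkit itself) shows how token and type counts of the kind reported for HrAL can be derived:

```python
def read_chat_utterances(lines):
    # Keep only main speaker tiers of the form "*SPK: utterance",
    # skipping @ metadata headers and % dependent tiers.
    utts = []
    for line in lines:
        if line.startswith("*"):
            speaker, _, text = line.partition(":")
            utts.append((speaker.lstrip("*").strip(), text.strip()))
    return utts

# Toy transcript; speaker codes and utterances are invented for illustration.
sample = [
    "@Participants: SPA Speaker_A, SPB Speaker_B",
    "*SPA: dobar dan .",
    "%com: greeting",
    "*SPB: dobar dan , kako ste ?",
]
utterances = read_chat_utterances(sample)
tokens = [w for _, text in utterances for w in text.split()]
types = set(tokens)  # 9 tokens, 7 distinct types in this toy sample
```

CLAN performs far richer analyses (morphological coding, C-unit segmentation, audio linkage); this sketch covers only the tier structure.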

  11. SPOKEN-LANGUAGE FEATURES IN CASUAL CONVERSATION: A Case of EFL Learners’ Casual Conversation

    Directory of Open Access Journals (Sweden)

    Aris Novi

    2017-12-01

    Spoken text differs from written text in its context dependency, turn-taking organization, and dynamic structure. EFL learners, however, sometimes find it difficult to produce the typical characteristics of spoken language, particularly in casual talk. When they are asked to conduct a conversation, some of them tend to be script-based, which is considered unnatural. Using the theory of Thornbury (2005), this paper aims to analyze characteristics of spoken language in casual conversation, covering spontaneity, interactivity, interpersonality, and coherence. This study used discourse analysis to reveal the four features in the turns and moves of three casual conversations. The findings indicate that not all sub-features were used in the conversations. In this case, the spontaneity features were used 132 times; the interactivity features were used 1081 times; the interpersonality features were used 257 times; while the coherence features (negotiation features) were used 526 times. The results also show that some participants produce certain sub-features naturally and dominantly, while others do not. This finding is expected to provide a model of how spoken interaction should be carried out. More importantly, it could raise English teachers' and lecturers' awareness of teaching the features of spoken language, so that students can develop their communicative competence as native speakers of English do.

  12. On-Line Syntax: Thoughts on the Temporality of Spoken Language

    Science.gov (United States)

    Auer, Peter

    2009-01-01

    One fundamental difference between spoken and written language has to do with the "linearity" of speaking in time, in that the temporal structure of speaking is inherently the outcome of an interactive process between speaker and listener. But despite the status of "linearity" as one of Saussure's fundamental principles, in practice little more…

  13. Spoken language outcomes after hemispherectomy: factoring in etiology.

    Science.gov (United States)

    Curtiss, S; de Bode, S; Mathern, G W

    2001-12-01

    We analyzed postsurgery linguistic outcomes of 43 hemispherectomy patients operated on at UCLA. We rated spoken language (Spoken Language Rank, SLR) on a scale from 0 (no language) to 6 (mature grammar) and examined the effects of side of resection/damage, age at surgery/seizure onset, seizure control postsurgery, and etiology on language development. Etiology was defined as developmental (cortical dysplasia and prenatal stroke) and acquired pathology (Rasmussen's encephalitis and postnatal stroke). We found that clinical variables were predictive of language outcomes only when they were considered within distinct etiology groups. Specifically, children with developmental etiologies had lower SLRs than those with acquired pathologies (p =.0006); age factors correlated positively with higher SLRs only for children with acquired etiologies (p =.0006); right-sided resections led to higher SLRs only for the acquired group (p =.0008); and postsurgery seizure control correlated positively with SLR only for those with developmental etiologies (p =.0047). We argue that the variables considered are not independent predictors of spoken language outcome posthemispherectomy but should be viewed instead as characteristics of etiology. Copyright 2001 Elsevier Science.

  14. Prosodic Parallelism – comparing spoken and written language

    Directory of Open Access Journals (Sweden)

    Richard Wiese

    2016-10-01

    The Prosodic Parallelism hypothesis claims that adjacent prosodic categories prefer identical branching of internal adjacent constituents. According to Wiese and Speyer (2015), this preference implies that feet contained in the same phonological phrase display either binary or unary branching, but not different types of branching. The seemingly free schwa-zero alternations at the end of some words in German make it possible to test this hypothesis. The hypothesis was successfully tested in a corpus study using large-scale bodies of written German. As some open questions remain, and as it is unclear whether Prosodic Parallelism is valid for the spoken modality as well, the present study extends this inquiry to spoken German. As in the previous study, the results of a corpus analysis recruiting a variety of linguistic constructions are presented. The Prosodic Parallelism hypothesis can be demonstrated to be valid for spoken German as well as for written German. The paper thus contributes to the question of whether prosodic preferences are similar between the spoken and written modes of a language. Some consequences of the results for the production of language are discussed.

  15. Cohesion as interaction in ELF spoken discourse

    Directory of Open Access Journals (Sweden)

    T. Christiansen

    2013-10-01

    Hitherto, most research into cohesion has concentrated on texts (usually written) only in standard Native Speaker English, e.g. Halliday and Hasan (1976). By contrast, following on the work in anaphora of such scholars as Reinhart (1983) and Cornish (1999), Christiansen (2011) describes cohesion as an interactive process, focusing on the link between text cohesion and discourse coherence. Such a consideration of cohesion from the perspective of discourse (i.e. the process of which text is the product; Widdowson 1984, p. 100) is especially relevant within a lingua franca context, as the issue of different variations of ELF and inter-cultural concerns (Guido 2008) add extra dimensions to the complex multi-code interaction. In this case study, six extracts of transcripts (approximately 1000 words each), taken from the VOICE corpus (2011) of conference question-and-answer sessions (spoken interaction) set in multicultural university contexts, are analysed in depth by means of a qualitative method.

  16. Inferring Speaker Affect in Spoken Natural Language Communication

    OpenAIRE

    Pon-Barry, Heather Roberta

    2012-01-01

    The field of spoken language processing is concerned with creating computer programs that can understand human speech and produce human-like speech. Regarding the problem of understanding human speech, there is currently growing interest in moving beyond speech recognition (the task of transcribing the words in an audio stream) and towards machine listening—interpreting the full spectrum of information in an audio stream. One part of machine listening, the problem that this thesis focuses on, ...

  17. Spoken Language Understanding Systems for Extracting Semantic Information from Speech

    CERN Document Server

    Tur, Gokhan

    2011-01-01

    Spoken language understanding (SLU) is an emerging field between speech and language processing, investigating human/machine and human/human communication by leveraging technologies from signal processing, pattern recognition, machine learning and artificial intelligence. SLU systems are designed to extract the meaning from speech utterances, and their applications are vast, from voice search in mobile devices to meeting summarization, attracting interest from both commercial and academic sectors. Both human/machine and human/human communications can benefit from the application of SLU, using…

  18. Spoken Spanish Language Development at the High School Level: A Mixed-Methods Study

    Science.gov (United States)

    Moeller, Aleidine J.; Theiler, Janine

    2014-01-01

    Communicative approaches to teaching language have emphasized the centrality of oral proficiency in the language acquisition process, but research investigating oral proficiency has been surprisingly limited, yielding an incomplete understanding of spoken language development. This study investigated the development of spoken language at the high…

  19. Rethinking spoken fluency

    OpenAIRE

    McCarthy, Michael

    2009-01-01

    This article re-examines the notion of spoken fluency. Fluent and fluency are terms commonly used in everyday, lay language, and fluency, or lack of it, has social consequences. The article reviews the main approaches to understanding and measuring spoken fluency, suggests that spoken fluency is best understood as an interactive achievement, and offers the metaphor of ‘confluence’ to replace the term fluency. Many measures of spoken fluency are internal and monologue-based, whereas evidence...

  20. Asian/Pacific Islander Languages Spoken by English Learners (ELs). Fast Facts

    Science.gov (United States)

    Office of English Language Acquisition, US Department of Education, 2015

    2015-01-01

    The Office of English Language Acquisition (OELA) has synthesized key data on English learners (ELs) into two-page PDF sheets, by topic, with graphics, plus key contacts. The topics for this report on Asian/Pacific Islander languages spoken by English Learners (ELs) include: (1) Top 10 Most Common Asian/Pacific Islander Languages Spoken Among ELs:…

  1. Native Language Spoken as a Risk Marker for Tooth Decay.

    Science.gov (United States)

    Carson, J; Walker, L A; Sanders, B J; Jones, J E; Weddell, J A; Tomlin, A M

    2015-01-01

    The purpose of this study was to assess dmft, the number of decayed, missing (due to caries), and/or filled primary teeth, of English-speaking and non-English-speaking patients of a hospital-based pediatric dental clinic under the age of 72 months, to determine if native language is a risk marker for tooth decay. Records from an outpatient dental clinic which met the inclusion criteria were reviewed. Patient demographics and dmft score were recorded, and the patients were separated into three groups by the native language spoken by their parents: English, Spanish and all other languages. A total of 419 charts were assessed: 253 English-speaking, 126 Spanish-speaking, and 40 other native languages. After accounting for patient characteristics, dmft was significantly higher for the other-language group than for the English-speaking group (p < 0.05). Those patients under 72 months of age whose parents' native language is not English or Spanish have the highest risk for increased dmft when compared to English- and Spanish-speaking patients. Providers should consider taking additional time to educate patients and their parents, in their native language, on the importance of routine dental care and oral hygiene.

  2. Pointing and Reference in Sign Language and Spoken Language: Anchoring vs. Identifying

    Science.gov (United States)

    Barberà, Gemma; Zwets, Martine

    2013-01-01

    In both signed and spoken languages, pointing serves to direct an addressee's attention to a particular entity. This entity may be either present or absent in the physical context of the conversation. In this article we focus on pointing directed to nonspeaker/nonaddressee referents in Sign Language of the Netherlands (Nederlandse Gebarentaal,…

  3. Error detection in spoken human-machine interaction

    NARCIS (Netherlands)

    Krahmer, E.J.; Swerts, M.G.J.; Theune, M.; Weegels, M.F.

    2001-01-01

    Given the state of the art of current language and speech technology, errors are unavoidable in present-day spoken dialogue systems. Therefore, one of the main concerns in dialogue design is how to decide whether or not the system has understood the user correctly. In human-human communication,

  4. Error detection in spoken human-machine interaction

    NARCIS (Netherlands)

    Krahmer, E.; Swerts, M.; Theune, Mariet; Weegels, M.

    Given the state of the art of current language and speech technology, errors are unavoidable in present-day spoken dialogue systems. Therefore, one of the main concerns in dialogue design is how to decide whether or not the system has understood the user correctly. In human-human communication,

  5. The Listening and Spoken Language Data Repository: Design and Project Overview

    Science.gov (United States)

    Bradham, Tamala S.; Fonnesbeck, Christopher; Toll, Alice; Hecht, Barbara F.

    2018-01-01

    Purpose: The purpose of the Listening and Spoken Language Data Repository (LSL-DR) was to address a critical need for a systemwide outcome data-monitoring program for the development of listening and spoken language skills in highly specialized educational programs for children with hearing loss highlighted in Goal 3b of the 2007 Joint Committee…

  6. Development of Mandarin spoken language after pediatric cochlear implantation.

    Science.gov (United States)

    Li, Bei; Soli, Sigfrid D; Zheng, Yun; Li, Gang; Meng, Zhaoli

    2014-07-01

    The purpose of this study was to evaluate early spoken language development in young Mandarin-speaking children during the first 24 months after cochlear implantation, as measured by receptive and expressive vocabulary growth rates. Growth rates were compared with those of normally hearing children and with growth rates for English-speaking children with cochlear implants. Receptive and expressive vocabularies were measured with the simplified short form (SSF) version of the Mandarin Communicative Development Inventory (MCDI) in a sample of 112 pediatric implant recipients at baseline, 3, 6, 12, and 24 months after implantation. Implant ages ranged from 1 to 5 years. Scores were expressed in terms of normal-equivalent ages, allowing normalized vocabulary growth rates to be determined. Scores for English-speaking children were re-expressed in these terms, allowing direct comparisons of Mandarin and English early spoken language development. Vocabulary growth rates during the first 12 months after implantation were similar to those for normally hearing children less than 16 months of age. Comparisons with growth rates for normally hearing children 16-30 months of age showed that the youngest implant age group (1-2 years) had an average growth rate 0.68 times that of normally hearing children, the middle implant age group (2-3 years) had an average growth rate of 0.65, and the oldest implant age group (>3 years) had an average growth rate of 0.56, significantly less than the other two rates. Growth rates for English-speaking children with cochlear implants were 0.68 in the youngest group, 0.54 in the middle group, and 0.57 in the oldest group. Growth rates in the middle implant age groups for the two languages differed significantly. The SSF version of the MCDI is suitable for assessment of Mandarin language development during the first 24 months after cochlear implantation. Effects of implant age and duration of implantation can be compared directly across…
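
The normalized growth rates reported in this record are ratios of normal-equivalent age gained to chronological time elapsed, so a rate of 1.0 matches the pace of normally hearing peers. A hedged sketch of that arithmetic (the function name and figures are illustrative, not taken from the study):

```python
def normalized_growth_rate(equiv_age_start, equiv_age_end, months_elapsed):
    # Normal-equivalent months of vocabulary age gained per chronological month.
    return (equiv_age_end - equiv_age_start) / months_elapsed

# A hypothetical recipient whose normal-equivalent vocabulary age rises from
# 12.0 to 20.16 months over 12 months of follow-up grows at 0.68 of the
# normal rate, matching the youngest implant group's reported figure.
rate = normalized_growth_rate(12.0, 20.16, 12.0)
```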

  7. Language and Culture in the Multiethnic Community: Spoken Language Assessment.

    Science.gov (United States)

    Matluck, Joseph H.; Mace-Matluck, Betty J.

    This paper discusses the sociolinguistic problems inherent in multilingual testing, and the accompanying dangers of cultural bias in either the visuals or the language used in a given test. The first section discusses English-speaking Americans' perception of foreign speakers in terms of: (1) physical features; (2) speech, specifically vocabulary,…

  8. Foreign Language Tutoring in Oral Conversations Using Spoken Dialog Systems

    Science.gov (United States)

    Lee, Sungjin; Noh, Hyungjong; Lee, Jonghoon; Lee, Kyusong; Lee, Gary Geunbae

    Although there have been enormous investments in English education all around the world, little has changed in the style of English instruction. Considering the shortcomings of the current teaching-learning methodology, we have been investigating advanced computer-assisted language learning (CALL) systems. This paper aims at summarizing a set of POSTECH approaches, including theories, technologies, systems, and field studies, and providing relevant pointers. On top of state-of-the-art spoken dialog system technology, a variety of adaptations have been applied to overcome problems caused by the numerous errors and variations naturally produced by non-native speakers. Furthermore, a number of methods have been developed for generating educational feedback that helps learners become proficient. Integrating these efforts resulted in intelligent educational robots, Mero and Engkey, and virtual 3D language learning games, Pomy. To verify the effects of our approaches on students' communicative abilities, we conducted a field study at an elementary school in Korea. The results showed that our CALL approaches can be enjoyable and fruitful activities for students. Although the results of this study bring us a step closer to understanding computer-based education, more studies are needed to consolidate the findings.

  9. Auditory and verbal memory predictors of spoken language skills in children with cochlear implants

    NARCIS (Netherlands)

    Hoog, B.E. de; Langereis, M.C.; Weerdenburg, M. van; Keuning, J.; Knoors, H.; Verhoeven, L.

    2016-01-01

    BACKGROUND: Large variability in individual spoken language outcomes remains a persistent finding in the group of children with cochlear implants (CIs), particularly in their grammatical development. AIMS: In the present study, we examined the extent of delay in lexical and morphosyntactic spoken

  10. Auditory and verbal memory predictors of spoken language skills in children with cochlear implants

    NARCIS (Netherlands)

    Hoog, B.E. de; Langereis, M.C.; Weerdenburg, M.W.C. van; Keuning, J.; Knoors, H.E.T.; Verhoeven, L.T.W.

    2016-01-01

    Background: Large variability in individual spoken language outcomes remains a persistent finding in the group of children with cochlear implants (CIs), particularly in their grammatical development. Aims: In the present study, we examined the extent of delay in lexical and morphosyntactic spoken

  11. Sign Language and Spoken Language for Children With Hearing Loss: A Systematic Review.

    Science.gov (United States)

    Fitzpatrick, Elizabeth M; Hamel, Candyce; Stevens, Adrienne; Pratt, Misty; Moher, David; Doucet, Suzanne P; Neuss, Deirdre; Bernstein, Anita; Na, Eunjung

    2016-01-01

    Permanent hearing loss affects 1 to 3 per 1000 children and interferes with typical communication development. Early detection through newborn hearing screening and hearing technology provide most children with the option of spoken language acquisition. However, no consensus exists on optimal interventions for spoken language development. To conduct a systematic review of the effectiveness of early sign and oral language intervention compared with oral language intervention only for children with permanent hearing loss. An a priori protocol was developed. Electronic databases (eg, Medline, Embase, CINAHL) from 1995 to June 2013 and gray literature sources were searched. Studies in English and French were included. Two reviewers screened potentially relevant articles. Outcomes of interest were measures of auditory, vocabulary, language, and speech production skills. All data collection and risk of bias assessments were completed and then verified by a second person. Grades of Recommendation, Assessment, Development, and Evaluation (GRADE) was used to judge the strength of evidence. Eleven cohort studies met inclusion criteria, of which 8 included only children with severe to profound hearing loss with cochlear implants. Language development was the most frequently reported outcome. Other reported outcomes included speech and speech perception. Several measures and metrics were reported across studies, and descriptions of interventions were sometimes unclear. Very limited, and hence insufficient, high-quality evidence exists to determine whether sign language in combination with oral language is more effective than oral language therapy alone. More research is needed to supplement the evidence base. Copyright © 2016 by the American Academy of Pediatrics.

  12. Attentional Capture of Objects Referred to by Spoken Language

    Science.gov (United States)

    Salverda, Anne Pier; Altmann, Gerry T. M.

    2011-01-01

    Participants saw a small number of objects in a visual display and performed a visual detection or visual-discrimination task in the context of task-irrelevant spoken distractors. In each experiment, a visual cue was presented 400 ms after the onset of a spoken word. In experiments 1 and 2, the cue was an isoluminant color change and participants…

  13. The employment of a spoken language computer applied to an air traffic control task.

    Science.gov (United States)

    Laveson, J. I.; Silver, C. A.

    1972-01-01

    Assessment of the merits of a limited spoken language (56 words) computer in a simulated air traffic control (ATC) task. An airport zone approximately 60 miles in diameter with a traffic flow simulation ranging from single-engine to commercial jet aircraft provided the workload for the controllers. This research determined that, under the circumstances of the experiments carried out, the use of a spoken-language computer would not improve the controller performance.

  14. Iconicity as a general property of language: evidence from spoken and signed languages

    Directory of Open Access Journals (Sweden)

    Pamela Perniss

    2010-12-01

    Current views about language are dominated by the idea of arbitrary connections between linguistic form and meaning. However, if we look beyond the more familiar Indo-European languages and also include both spoken and signed language modalities, we find that motivated, iconic form-meaning mappings are, in fact, pervasive in language. In this paper, we review the different types of iconic mappings that characterize languages in both modalities, including the predominantly visually iconic mappings in signed languages. Having shown that iconic mappings are present across languages, we then proceed to review evidence showing that language users (signers and speakers) exploit iconicity in language processing and language acquisition. While not discounting the presence and importance of arbitrariness in language, we put forward the idea that iconicity also needs to be recognized as a general property of language, which may serve the function of reducing the gap between linguistic form and conceptual representation, allowing the language system to hook up to motor and perceptual experience.

  15. Spoken Word Recognition in Adolescents with Autism Spectrum Disorders and Specific Language Impairment

    Science.gov (United States)

    Loucas, Tom; Riches, Nick; Baird, Gillian; Pickles, Andrew; Simonoff, Emily; Chandler, Susie; Charman, Tony

    2013-01-01

    Spoken word recognition, during gating, appears intact in specific language impairment (SLI). This study used gating to investigate the process in adolescents with autism spectrum disorders plus language impairment (ALI). Adolescents with ALI, SLI, and typical language development (TLD), matched on nonverbal IQ, listened to gated words that varied…

  16. Semantic Fluency in Deaf Children Who Use Spoken and Signed Language in Comparison with Hearing Peers

    Science.gov (United States)

    Marshall, C. R.; Jones, A.; Fastelli, A.; Atkinson, J.; Botting, N.; Morgan, G.

    2018-01-01

    Background: Deafness has an adverse impact on children's ability to acquire spoken languages. Signed languages offer a more accessible input for deaf children, but because the vast majority are born to hearing parents who do not sign, their early exposure to sign language is limited. Deaf children as a whole are therefore at high risk of language…

  17. Improving Spoken Language Outcomes for Children With Hearing Loss: Data-driven Instruction.

    Science.gov (United States)

    Douglas, Michael

    2016-02-01

    To assess the effects of data-driven instruction (DDI) on spoken language outcomes of children with cochlear implants and hearing aids. Retrospective, matched-pairs comparison of post-treatment speech/language data of children who did and did not receive DDI. Private, spoken-language preschool for children with hearing loss. Eleven matched pairs of children with cochlear implants who attended the same spoken language preschool. Groups were matched for age of hearing device fitting, time in the program, degree of predevice fitting hearing loss, sex, and age at testing. Daily informal language samples were collected and analyzed over a 2-year period, per preschool protocol. Annual informal and formal spoken language assessments in articulation, vocabulary, and omnibus language were administered at the end of three time intervals: baseline, end of year one, and end of year two. The primary outcome measures were total raw score performance of spontaneous utterance sentence types and syntax element use as measured by the Teacher Assessment of Spoken Language (TASL). In addition, standardized assessments (the Clinical Evaluation of Language Fundamentals--Preschool Version 2 (CELF-P2), the Expressive One-Word Picture Vocabulary Test (EOWPVT), the Receptive One-Word Picture Vocabulary Test (ROWPVT), and the Goldman-Fristoe Test of Articulation 2 (GFTA2)) were also administered and compared with the control group. The DDI group demonstrated significantly higher raw scores on the TASL each year of the study. The DDI group also achieved significantly higher scores for total language on the CELF-P2 and expressive vocabulary on the EOWPVT, but not for articulation or receptive vocabulary. Post-hoc assessment revealed that 78% of the students in the DDI group achieved scores in the average range compared with 59% in the control group. The preliminary results of this study support further investigation of DDI to determine whether this method can consistently…

  18. Resting-state low-frequency fluctuations reflect individual differences in spoken language learning

    Science.gov (United States)

    Deng, Zhizhou; Chandrasekaran, Bharath; Wang, Suiping; Wong, Patrick C.M.

    2016-01-01

    A major challenge in language learning studies is to identify objective, pre-training predictors of success. Variation in the low-frequency fluctuations (LFFs) of spontaneous brain activity measured by resting-state functional magnetic resonance imaging (RS-fMRI) has been found to reflect individual differences in cognitive measures. In the present study, we aimed to investigate the extent to which initial spontaneous brain activity is related to individual differences in spoken language learning. We acquired RS-fMRI data and subsequently trained participants on a sound-to-word learning paradigm in which they learned to use foreign pitch patterns (from Mandarin Chinese) to signal word meaning. We performed amplitude of spontaneous low-frequency fluctuation (ALFF) analysis, graph theory-based analysis, and independent component analysis (ICA) to identify functional components of the LFFs in the resting-state. First, we examined the ALFF as a regional measure and showed that regional ALFFs in the left superior temporal gyrus were positively correlated with learning performance, whereas ALFFs in the default mode network (DMN) regions were negatively correlated with learning performance. Furthermore, the graph theory-based analysis indicated that the degree and local efficiency of the left superior temporal gyrus were positively correlated with learning performance. Finally, the default mode network and several task-positive resting-state networks (RSNs) were identified via the ICA. The “competition” (i.e., negative correlation) between the DMN and the dorsal attention network was negatively correlated with learning performance. Our results demonstrate that a) spontaneous brain activity can predict future language learning outcome without prior hypotheses (e.g., selection of regions of interest – ROIs) and b) both regional dynamics and network-level interactions in the resting brain can account for individual differences in future spoken language learning success.
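
The ALFF measure used in this record is, at its core, the mean spectral amplitude of a voxel's time series in a low-frequency band (conventionally 0.01–0.08 Hz; the abstract does not specify the band), which is then correlated with behavior across subjects. Below is a minimal sketch with simulated data; the TR, band, noise levels, and "learning score" are all illustrative, not the study's:

```python
import numpy as np

def alff(ts, tr=2.0, band=(0.01, 0.08)):
    """Amplitude of low-frequency fluctuation: mean FFT amplitude in `band` (Hz)."""
    ts = ts - ts.mean()
    freqs = np.fft.rfftfreq(len(ts), d=tr)
    amp = np.abs(np.fft.rfft(ts)) / len(ts)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return amp[mask].mean()

def pearson_r(x, y):
    """Plain Pearson correlation coefficient."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xz = (x - x.mean()) / x.std()
    yz = (y - y.mean()) / y.std()
    return float((xz * yz).mean())

# Hypothetical demo: subjects whose simulated signal has stronger slow
# fluctuations also get higher "learning" scores, so ALFF should correlate
# positively with outcome across subjects.
rng = np.random.default_rng(0)
tr, n_vols = 2.0, 200
t = np.arange(n_vols) * tr
subjects = []
for _ in range(20):
    gain = rng.uniform(0.5, 2.0)                      # strength of slow fluctuation
    ts = gain * np.sin(2 * np.pi * 0.03 * t) + rng.normal(0, 0.5, n_vols)
    score = gain + rng.normal(0, 0.1)                 # learning outcome tracks gain
    subjects.append((alff(ts, tr), score))
alffs, scores = zip(*subjects)
r = pearson_r(alffs, scores)
print(f"r = {r:.2f}")                                 # expect a clearly positive r
```

The same subject-wise correlation logic underlies the graph-theoretic measures in the abstract; only the per-subject summary statistic changes.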

  20. Tonal Language Background and Detecting Pitch Contour in Spoken and Musical Items

    Science.gov (United States)

    Stevens, Catherine J.; Keller, Peter E.; Tyler, Michael D.

    2013-01-01

    An experiment investigated the effect of tonal language background on discrimination of pitch contour in short spoken and musical items. It was hypothesized that extensive exposure to a tonal language attunes perception of pitch contour. Accuracy and reaction times of adult participants from tonal (Thai) and non-tonal (Australian English) language…

  1. Auditory and verbal memory predictors of spoken language skills in children with cochlear implants.

    Science.gov (United States)

    de Hoog, Brigitte E; Langereis, Margreet C; van Weerdenburg, Marjolijn; Keuning, Jos; Knoors, Harry; Verhoeven, Ludo

    2016-10-01

    Large variability in individual spoken language outcomes remains a persistent finding in the group of children with cochlear implants (CIs), particularly in their grammatical development. In the present study, we examined the extent of delay in lexical and morphosyntactic spoken language levels of children with CIs as compared to those of a normative sample of age-matched children with normal hearing. Furthermore, the predictive value of auditory and verbal memory factors in the spoken language performance of implanted children was analyzed. Thirty-nine profoundly deaf children with CIs were assessed using a test battery including lexical, grammatical, auditory, and verbal memory measures. Furthermore, child-related demographic characteristics were taken into account. The majority of the children with CIs did not reach age-equivalent lexical and morphosyntactic language skills. Multiple linear regression analyses revealed that lexical spoken language performance in children with CIs was best predicted by age at testing, phoneme perception, and auditory word closure. The morphosyntactic language outcomes of the CI group were best predicted by lexicon, auditory word closure, and auditory memory for words. Qualitatively good speech perception skills appear to be crucial for lexical and grammatical development in children with CIs. Furthermore, strongly developed vocabulary skills and verbal memory abilities predict morphosyntactic language skills. Copyright © 2016 Elsevier Ltd. All rights reserved.
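
Multiple linear regression analyses like those described in this record reduce to ordinary least squares on a predictor matrix. A sketch with simulated stand-ins (only the cohort size of 39 comes from the abstract; the predictors, weights, and noise are illustrative):

```python
import numpy as np

# Hypothetical stand-ins for three predictors (e.g., age at testing, phoneme
# perception, auditory word closure) and a lexical outcome score.
rng = np.random.default_rng(42)
n = 39                                   # cohort size from the abstract
X = rng.normal(size=(n, 3))              # standardized predictors
true_beta = np.array([0.5, 0.8, 0.3])    # illustrative weights, not the study's
y = X @ true_beta + rng.normal(0, 0.5, n)

# Ordinary least squares with an intercept column.
A = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
pred = A @ coef
r2 = 1 - ((y - pred) ** 2).sum() / ((y - y.mean()) ** 2).sum()
print("intercept and betas:", np.round(coef, 2))
print("R^2:", round(r2, 2))
```

In the study itself, each coefficient would be read as a predictor's contribution after controlling for the others; here the recovered betas simply track the simulated weights.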

  2. Spoken Dialogue Systems

    CERN Document Server

    Jokinen, Kristiina

    2009-01-01

    Considerable progress has been made in recent years in the development of dialogue systems that support robust and efficient human-machine interaction using spoken language. Spoken dialogue technology allows various interactive applications to be built and used for practical purposes, and research focuses on issues that aim to increase the system's communicative competence by including aspects of error correction, cooperation, multimodality, and adaptation in context. This book gives a comprehensive view of state-of-the-art techniques that are used to build spoken dialogue systems. It provides…

  3. Delayed Anticipatory Spoken Language Processing in Adults with Dyslexia—Evidence from Eye-tracking.

    Science.gov (United States)

    Huettig, Falk; Brouwer, Susanne

    2015-05-01

    It is now well established that anticipation of upcoming input is a key characteristic of spoken language comprehension. It has also frequently been observed that literacy influences spoken language processing. Here, we investigated whether anticipatory spoken language processing is related to individuals' word reading abilities. Dutch adults with dyslexia and a control group participated in two eye-tracking experiments. Experiment 1 was conducted to assess whether adults with dyslexia show the typical language-mediated eye gaze patterns. Eye movements of both adults with and without dyslexia closely replicated earlier research: spoken language is used to direct attention to relevant objects in the environment in a closely time-locked manner. In Experiment 2, participants received instructions (e.g., 'Kijk naar de(COM) afgebeelde piano(COM)', look at the displayed piano) while viewing four objects. Articles (Dutch 'het' or 'de') were gender marked such that the article agreed in gender only with the target, and thus, participants could use gender information from the article to predict the target object. The adults with dyslexia anticipated the target objects but much later than the controls. Moreover, participants' word reading scores correlated positively with their anticipatory eye movements. We conclude by discussing the mechanisms by which reading abilities may influence predictive language processing. Copyright © 2015 John Wiley & Sons, Ltd.

  4. Intervention Effects on Spoken-Language Outcomes for Children with Autism: A Systematic Review and Meta-Analysis

    Science.gov (United States)

    Hampton, L. H.; Kaiser, A. P.

    2016-01-01

    Background: Although spoken-language deficits are not core to an autism spectrum disorder (ASD) diagnosis, many children with ASD do present with delays in this area. Previous meta-analyses have assessed the effects of intervention on reducing autism symptomatology, but have not determined if intervention improves spoken language. This analysis…

  5. Word reading skill predicts anticipation of upcoming spoken language input: a study of children developing proficiency in reading.

    Science.gov (United States)

    Mani, Nivedita; Huettig, Falk

    2014-10-01

    Despite the efficiency with which language users typically process spoken language, a growing body of research finds substantial individual differences in both the speed and accuracy of spoken language processing potentially attributable to participants' literacy skills. Against this background, the current study examined the role of word reading skill in listeners' anticipation of upcoming spoken language input in children at the cusp of learning to read; if reading skills affect predictive language processing, then children at this stage of literacy acquisition should be most susceptible to the effects of reading skills on spoken language processing. We tested 8-year-olds on their prediction of upcoming spoken language input in an eye-tracking task. Although children, as in previous studies, were able to anticipate upcoming spoken language input, there was a strong positive correlation between children's word reading skills (but not their pseudo-word reading and meta-phonological awareness or their spoken word recognition skills) and their prediction skills. We suggest that these findings are most compatible with the notion that the process of learning orthographic representations during reading acquisition sharpens pre-existing lexical representations, which in turn also supports anticipation of upcoming spoken words. Copyright © 2014 Elsevier Inc. All rights reserved.

  6. Grammatical Deviations in the Spoken and Written Language of Hebrew-Speaking Children With Hearing Impairments.

    Science.gov (United States)

    Tur-Kaspa, Hana; Dromi, Esther

    2001-04-01

    The present study reports a detailed analysis of written and spoken language samples of Hebrew-speaking children aged 11-13 years who are deaf. It focuses on the description of various grammatical deviations in the two modalities. Participants were 13 students with hearing impairments (HI) attending special classrooms integrated into two elementary schools in Tel Aviv, Israel, and 9 students with normal hearing (NH) in regular classes in these same schools. Spoken and written language samples were collected from all participants using the same five preplanned elicitation probes. Students with HI were found to display significantly more grammatical deviations than their NH peers in both their spoken and written language samples. Most importantly, between-modality differences were noted. The participants with HI exhibited significantly more grammatical deviations in their written language samples than in their spoken samples. However, the distribution of grammatical deviations across categories was similar in the two modalities. The most common grammatical deviations in order of their frequency were failure to supply obligatory morphological markers, failure to mark grammatical agreement, and the omission of a major syntactic constituent in a sentence. Word order violations were rarely recorded in the Hebrew samples. Performance differences in the two modalities encourage clinicians and teachers to facilitate target linguistic forms in diverse communication contexts. Furthermore, the identification of linguistic targets for intervention must be based on the unique grammatical structure of the target language.

  7. What Comes First, What Comes Next: Information Packaging in Written and Spoken Language

    Directory of Open Access Journals (Sweden)

    Vladislav Smolka

    2017-07-01

    Full Text Available The paper explores similarities and differences in the strategies of structuring information at sentence level in spoken and written language, respectively. In particular, it is concerned with the position of the rheme in the sentence in the two different modalities of language, and with the application and correlation of the end-focus and the end-weight principles. The assumption is that while there is a general tendency in both written and spoken language to place the focus in or close to the final position, owing to the limitations imposed by short-term memory capacity (and possibly by other factors), for the sake of easy processability it may occasionally be more felicitous in spoken language to place the rhematic element in the initial position or at least close to the beginning of the sentence. The paper aims to identify differences in the function of selected grammatical structures in written and spoken language, respectively, and to point out circumstances under which initial focus is a convenient alternative to the usual end-focus principle.

  8. Cognitive Predictors of Spoken Word Recognition in Children With and Without Developmental Language Disorders.

    Science.gov (United States)

    Evans, Julia L; Gillam, Ronald B; Montgomery, James W

    2018-05-10

    This study examined the influence of cognitive factors on spoken word recognition in children with developmental language disorder (DLD) and typically developing (TD) children. Participants included 234 children (aged 7;0-11;11 years;months), 117 with DLD and 117 TD children, propensity matched for age, gender, socioeconomic status, and maternal education. Children completed a series of standardized assessment measures, a forward gating task, a rapid automatic naming task, and a series of tasks designed to examine cognitive factors hypothesized to influence spoken word recognition including phonological working memory, updating, attention shifting, and interference inhibition. Spoken word recognition for both initial and final accept gate points did not differ for children with DLD and TD controls after controlling for target word knowledge in both groups. The 2 groups also did not differ on measures of updating, attention switching, and interference inhibition. Despite the lack of difference on these measures, for children with DLD, attention shifting and interference inhibition were significant predictors of spoken word recognition, whereas updating and receptive vocabulary were significant predictors of speed of spoken word recognition for the children in the TD group. Contrary to expectations, after controlling for target word knowledge, spoken word recognition did not differ for children with DLD and TD controls; however, the cognitive processing factors that influenced children's ability to recognize the target word in a stream of speech differed qualitatively for children with and without DLDs.

  9. THE IMPLEMENTATION OF COMMUNICATIVE LANGUAGE TEACHING (CLT) TO TEACH SPOKEN RECOUNTS IN SENIOR HIGH SCHOOL

    Directory of Open Access Journals (Sweden)

    Eri Rusnawati

    2016-10-01

    Full Text Available The aim of this study was to describe the implementation of the Communicative Language Teaching (CLT) method in teaching spoken recounts. This qualitative study describes phenomena occurring in the classroom. The data consisted of the behavior and responses of students learning spoken recounts through the CLT method. The subjects were 34 tenth-grade students of SMA Negeri 1 Kuaro. Observation and interviews were conducted to collect data on teaching spoken recounts through three activities (presentation, role-play, and performing procedures). Among other findings, CLT improved the students' speaking ability in recount lessons. Based on the improvement charts, the students' grammar, vocabulary, pronunciation, fluency, and performance all increased, meaning that their spoken recount performance improved. Had the presentation been placed at the end of the sequence of activities, the students' spoken recount performance would have been even better. In conclusion, the implementation of the CLT method and its three practices contributed to improving the students' speaking ability in recount lessons, and moreover CLT led them to construct meaningful communication with confidence. Keywords: Communicative Language Teaching (CLT), recount, speaking, student responses

  10. Acquisition of graphic communication by a young girl without comprehension of spoken language.

    Science.gov (United States)

    von Tetzchner, S; Øvreeide, K D; Jørgensen, K K; Ormhaug, B M; Oxholm, B; Warme, R

    To describe a graphic-mode communication intervention involving a girl with intellectual impairment and autism who did not develop comprehension of spoken language. The aim was to teach graphic-mode vocabulary that reflected her interests, preferences, and the activities and routines of her daily life, by providing sufficient cues to the meanings of the graphic representations so that she would not need to comprehend spoken instructions. An individual case study design was selected, including the use of written records, participant observation, and registration of the girl's graphic vocabulary and use of graphic signs and other communicative expressions. While the girl's comprehension (and hence use) of spoken language remained lacking over a 3-year period, she acquired an active use of over 80 photographs and pictograms. The girl was able to cope better with the cognitive and attentional requirements of graphic communication than those of spoken language and manual signs, which had been the focus of earlier interventions. Her achievements demonstrate that it is possible for communication-impaired children to learn to use an augmentative and alternative communication system without speech comprehension, provided the intervention utilizes functional strategies and non-language cues to the meaning of the graphic representations that are taught.

  11. Expected Test Scores for Preschoolers with a Cochlear Implant Who Use Spoken Language

    Science.gov (United States)

    Nicholas, Johanna G.; Geers, Ann E.

    2008-01-01

    Purpose: The major purpose of this study was to provide information about expected spoken language skills of preschool-age children who are deaf and who use a cochlear implant. A goal was to provide "benchmarks" against which those skills could be compared, for a given age at implantation. We also examined whether parent-completed…

  12. Personality Structure in the Trait Lexicon of Hindi, a Major Language Spoken in India

    NARCIS (Netherlands)

    Singh, Jitendra K.; Misra, Girishwar; De Raad, Boele

    2013-01-01

    The psycho-lexical approach is extended to Hindi, a major language spoken in India. From both the dictionary and Hindi novels, a huge set of personality descriptors was put together, ultimately reduced to a manageable set of 295 trait terms. Both self and peer ratings were collected on those trait terms.

  13. Developing and Testing EVALOE: A Tool for Assessing Spoken Language Teaching and Learning in the Classroom

    Science.gov (United States)

    Gràcia, Marta; Vega, Fàtima; Galván-Bovaira, Maria José

    2015-01-01

    Broadly speaking, the teaching of spoken language in Spanish schools has not been approached in a systematic way. Changes in school practices are needed in order to allow all children to become competent speakers and to understand and construct oral texts that are appropriate in different contexts and for different audiences both inside and…

  14. A Spoken-Language Intervention for School-Aged Boys with Fragile X Syndrome

    Science.gov (United States)

    McDuffie, Andrea; Machalicek, Wendy; Bullard, Lauren; Nelson, Sarah; Mello, Melissa; Tempero-Feigles, Robyn; Castignetti, Nancy; Abbeduto, Leonard

    2016-01-01

    Using a single case design, a parent-mediated spoken-language intervention was delivered to three mothers and their school-aged sons with fragile X syndrome, the leading inherited cause of intellectual disability. The intervention was embedded in the context of shared storytelling using wordless picture books and targeted three empirically derived…

  15. Phonotactic spoken language identification with limited training data

    CSIR Research Space (South Africa)

    Peche, M

    2007-08-01

    Full Text Available The authors investigate the addition of a new language, for which limited resources are available, to a phonotactic language identification system. Two classes of approaches are studied: in the first class, only existing phonetic recognizers...

  16. Factors Influencing Verbal Intelligence and Spoken Language in Children with Phenylketonuria.

    Science.gov (United States)

    Soleymani, Zahra; Keramati, Nasrin; Rohani, Farzaneh; Jalaei, Shohre

    2015-05-01

    To determine verbal intelligence and spoken language of children with phenylketonuria and to study the effect of age at diagnosis and phenylalanine plasma level on these abilities. Cross-sectional. Children with phenylketonuria were recruited from pediatric hospitals in 2012. Normal control subjects were recruited from kindergartens in Tehran. 30 phenylketonuria and 42 control subjects aged 4-6.5 years. Skills were compared between 3 phenylketonuria groups categorized by age at diagnosis/treatment, and between the phenylketonuria and control groups. Scores on Wechsler Preschool and Primary Scale of Intelligence for verbal and total intelligence, and Test of Language Development-Primary, third edition for spoken language, listening, speaking, semantics, syntax, and organization. The performance of control subjects was significantly better than that of early-treated subjects for all composite quotients from the Test of Language Development and verbal intelligence … phenylketonuria subjects.

  17. Cochlear implants and spoken language processing abilities: Review and assessment of the literature

    OpenAIRE

    Peterson, Nathaniel R.; Pisoni, David B.; Miyamoto, Richard T.

    2010-01-01

    Cochlear implants (CIs) process sounds electronically and then transmit electric stimulation to the cochlea of individuals with sensorineural deafness, restoring some sensation of auditory perception. Many congenitally deaf CI recipients achieve a high degree of accuracy in speech perception and develop near-normal language skills. Post-lingually deafened implant recipients often regain the ability to understand and use spoken language with or without the aid of visual input (i.e., lip reading)…

  18. The effects of sign language on spoken language acquisition in children with hearing loss: a systematic review protocol.

    Science.gov (United States)

    Fitzpatrick, Elizabeth M; Stevens, Adrienne; Garritty, Chantelle; Moher, David

    2013-12-06

    Permanent childhood hearing loss affects 1 to 3 per 1000 children and frequently disrupts typical spoken language acquisition. Early identification of hearing loss through universal newborn hearing screening and the use of new hearing technologies including cochlear implants make spoken language an option for most children. However, there is no consensus on what constitutes optimal interventions for children when spoken language is the desired outcome. Intervention and educational approaches ranging from oral language only to oral language combined with various forms of sign language have evolved. Parents are therefore faced with important decisions in the first months of their child's life. This article presents the protocol for a systematic review of the effects of using sign language in combination with oral language intervention on spoken language acquisition. Studies addressing early intervention will be selected in which therapy involving oral language intervention and any form of sign language or sign support is used. Comparison groups will include children in early oral language intervention programs without sign support. The primary outcomes of interest to be examined include all measures of auditory, vocabulary, language, speech production, and speech intelligibility skills. We will include randomized controlled trials, controlled clinical trials, and other quasi-experimental designs that include comparator groups as well as prospective and retrospective cohort studies. Case-control, cross-sectional, case series, and case studies will be excluded. Several electronic databases will be searched (for example, MEDLINE, EMBASE, CINAHL, PsycINFO) as well as grey literature and key websites. We anticipate that a narrative synthesis of the evidence will be required. We will carry out meta-analysis for outcomes if clinical similarity, quantity and quality permit quantitative pooling of data. We will conduct subgroup analyses if possible according to severity…

  19. Retinoic acid signaling: a new piece in the spoken language puzzle

    Directory of Open Access Journals (Sweden)

    Jon-Ruben van Rhijn

    2015-11-01

    Full Text Available Speech requires precise motor control and rapid sequencing of highly complex vocal musculature. Despite its complexity, most people produce spoken language effortlessly. This is due to activity in distributed neuronal circuitry, including cortico-striato-thalamic loops that control speech-motor output. Understanding the neurogenetic mechanisms that encode these pathways will shed light on how humans can effortlessly and innately use spoken language and could elucidate what goes wrong in speech-language disorders. FOXP2 was the first single gene identified to cause speech and language disorder. Individuals with FOXP2 mutations display a severe speech deficit that also includes receptive and expressive language impairments. The underlying neuromolecular mechanisms controlled by FOXP2, which will give insight into our capacity for speech-motor control, are only beginning to be unraveled. Recently, FOXP2 was found to regulate genes involved in retinoic acid signaling and to modify the cellular response to retinoic acid, a key regulator of brain development. Herein we explore the evidence that FOXP2 and retinoic acid signaling function in the same pathways. We present evidence at molecular, cellular, and behavioral levels that suggest an interplay between FOXP2 and retinoic acid that may be important for fine motor control and speech-motor output. We propose that retinoic acid signaling is an exciting new angle from which to investigate how neurogenetic mechanisms can contribute to the (spoken) language-ready brain.

  20. Predictors of spoken language development following pediatric cochlear implantation.

    Science.gov (United States)

    Boons, Tinne; Brokx, Jan P L; Dhooge, Ingeborg; Frijns, Johan H M; Peeraer, Louis; Vermeulen, Anneke; Wouters, Jan; van Wieringen, Astrid

    2012-01-01

    Although deaf children with cochlear implants (CIs) are able to develop good language skills, the large variability in outcomes remains a significant concern. The first aim of this study was to evaluate language skills in children with CIs to establish benchmarks. The second aim was to make an estimation of the optimal age at implantation to provide maximal opportunities for the child to achieve good language skills afterward. The third aim was to gain more insight into the causes of variability to set recommendations for optimizing the rehabilitation process of prelingually deaf children with CIs. Receptive and expressive language development of 288 children who received CIs by age five was analyzed in a retrospective multicenter study. Outcome measures were language quotients (LQs) on the Reynell Developmental Language Scales and Schlichting Expressive Language Test at 1, 2, and 3 years after implantation. Independent predictive variables were nine child-related, environmental, and auditory factors. A series of multiple regression analyses determined the amount of variance in expressive and receptive language outcomes attributable to each predictor when controlling for the other variables. Simple linear regressions with age at first fitting and independent samples t tests demonstrated that children implanted before the age of two performed significantly better on all tests than children who were implanted at an older age. The mean LQ was 0.78 with an SD of 0.18. A child with an LQ lower than 0.60 (= 0.78-0.18) within 3 years after implantation was labeled as a weak performer compared with other deaf children implanted before the age of two. Contralateral stimulation with a second CI or a hearing aid and the absence of additional disabilities were related to better language outcomes. The effect of environmental factors, comprising multilingualism, parental involvement, and communication mode, increased over time. Three years after implantation, the total multiple…

  1. ORIGINAL ARTICLES How do doctors learn the spoken language of ...

    African Journals Online (AJOL)

    2009-07-01

    Jul 1, 2009 ... and cultural metaphors of illness as part of language learning. The theory of .... role.21 Even in a military setting, where soldiers learnt Korean or Spanish as part of ... own language – a cross-cultural survey. Brit J Gen Pract ...

  2. Predictors of Spoken Language Development Following Pediatric Cochlear Implantation

    NARCIS (Netherlands)

    Johan Frijns; Louis Peeraer; Astrid van Wieringen; Ingeborg Dhooge; Anneke Vermeulen; Jan Brokx; Tinne Boons; Jan Wouters

    2012-01-01

    Objectives: Although deaf children with cochlear implants (CIs) are able to develop good language skills, the large variability in outcomes remains a significant concern. The first aim of this study was to evaluate language skills in children with CIs to establish benchmarks. The second aim was to

  3. Interaction in Spoken Word Recognition Models: Feedback Helps

    Science.gov (United States)

    Magnuson, James S.; Mirman, Daniel; Luthra, Sahil; Strauss, Ted; Harris, Harlan D.

    2018-01-01

    Human perception, cognition, and action require fast integration of bottom-up signals with top-down knowledge and context. A key theoretical perspective in cognitive science is the interactive activation hypothesis: forward and backward flow in bidirectionally connected neural networks allows humans and other biological systems to approximate optimal integration of bottom-up and top-down information under real-world constraints. An alternative view is that online feedback is neither necessary nor helpful; purely feedforward alternatives can be constructed for any feedback system, and online feedback could not improve processing and would preclude veridical perception. In the domain of spoken word recognition, the latter view was apparently supported by simulations using the interactive activation model, TRACE, with and without feedback: as many words were recognized more quickly without feedback as were recognized faster with feedback. However, these simulations used only a small set of words and did not address a primary motivation for interaction: making a model robust in noise. We conducted simulations using hundreds of words, and found that the majority were recognized more quickly with feedback than without. More importantly, as we added noise to inputs, accuracy and recognition times were better with feedback than without. We follow these simulations with a critical review of recent arguments that online feedback in interactive activation models like TRACE is distinct from other potentially helpful forms of feedback. We conclude that in addition to providing the benefits demonstrated in our simulations, online feedback provides a plausible means of implementing putatively distinct forms of feedback, supporting the interactive activation hypothesis. PMID:29666593
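
The feedback mechanism at issue can be illustrated with a toy interactive-activation sketch. This is not TRACE itself: the lexicon, input evidence, and update rule below are invented for illustration. Word units accumulate support from position-specific phoneme units, and optional top-down feedback lets active words reinforce their own phonemes, which helps when bottom-up evidence is degraded by noise.

```python
# Toy interactive-activation sketch (not TRACE): one phoneme layer and
# one word layer, with optional word->phoneme feedback. All data and
# parameters are illustrative.

WORDS = {"cat": ["k", "a", "t"], "cap": ["k", "a", "p"], "dog": ["d", "o", "g"]}
PHONEMES = sorted({p for ps in WORDS.values() for p in ps})

# Bottom-up evidence per position; the final phoneme is noisy/ambiguous.
noisy_input = [{"k": 1.0}, {"a": 1.0}, {"t": 0.3, "p": 0.25}]

def recognize(inp, feedback=True, steps=20, rate=0.2):
    phon = [{p: inp[i].get(p, 0.0) for p in PHONEMES} for i in range(3)]
    word = {w: 0.0 for w in WORDS}
    for _ in range(steps):
        # Bottom-up: each word moves toward the mean support of its phonemes.
        for w, ps in WORDS.items():
            support = sum(phon[i][p] for i, p in enumerate(ps)) / len(ps)
            word[w] += rate * (support - word[w])
        # Top-down: active words reinforce their own phonemes.
        if feedback:
            for w, ps in WORDS.items():
                for i, p in enumerate(ps):
                    phon[i][p] += rate * 0.5 * word[w]
    return max(word, key=word.get), word
```

With the degraded final phoneme, both variants settle on "cat", but the word unit reaches a higher activation (i.e., is effectively recognized more strongly) when feedback is on, which is the qualitative pattern the simulations above report.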

  4. Interaction in Spoken Word Recognition Models: Feedback Helps.

    Science.gov (United States)

    Magnuson, James S; Mirman, Daniel; Luthra, Sahil; Strauss, Ted; Harris, Harlan D

    2018-01-01

    Human perception, cognition, and action require fast integration of bottom-up signals with top-down knowledge and context. A key theoretical perspective in cognitive science is the interactive activation hypothesis: forward and backward flow in bidirectionally connected neural networks allows humans and other biological systems to approximate optimal integration of bottom-up and top-down information under real-world constraints. An alternative view is that online feedback is neither necessary nor helpful; purely feedforward alternatives can be constructed for any feedback system, and online feedback could not improve processing and would preclude veridical perception. In the domain of spoken word recognition, the latter view was apparently supported by simulations using the interactive activation model, TRACE, with and without feedback: as many words were recognized more quickly without feedback as were recognized faster with feedback. However, these simulations used only a small set of words and did not address a primary motivation for interaction: making a model robust in noise. We conducted simulations using hundreds of words, and found that the majority were recognized more quickly with feedback than without. More importantly, as we added noise to inputs, accuracy and recognition times were better with feedback than without. We follow these simulations with a critical review of recent arguments that online feedback in interactive activation models like TRACE is distinct from other potentially helpful forms of feedback. We conclude that in addition to providing the benefits demonstrated in our simulations, online feedback provides a plausible means of implementing putatively distinct forms of feedback, supporting the interactive activation hypothesis.

  5. Interaction in Spoken Word Recognition Models: Feedback Helps

    Directory of Open Access Journals (Sweden)

    James S. Magnuson

    2018-04-01

    Human perception, cognition, and action require fast integration of bottom-up signals with top-down knowledge and context. A key theoretical perspective in cognitive science is the interactive activation hypothesis: forward and backward flow in bidirectionally connected neural networks allows humans and other biological systems to approximate optimal integration of bottom-up and top-down information under real-world constraints. An alternative view is that online feedback is neither necessary nor helpful; purely feedforward alternatives can be constructed for any feedback system, and online feedback could not improve processing and would preclude veridical perception. In the domain of spoken word recognition, the latter view was apparently supported by simulations using the interactive activation model, TRACE, with and without feedback: as many words were recognized more quickly without feedback as were recognized faster with feedback. However, these simulations used only a small set of words and did not address a primary motivation for interaction: making a model robust in noise. We conducted simulations using hundreds of words, and found that the majority were recognized more quickly with feedback than without. More importantly, as we added noise to inputs, accuracy and recognition times were better with feedback than without. We follow these simulations with a critical review of recent arguments that online feedback in interactive activation models like TRACE is distinct from other potentially helpful forms of feedback. We conclude that in addition to providing the benefits demonstrated in our simulations, online feedback provides a plausible means of implementing putatively distinct forms of feedback, supporting the interactive activation hypothesis.

  6. Brain basis of phonological awareness for spoken language in children and its disruption in dyslexia.

    Science.gov (United States)

    Kovelman, Ioulia; Norton, Elizabeth S; Christodoulou, Joanna A; Gaab, Nadine; Lieberman, Daniel A; Triantafyllou, Christina; Wolf, Maryanne; Whitfield-Gabrieli, Susan; Gabrieli, John D E

    2012-04-01

    Phonological awareness, knowledge that speech is composed of syllables and phonemes, is critical for learning to read. Phonological awareness precedes and predicts successful transition from language to literacy, and weakness in phonological awareness is a leading cause of dyslexia, but the brain basis of phonological awareness for spoken language in children is unknown. We used functional magnetic resonance imaging to identify the neural correlates of phonological awareness using an auditory word-rhyming task in children who were typical readers or who had dyslexia (ages 7-13) and a younger group of kindergarteners (ages 5-6). Typically developing children, but not children with dyslexia, recruited left dorsolateral prefrontal cortex (DLPFC) when making explicit phonological judgments. Kindergarteners, who were matched to the older children with dyslexia on standardized tests of phonological awareness, also recruited left DLPFC. Left DLPFC may play a critical role in the development of phonological awareness for spoken language critical for reading and in the etiology of dyslexia.

  7. How sensory-motor systems impact the neural organization for language: direct contrasts between spoken and signed language

    Science.gov (United States)

    Emmorey, Karen; McCullough, Stephen; Mehta, Sonya; Grabowski, Thomas J.

    2014-01-01

    To investigate the impact of sensory-motor systems on the neural organization for language, we conducted an H2(15)O-PET study of sign and spoken word production (picture-naming) and an fMRI study of sign and audio-visual spoken language comprehension (detection of a semantically anomalous sentence) with hearing bilinguals who are native users of American Sign Language (ASL) and English. Directly contrasting speech and sign production revealed greater activation in bilateral parietal cortex for signing, while speaking resulted in greater activation in bilateral superior temporal cortex (STC) and right frontal cortex, likely reflecting auditory feedback control. Surprisingly, the language production contrast revealed a relative increase in activation in bilateral occipital cortex for speaking. We speculate that greater activation in visual cortex for speaking may actually reflect cortical attenuation when signing, which functions to distinguish self-produced from externally generated visual input. Directly contrasting speech and sign comprehension revealed greater activation in bilateral STC for speech and greater activation in bilateral occipital-temporal cortex for sign. Sign comprehension, like sign production, engaged bilateral parietal cortex to a greater extent than spoken language. We hypothesize that posterior parietal activation in part reflects processing related to spatial classifier constructions in ASL and that anterior parietal activation may reflect covert imitation that functions as a predictive model during sign comprehension. The conjunction analysis for comprehension revealed that both speech and sign bilaterally engaged the inferior frontal gyrus (with more extensive activation on the left) and the superior temporal sulcus, suggesting an invariant bilateral perisylvian language system. We conclude that surface-level differences between sign and spoken languages should not be dismissed and are critical for understanding the neurobiology of language.

  8. Identification of four class emotion from Indonesian spoken language using acoustic and lexical features

    Science.gov (United States)

    Kasyidi, Fatan; Puji Lestari, Dessi

    2018-03-01

    One of the important aspects of human-to-human communication is understanding the emotion of each party. Recently, interaction between humans and computers has continued to develop, especially affective interaction, in which emotion recognition is an important component. This paper presents our extended work on emotion recognition in Indonesian spoken language, identifying four main classes of emotion: Happy, Sad, Angry, and Contentment, using a combination of acoustic/prosodic features and lexical features. We constructed an emotion speech corpus from Indonesian television talk shows, where the situations are as close as possible to natural ones. After constructing the emotion speech corpus, the acoustic/prosodic and lexical features were extracted to train the emotion model. We employed machine learning algorithms such as Support Vector Machine (SVM), Naive Bayes, and Random Forest to get the best model. The experiment results on testing data show that the best model, an SVM with RBF kernel, achieves an F-measure score of 0.447 using only the acoustic/prosodic features and 0.488 using both acoustic/prosodic and lexical features for the four-class emotion task.
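
The F-measure reported above is, for a multi-class task like this one, typically macro-averaged over the classes (an assumption here, not stated in the abstract). A minimal sketch of that metric, with illustrative English stand-ins for the four class labels:

```python
# Macro-averaged F-measure over four emotion classes.
# Class names are illustrative stand-ins, not from the corpus.

LABELS = ["happy", "sad", "angry", "contentment"]

def macro_f1(golds, preds):
    scores = []
    for lab in LABELS:
        tp = sum(1 for g, p in zip(golds, preds) if g == lab and p == lab)
        fp = sum(1 for g, p in zip(golds, preds) if g != lab and p == lab)
        fn = sum(1 for g, p in zip(golds, preds) if g == lab and p != lab)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        # Per-class F1, then averaged uniformly over the classes.
        scores.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(scores) / len(LABELS)
```

Macro averaging weights each emotion class equally, which matters when a corpus built from talk-show speech has unbalanced class frequencies.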

  9. The interface between spoken and written language: developmental disorders.

    Science.gov (United States)

    Hulme, Charles; Snowling, Margaret J

    2014-01-01

    We review current knowledge about reading development and the origins of difficulties in learning to read. We distinguish between the processes involved in learning to decode print, and the processes involved in reading for meaning (reading comprehension). At a cognitive level, difficulties in learning to read appear to be predominantly caused by deficits in underlying oral language skills. The development of decoding skills appears to depend critically upon phonological language skills, and variations in phoneme awareness, letter-sound knowledge and rapid automatized naming each appear to be causally related to problems in learning to read. Reading comprehension difficulties in contrast appear to be critically dependent on a range of oral language comprehension skills (including vocabulary knowledge and grammatical, morphological and pragmatic skills).

  10. Loops of Spoken Language in Danish Broadcasting Corporation News

    DEFF Research Database (Denmark)

    le Fevre Jakobsen, Bjarne

    2012-01-01

    The tempo of Danish television news broadcasts has changed markedly over the past 40 years, while the language has essentially always been conservative, and remains so today. The development in the tempo of the broadcasts has gone through a number of phases from a newsreader in a rigid structure...

  11. IMPACT ON THE INDIGENOUS LANGUAGES SPOKEN IN NIGERIA ...

    African Journals Online (AJOL)

    In the face of globalisation, the scale of communication is increasing from being merely .... capital goods and services across national frontiers involving too, political contexts of ... auditory and audiovisual entertainment, the use of English dominates. The language .... manners, entertainment, sports, the legal system, etc.

  12. Does it really matter whether students' contributions are spoken versus typed in an intelligent tutoring system with natural language?

    Science.gov (United States)

    D'Mello, Sidney K; Dowell, Nia; Graesser, Arthur

    2011-03-01

    There is the question of whether learning differs when students speak versus type their responses when interacting with intelligent tutoring systems with natural language dialogues. Theoretical bases exist for three contrasting hypotheses. The speech facilitation hypothesis predicts that spoken input will increase learning, whereas the text facilitation hypothesis predicts typed input will be superior. The modality equivalence hypothesis claims that learning gains will be equivalent. Previous experiments that tested these hypotheses were confounded by automated speech recognition systems with substantial error rates that were detected by learners. We addressed this concern in two experiments via a Wizard of Oz procedure, where a human intercepted the learner's speech and transcribed the utterances before submitting them to the tutor. The overall pattern of the results supported the following conclusions: (1) learning gains associated with spoken and typed input were on par and quantitatively higher than a no-intervention control, (2) participants' evaluations of the session were not influenced by modality, and (3) there were no modality effects associated with differences in prior knowledge and typing proficiency. Although the results generally support the modality equivalence hypothesis, highly motivated learners reported lower cognitive load and demonstrated increased learning when typing compared with speaking. We discuss the implications of our findings for intelligent tutoring systems that can support typed and spoken input.

  13. Three-dimensional grammar in the brain: Dissociating the neural correlates of natural sign language and manually coded spoken language.

    Science.gov (United States)

    Jednoróg, Katarzyna; Bola, Łukasz; Mostowski, Piotr; Szwed, Marcin; Boguszewski, Paweł M; Marchewka, Artur; Rutkowski, Paweł

    2015-05-01

    In several countries natural sign languages were considered inadequate for education. Instead, new sign-supported systems were created, based on the belief that spoken/written language is grammatically superior. One such system called SJM (system językowo-migowy) preserves the grammatical and lexical structure of spoken Polish and since 1960s has been extensively employed in schools and on TV. Nevertheless, the Deaf community avoids using SJM for everyday communication, its preferred language being PJM (polski język migowy), a natural sign language, structurally and grammatically independent of spoken Polish and featuring classifier constructions (CCs). Here, for the first time, we compare, with fMRI method, the neural bases of natural vs. devised communication systems. Deaf signers were presented with three types of signed sentences (SJM and PJM with/without CCs). Consistent with previous findings, PJM with CCs compared to either SJM or PJM without CCs recruited the parietal lobes. The reverse comparison revealed activation in the anterior temporal lobes, suggesting increased semantic combinatory processes in lexical sign comprehension. Finally, PJM compared with SJM engaged left posterior superior temporal gyrus and anterior temporal lobe, areas crucial for sentence-level speech comprehension. We suggest that activity in these two areas reflects greater processing efficiency for naturally evolved sign language. Copyright © 2015 Elsevier Ltd. All rights reserved.

  14. A step beyond local observations with a dialog aware bidirectional GRU network for Spoken Language Understanding

    OpenAIRE

    Vukotic, Vedran; Raymond, Christian; Gravier, Guillaume

    2016-01-01

    Architectures of Recurrent Neural Networks (RNN) have recently become a very popular choice for Spoken Language Understanding (SLU) problems; however, they represent a big family of different architectures that can furthermore be combined to form more complex neural networks. In this work, we compare different recurrent networks, such as simple Recurrent Neural Networks (RNN), Long Short-Term Memory (LSTM) networks, Gated Recurrent Units (GRU) and their bidirectional versions,...
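
To make the gating mechanism behind a GRU concrete, here is a single-step sketch in plain Python (hidden size 2, untrained illustrative weights, biases omitted for brevity); it is a didactic reduction, not the networks compared in the paper:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gru_step(x, h, Wz, Wr, Wh):
    # Each weight row maps the concatenation [x, h] to one hidden unit.
    xh = x + h
    z = [sigmoid(sum(w * v for w, v in zip(row, xh))) for row in Wz]  # update gate
    r = [sigmoid(sum(w * v for w, v in zip(row, xh))) for row in Wr]  # reset gate
    # Candidate state uses the reset-gated previous hidden state.
    xrh = x + [ri * hi for ri, hi in zip(r, h)]
    cand = [math.tanh(sum(w * v for w, v in zip(row, xrh))) for row in Wh]
    # Interpolate between old state and candidate, per unit.
    return [(1 - zi) * hi + zi * ci for zi, hi, ci in zip(z, h, cand)]
```

A bidirectional GRU, as studied in the paper, runs one such cell left-to-right and another right-to-left over the utterance and concatenates the two hidden states at each token before slot tagging.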

  15. Social inclusion for children with hearing loss in listening and spoken Language early intervention: an exploratory study.

    Science.gov (United States)

    Constantinescu-Sharpe, Gabriella; Phillips, Rebecca L; Davis, Aleisha; Dornan, Dimity; Hogan, Anthony

    2017-03-14

    Social inclusion is a common focus of listening and spoken language (LSL) early intervention for children with hearing loss. This exploratory study compared the social inclusion of young children with hearing loss educated using a listening and spoken language approach with population data. A framework for understanding the scope of social inclusion is presented in the Background. This framework guided the use of a shortened, modified version of the Longitudinal Study of Australian Children (LSAC) to measure two of the five facets of social inclusion ('education' and 'interacting with society and fulfilling social roles'). The survey was completed by parents of children with hearing loss aged 4-5 years who were educated using a LSL approach (n = 78; 37% who responded). These responses were compared to those obtained for typical hearing children in the LSAC dataset (n = 3265). Analyses revealed that most children with hearing loss had comparable outcomes to those with typical hearing on the 'education' and 'interacting with society and fulfilling social roles' facets of social inclusion. These exploratory findings are positive and warrant further investigation across all five facets of the framework to identify which factors influence social inclusion.

  16. Spoken Language Activation Alters Subsequent Sign Language Activation in L2 Learners of American Sign Language

    Science.gov (United States)

    Williams, Joshua T.; Newman, Sharlene D.

    2017-01-01

    A large body of literature has characterized unimodal monolingual and bilingual lexicons and how neighborhood density affects lexical access; however there have been relatively fewer studies that generalize these findings to bimodal (M2) second language (L2) learners of sign languages. The goal of the current study was to investigate parallel…

  17. The Attitudes and Motivation of Children towards Learning Rarely Spoken Foreign Languages: A Case Study from Saudi Arabia

    Science.gov (United States)

    Al-Nofaie, Haifa

    2018-01-01

    This article discusses the attitudes and motivations of two Saudi children learning Japanese as a foreign language (hence JFL), a language which is rarely spoken in the country. Studies regarding children's motivation for learning foreign languages that are not widely spread in their contexts in informal settings are scarce. The aim of the study…

  18. Assessing spoken-language educational interpreting: Measuring up ...

    African Journals Online (AJOL)

    Kate H

    Excerpt from the article's assessment rubric (criteria, with marks where recoverable): Content (25); conveying interaction between lecturer and students (5); managing equipment; breath control and volume (5); intonation and voice quality; use of coping techniques; pronunciation and general clarity; error correction; fluency of the interpreting product (hesitations, silences, etc.); interpreting competency (15).

  19. The missing foundation in teacher education: Knowledge of the structure of spoken and written language.

    Science.gov (United States)

    Moats, L C

    1994-01-01

    Reading research supports the necessity for directly teaching concepts about linguistic structure to beginning readers and to students with reading and spelling difficulties. In this study, experienced teachers of reading, language arts, and special education were tested to determine if they have the requisite awareness of language elements (e.g., phonemes, morphemes) and of how these elements are represented in writing (e.g., knowledge of sound-symbol correspondences). The results were surprisingly poor, indicating that even motivated and experienced teachers typically understand too little about spoken and written language structure to be able to provide sufficient instruction in these areas. The utility of language structure knowledge for instructional planning, for assessment of student progress, and for remediation of literacy problems is discussed. The teachers participating in the study subsequently took a course focusing on phonemic awareness training, spoken-written language relationships, and careful analysis of spelling and reading behavior in children. At the end of the course, the teachers judged this information to be essential for teaching and advised that it become a prerequisite for certification. Recommendations for requirements and content of teacher education programs are presented.

  20. Endowing Spoken Language Dialogue System with Emotional Intelligence

    DEFF Research Database (Denmark)

    André, Elisabeth; Rehm, Matthias; Minker, Wolfgang

    2004-01-01

    While most dialogue systems restrict themselves to the adjustment of the propositional contents, our work concentrates on the generation of stylistic variations in order to improve the user's perception of the interaction. To accomplish this goal, our approach integrates a social theory of politeness with a cognitive theory of emotions. We propose a hierarchical selection process for politeness behaviors in order to enable the refinement of decisions in case additional context information becomes available.

  1. Phonologic-graphemic transcodifier for Portuguese Language spoken in Brazil (PLB)

    Science.gov (United States)

    Fragadasilva, Francisco Jose; Saotome, Osamu; Deoliveira, Carlos Alberto

    An automatic speech-to-text transformer system, suited to an unlimited vocabulary, is presented. The basic acoustic units considered are the allophones of the phonemes of the Portuguese language spoken in Brazil (PLB). The input to the system is a phonetic sequence produced by a prior step of isolated-word recognition of slowly spoken speech. In a first stage, the system eliminates phonetic elements that don't belong to PLB. Using knowledge sources such as phonetics, phonology, orthography, and a PLB-specific lexicon, the output is a sequence of written words, ordered by a probabilistic criterion, that constitutes the set of graphemic possibilities for that input sequence. Pronunciation differences among some regions of Brazil are considered, but only those that cause differences in the phonological transcription, because those at the phonetic level are absorbed during the transformation to the phonological level. In the final stage, all possible written words are analyzed from an orthographic and grammatical point of view, to eliminate the incorrect ones.
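
The pipeline described above (filter non-PLB phones, expand phonemes into candidate graphemes, rank by probability) can be sketched as follows. The phone inventory, mappings, and probabilities below are invented for illustration and are not the system's actual PLB data:

```python
# Toy phoneme-to-grapheme transcoder: drop phones outside the language's
# inventory, expand each phone into candidate graphemes, and rank the
# complete spellings by combined probability. All data is illustrative.

INVENTORY = {"k", "a", "z"}
P2G = {
    "k": [("c", 0.7), ("qu", 0.3)],
    "a": [("a", 1.0)],
    "z": [("s", 0.6), ("z", 0.4)],  # e.g. intervocalic /z/ often spelled "s"
}

def transcode(phones):
    phones = [p for p in phones if p in INVENTORY]  # stage 1: filter
    candidates = [("", 1.0)]
    for ph in phones:  # stage 2: expand, multiplying probabilities
        candidates = [(spelling + g, pr * gp)
                      for spelling, pr in candidates
                      for g, gp in P2G[ph]]
    return sorted(candidates, key=lambda c: -c[1])  # stage 3: rank
```

In the full system the ranked list would then be pruned by the lexicon and by orthographic/grammatical analysis; this sketch stops at the ranked graphemic possibilities.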

  2. The Beneficial Role of L1 Spoken Language Skills on Initial L2 Sign Language Learning: Cognitive and Linguistic Predictors of M2L2 Acquisition

    Science.gov (United States)

    Williams, Joshua T.; Darcy, Isabelle; Newman, Sharlene D.

    2017-01-01

    Understanding how language modality (i.e., signed vs. spoken) affects second language outcomes in hearing adults is important both theoretically and pedagogically, as it can determine the specificity of second language (L2) theory and inform how best to teach a language that uses a new modality. The present study investigated which…

  3. Impact of cognitive function and dysarthria on spoken language and perceived speech severity in multiple sclerosis

    Science.gov (United States)

    Feenaughty, Lynda

    judged each speech sample using the perceptual construct of Speech Severity using a visual analog scale. Additional measures obtained to describe participants included the Sentence Intelligibility Test (SIT), the 10-item Communication Participation Item Bank (CPIB), and standard biopsychosocial measures of depression (Beck Depression Inventory-Fast Screen; BDI-FS), fatigue (Fatigue Severity Scale; FSS), and overall disease severity (Expanded Disability Status Scale; EDSS). Healthy controls completed all measures, with the exception of the CPIB and EDSS. All data were analyzed using standard, descriptive and parametric statistics. For the MSCI group, the relationship between neuropsychological test scores and speech-language variables were explored for each speech task using Pearson correlations. The relationship between neuropsychological test scores and Speech Severity also was explored. Results and Discussion: Topic familiarity for descriptive discourse did not strongly influence speech production or perceptual variables; however, results indicated predicted task-related differences for some spoken language measures. With the exception of the MSCI group, all speaker groups produced the same or slower global speech timing (i.e., speech and articulatory rates), more silent and filled pauses, more grammatical and longer silent pause durations in spontaneous discourse compared to reading aloud. Results revealed no appreciable task differences for linguistic complexity measures. Results indicated group differences for speech rate. The MSCI group produced significantly faster speech rates compared to the MSDYS group. Both the MSDYS and the MSCI groups were judged to have significantly poorer perceived Speech Severity compared to typically aging adults. The Task x Group interaction was only significant for the number of silent pauses. 
The MSDYS group produced fewer silent pauses in spontaneous speech and more silent pauses in the reading task compared to other groups. Finally

  4. Spoken language development in oral preschool children with permanent childhood deafness.

    Science.gov (United States)

    Sarant, Julia Z; Holt, Colleen M; Dowell, Richard C; Rickards, Field W; Blamey, Peter J

    2009-01-01

    This article documented spoken language outcomes for preschool children with hearing loss and examined the relationships between language abilities and characteristics of children such as degree of hearing loss, cognitive abilities, age at entry to early intervention, and parent involvement in children's intervention programs. Participants were evaluated using a combination of the Child Development Inventory, the Peabody Picture Vocabulary Test, and the Preschool Clinical Evaluation of Language Fundamentals depending on their age at the time of assessment. Maternal education, cognitive ability, and family involvement were also measured. Over half of the children who participated in this study had poor language outcomes overall. No significant differences were found in language outcomes on any of the measures for children who were diagnosed early and those diagnosed later. Multiple regression analyses showed that family participation, degree of hearing loss, and cognitive ability significantly predicted language outcomes and together accounted for almost 60% of the variance in scores. This article highlights the importance of family participation in intervention programs to enable children to achieve optimal language outcomes. Further work may clarify the effects of early diagnosis on language outcomes for preschool children.

  5. Spoken language achieves robustness and evolvability by exploiting degeneracy and neutrality.

    Science.gov (United States)

    Winter, Bodo

    2014-10-01

    As with biological systems, spoken languages are strikingly robust against perturbations. This paper shows that languages achieve robustness in a way that is highly similar to many biological systems. For example, speech sounds are encoded via multiple acoustically diverse, temporally distributed and functionally redundant cues, characteristics that bear similarities to what biologists call "degeneracy". Speech is furthermore adequately characterized by neutrality, with many different tongue configurations leading to similar acoustic outputs, and different acoustic variants understood as the same by recipients. This highlights the presence of a large neutral network of acoustic neighbors for every speech sound. Such neutrality ensures that a steady backdrop of variation can be maintained without impeding communication, assuring that there is "fodder" for subsequent evolution. Thus, studying linguistic robustness is not only important for understanding how linguistic systems maintain their functioning upon the background of noise, but also for understanding the preconditions for language evolution. © 2014 WILEY Periodicals, Inc.

  6. Positive Emotional Language in the Final Words Spoken Directly Before Execution.

    Science.gov (United States)

    Hirschmüller, Sarah; Egloff, Boris

    2015-01-01

    How do individuals emotionally cope with the imminent real-world salience of mortality? DeWall and Baumeister as well as Kashdan and colleagues previously provided support that an increased use of positive emotion words serves as a way to protect and defend against mortality salience of one's own contemplated death. Although these studies provide important insights into the psychological dynamics of mortality salience, it remains an open question how individuals cope with the immense threat of mortality prior to their imminent actual death. In the present research, we therefore analyzed positivity in the final words spoken immediately before execution by 407 death row inmates in Texas. By using computerized quantitative text analysis as an objective measure of emotional language use, our results showed that the final words contained a significantly higher proportion of positive than negative emotion words. This emotional positivity was significantly higher than (a) positive emotion word usage base rates in spoken and written materials and (b) positive emotional language use with regard to contemplated death and attempted or actual suicide. Additional analyses showed that emotional positivity in final statements was associated with a greater frequency of language use that was indicative of self-references, social orientation, and present-oriented time focus as well as with fewer instances of cognitive-processing, past-oriented, and death-related word use. Taken together, our findings offer new insights into how individuals cope with the imminent real-world salience of mortality.
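
The computerized word-count approach described above can be sketched with a tiny dictionary-based scorer. The word lists here are small illustrative stand-ins, not the validated dictionaries (e.g., LIWC categories) used in the study:

```python
# Dictionary-based emotion word counting: the proportion of positive and
# negative emotion words in a text. Word lists are illustrative stand-ins.

POSITIVE = {"love", "peace", "thank", "happy", "hope"}
NEGATIVE = {"hate", "fear", "pain", "sorry", "sad"}

def emotion_proportions(text):
    # Tokenize crudely, strip trailing punctuation, lowercase.
    words = [w.strip(".,!?").lower() for w in text.split()]
    n = len(words) or 1
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    return pos / n, neg / n
```

The study's comparison then tests whether the positive proportion in final statements exceeds both the negative proportion and positive-word base rates from reference corpora of spoken and written language.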

  7. Activating gender stereotypes during online spoken language processing: evidence from Visual World Eye Tracking.

    Science.gov (United States)

    Pyykkönen, Pirita; Hyönä, Jukka; van Gompel, Roger P G

    2010-01-01

    This study used the visual world eye-tracking method to investigate activation of general world knowledge related to gender-stereotypical role names in online spoken language comprehension in Finnish. The results showed that listeners activated gender stereotypes elaboratively in story contexts where this information was not needed to build coherence. Furthermore, listeners made additional inferences based on gender stereotypes to revise an already established coherence relation. Both results are consistent with mental models theory (e.g., Garnham, 2001). They are harder to explain by the minimalist account (McKoon & Ratcliff, 1992) which suggests that people limit inferences to those needed to establish coherence in discourse.

  8. Students who are deaf and hard of hearing and use sign language: considerations and strategies for developing spoken language and literacy skills.

    Science.gov (United States)

    Nussbaum, Debra; Waddy-Smith, Bettie; Doyle, Jane

    2012-11-01

    There is a core body of knowledge, experience, and skills integral to facilitating auditory, speech, and spoken language development when working with the general population of students who are deaf and hard of hearing. There are additional issues, strategies, and challenges inherent in speech habilitation/rehabilitation practices essential to the population of deaf and hard of hearing students who also use sign language. This article will highlight philosophical and practical considerations related to practices used to facilitate spoken language development and associated literacy skills for children and adolescents who sign. It will discuss considerations for planning and implementing practices that acknowledge and utilize a student's abilities in sign language, and address how to link these skills to developing and using spoken language. Included will be considerations for children from early childhood through high school with a broad range of auditory access, language, and communication characteristics.

  9. The effect of written text on comprehension of spoken English as a foreign language.

    Science.gov (United States)

    Diao, Yali; Chandler, Paul; Sweller, John

    2007-01-01

    Based on cognitive load theory, this study investigated the effect of simultaneous written presentations on comprehension of spoken English as a foreign language. Learners' language comprehension was compared while they used 3 instructional formats: listening with auditory materials only, listening with a full, written script, and listening with simultaneous subtitled text. Listening with the presence of a script and subtitles led to better understanding of the scripted and subtitled passage but poorer performance on a subsequent auditory passage than listening with the auditory materials only. These findings indicated that where the intention was learning to listen, the use of a full script or subtitles had detrimental effects on the construction and automation of listening comprehension schemas.

  10. Foreign language interactive didactics

    Directory of Open Access Journals (Sweden)

    Arnaldo Moisés Gómez

    2016-06-01

    Full Text Available Foreign Language Interactive Didactics is intended for foreign language teachers and would-be teachers, since it interprets the foreign language teaching-learning process as one conceived from reflexive social interaction. This interpretation grounds learning in interactive tasks that provide learners with opportunities to interact meaningfully among themselves, as a way to develop interactional competence both as an objective in itself and as a means to attain communicative competence. Foreign language interactive didactics calls for the unity of reflection and action while learning the language system and using it to communicate, by means of solving problems presented in interactive tasks. It proposes a kind of teaching that is interactive, developmental, collaborative, holistic, cognitive, problematizing, reflexive, student-centered, and humanist, with a strong affective component that empowers the psychological factors influencing learning. This conception appears in the book DIDÁCTICA INTERACTIVA DE LENGUAS (2007 and 2010). The book is used as a textbook for the subject of Didactics, part of the curriculum of language teacher formation at all the universities of pedagogical sciences; in the formation of teachers of Spanish for non-Spanish-speaking students at Havana University; and as a reference book for postgraduate courses and master's and doctoral degrees.

  11. Neural organization of linguistic short-term memory is sensory modality-dependent: evidence from signed and spoken language.

    Science.gov (United States)

    Pa, Judy; Wilson, Stephen M; Pickell, Herbert; Bellugi, Ursula; Hickok, Gregory

    2008-12-01

    Despite decades of research, there is still disagreement regarding the nature of the information that is maintained in linguistic short-term memory (STM). Some authors argue for abstract phonological codes, whereas others argue for more general sensory traces. We assess these possibilities by investigating linguistic STM in two distinct sensory-motor modalities, spoken and signed language. Hearing bilingual participants (native in English and American Sign Language) performed equivalent STM tasks in both languages during functional magnetic resonance imaging. Distinct, sensory-specific activations were seen during the maintenance phase of the task for spoken versus signed language. These regions have been previously shown to respond to nonlinguistic sensory stimulation, suggesting that linguistic STM tasks recruit sensory-specific networks. However, maintenance-phase activations common to the two languages were also observed, implying some form of common process. We conclude that linguistic STM involves sensory-dependent neural networks, but suggest that sensory-independent neural networks may also exist.

  12. The relation of the number of languages spoken to performance in different cognitive abilities in old age.

    Science.gov (United States)

    Ihle, Andreas; Oris, Michel; Fagot, Delphine; Kliegel, Matthias

    2016-12-01

    Findings on the association of speaking different languages with cognitive functioning in old age are inconsistent and inconclusive so far. Therefore, the present study set out to investigate the relation of the number of languages spoken to cognitive performance and its interplay with several other markers of cognitive reserve in a large sample of older adults. Two thousand eight hundred and twelve older adults served as sample for the present study. Psychometric tests on verbal abilities, basic processing speed, and cognitive flexibility were administered. In addition, individuals were interviewed on their different languages spoken on a regular basis, educational attainment, occupation, and engaging in different activities throughout adulthood. Higher number of languages regularly spoken was significantly associated with better performance in verbal abilities and processing speed, but unrelated to cognitive flexibility. Regression analyses showed that the number of languages spoken predicted cognitive performance over and above leisure activities/physical demand of job/gainful activity as respective additional predictor, but not over and above educational attainment/cognitive level of job as respective additional predictor. There was no significant moderation of the association of the number of languages spoken with cognitive performance in any model. Present data suggest that speaking different languages on a regular basis may additionally contribute to the build-up of cognitive reserve in old age. Yet, this may not be universal, but linked to verbal abilities and basic cognitive processing speed. Moreover, it may be dependent on other types of cognitive stimulation that individuals also engaged in during their life course.

  13. Effects of early auditory experience on the spoken language of deaf children at 3 years of age.

    Science.gov (United States)

    Nicholas, Johanna Grant; Geers, Ann E

    2006-06-01

    By age 3, typically developing children have achieved extensive vocabulary and syntax skills that facilitate both cognitive and social development. Substantial delays in spoken language acquisition have been documented for children with severe to profound deafness, even those with auditory oral training and early hearing aid use. This study documents the spoken language skills achieved by orally educated 3-yr-olds whose profound hearing loss was identified and hearing aids fitted between 1 and 30 mo of age and who received a cochlear implant between 12 and 38 mo of age. The purpose of the analysis was to examine the effects of age, duration, and type of early auditory experience on spoken language competence at age 3.5 yr. The spoken language skills of 76 children who had used a cochlear implant for at least 7 mo were evaluated via standardized 30-minute language sample analysis, a parent-completed vocabulary checklist, and a teacher language-rating scale. The children were recruited from and enrolled in oral education programs or therapy practices across the United States. Inclusion criteria included presumed deaf since birth, English the primary language of the home, no other known conditions that interfere with speech/language development, enrolled in programs using oral education methods, and no known problems with the cochlear implant lasting more than 30 days. Strong correlations were obtained among all language measures. Therefore, principal components analysis was used to derive a single Language Factor score for each child. 
A number of possible predictors of language outcome were examined, including age at identification and intervention with a hearing aid, duration of use of a hearing aid, pre-implant pure-tone average (PTA) threshold with a hearing aid, PTA threshold with a cochlear implant, and duration of use of a cochlear implant/age at implantation (the last two variables were practically identical because all children were tested between 40 and 44

  14. Cochlear implants and spoken language processing abilities: review and assessment of the literature.

    Science.gov (United States)

    Peterson, Nathaniel R; Pisoni, David B; Miyamoto, Richard T

    2010-01-01

    Cochlear implants (CIs) process sounds electronically and then transmit electric stimulation to the cochlea of individuals with sensorineural deafness, restoring some sensation of auditory perception. Many congenitally deaf CI recipients achieve a high degree of accuracy in speech perception and develop near-normal language skills. Post-lingually deafened implant recipients often regain the ability to understand and use spoken language with or without the aid of visual input (i.e. lip reading). However, there is wide variation in individual outcomes following cochlear implantation, and some CI recipients never develop useable speech and oral language skills. The causes of this enormous variation in outcomes are only partly understood at the present time. The variables most strongly associated with language outcomes are age at implantation and mode of communication in rehabilitation. Thus, some of the more important factors determining success of cochlear implantation are broadly related to neural plasticity that appears to be transiently present in deaf individuals. In this article we review the expected outcomes of cochlear implantation, potential predictors of those outcomes, the basic science regarding critical and sensitive periods, and several new research directions in the field of cochlear implantation.

  15. Semantic Richness and Word Learning in Children with Hearing Loss Who Are Developing Spoken Language: A Single Case Design Study

    Science.gov (United States)

    Lund, Emily; Douglas, W. Michael; Schuele, C. Melanie

    2015-01-01

    Children with hearing loss who are developing spoken language tend to lag behind children with normal hearing in vocabulary knowledge. Thus, researchers must validate instructional practices that lead to improved vocabulary outcomes for children with hearing loss. The purpose of this study was to investigate how semantic richness of instruction…

  16. Investigating Joint Attention Mechanisms through Spoken Human-Robot Interaction

    Science.gov (United States)

    Staudte, Maria; Crocker, Matthew W.

    2011-01-01

    Referential gaze during situated language production and comprehension is tightly coupled with the unfolding speech stream (Griffin, 2001; Meyer, Sleiderink, & Levelt, 1998; Tanenhaus, Spivey-Knowlton, Eberhard, & Sedivy, 1995). In a shared environment, utterance comprehension may further be facilitated when the listener can exploit the speaker's…

  17. Emergent Literacy Skills in Preschool Children With Hearing Loss Who Use Spoken Language: Initial Findings From the Early Language and Literacy Acquisition (ELLA) Study.

    Science.gov (United States)

    Werfel, Krystal L

    2017-10-05

    The purpose of this study was to compare change in emergent literacy skills of preschool children with and without hearing loss over a 6-month period. Participants included 19 children with hearing loss and 14 children with normal hearing. Children with hearing loss used amplification and spoken language. Participants completed measures of oral language, phonological processing, and print knowledge twice at a 6-month interval. A series of repeated-measures analyses of variance were used to compare change across groups. Main effects of time were observed for all variables except phonological recoding. Main effects of group were observed for vocabulary, morphosyntax, phonological memory, and concepts of print. Interaction effects were observed for phonological awareness and concepts of print. Children with hearing loss performed more poorly than children with normal hearing on measures of oral language, phonological memory, and conceptual print knowledge. Two interaction effects were present. For phonological awareness and concepts of print, children with hearing loss demonstrated less positive change than children with normal hearing. Although children with hearing loss generally demonstrated a positive growth in emergent literacy skills, their initial performance was lower than that of children with normal hearing, and rates of change were not sufficient to catch up to the peers over time.

  18. Foreign body aspiration and language spoken at home: 10-year review.

    Science.gov (United States)

    Choroomi, S; Curotta, J

    2011-07-01

    To review foreign body aspiration cases encountered over a 10-year period in a tertiary paediatric hospital, and to assess correlation between foreign body type and language spoken at home. Retrospective chart review of all children undergoing direct laryngobronchoscopy for foreign body aspiration over a 10-year period. Age, sex, foreign body type, complications, hospital stay and home language were analysed. At direct laryngobronchoscopy, 132 children had foreign body aspiration (male:female ratio 1.31:1; mean age 32 months (2.67 years)). Mean hospital stay was 2.0 days. Foreign bodies most commonly comprised food matter (53/132; 40.1 per cent), followed by non-food matter (44/132; 33.33 per cent), a negative endoscopy (11/132; 8.33 per cent) and unknown composition (24/132; 18.2 per cent). Most parents spoke English (92/132, 69.7 per cent; vs non-English-speaking 40/132, 30.3 per cent), but non-English-speaking patients had disproportionately more food foreign bodies, and significantly more nut aspirations (p = 0.0065). Results constitute level 2b evidence. Patients from non-English speaking backgrounds had a significantly higher incidence of food (particularly nut) aspiration. Awareness-raising and public education is needed in relevant communities to prevent certain foods, particularly nuts, being given to children too young to chew and swallow them adequately.
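The group comparison reported above (significantly more nut aspirations among non-English-speaking patients, p = 0.0065) is the kind of result a 2x2 contingency test yields. The counts below are hypothetical, chosen only to illustrate the computation, not taken from the paper:

```python
from scipy.stats import fisher_exact

# Hypothetical 2x2 counts of nut vs. non-nut aspirations by home language.
# The paper reports only p = 0.0065; this table is invented for illustration.
#                  nut  non-nut
table = [[ 8, 84],   # English-speaking (n = 92)
         [12, 28]]   # non-English-speaking (n = 40)

odds_ratio, p_value = fisher_exact(table)  # two-sided by default
```

An odds ratio below 1 here means nut aspiration is less likely in the first (English-speaking) row, matching the direction of the reported finding.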

  19. Spontal-N: A Corpus of Interactional Spoken Norwegian

    OpenAIRE

    Sikveland, A.; Öttl, A.; Amdal, I.; Ernestus, M.; Svendsen, T.; Edlund, J.

    2010-01-01

    Spontal-N is a corpus of spontaneous, interactional Norwegian. To our knowledge, it is the first corpus of Norwegian in which the majority of speakers have spent significant parts of their lives in Sweden, and in which the recorded speech displays varying degrees of interference from Swedish. The corpus consists of studio quality audio- and video-recordings of four 30-minute free conversations between acquaintances, and a manual orthographic transcription of the entire material. On basis of t...

  20. Primary phonological planning units in spoken word production are language-specific: Evidence from an ERP study.

    Science.gov (United States)

    Wang, Jie; Wong, Andus Wing-Kuen; Wang, Suiping; Chen, Hsuan-Chih

    2017-07-19

    It is widely acknowledged in Germanic languages that segments are the primary planning units at the phonological encoding stage of spoken word production. Mixed results, however, have been found in Chinese, and it is still unclear what roles syllables and segments play in planning Chinese spoken word production. In the current study, participants were asked to first prepare and later produce disyllabic Mandarin words upon picture prompts and a response cue while electroencephalogram (EEG) signals were recorded. Each two consecutive pictures implicitly formed a pair of prime and target, whose names shared the same word-initial atonal syllable or the same word-initial segments, or were unrelated in the control conditions. Only syllable repetition induced significant effects on event-related brain potentials (ERPs) after target onset: a widely distributed positivity in the 200- to 400-ms interval and an anterior positivity in the 400- to 600-ms interval. We interpret these to reflect syllable-size representations at the phonological encoding and phonetic encoding stages. Our results provide the first electrophysiological evidence for the distinct role of syllables in producing Mandarin spoken words, supporting a language specificity hypothesis about the primary phonological units in spoken word production.

  1. Verbal short-term memory development and spoken language outcomes in deaf children with cochlear implants.

    Science.gov (United States)

    Harris, Michael S; Kronenberger, William G; Gao, Sujuan; Hoen, Helena M; Miyamoto, Richard T; Pisoni, David B

    2013-01-01

    Cochlear implants (CIs) help many deaf children achieve near-normal speech and language (S/L) milestones. Nevertheless, high levels of unexplained variability in S/L outcomes are limiting factors in improving the effectiveness of CIs in deaf children. The objective of this study was to longitudinally assess the role of verbal short-term memory (STM) and working memory (WM) capacity as a progress-limiting source of variability in S/L outcomes after CI in children. Longitudinal study of 66 children with CIs for prelingual severe-to-profound hearing loss. Outcome measures included performance on digit span forward (DSF), digit span backward (DSB), and four conventional S/L measures that examined spoken-word recognition (Phonetically Balanced Kindergarten word test), receptive vocabulary (Peabody Picture Vocabulary Test), sentence-recognition skills (Hearing in Noise Test), and receptive and expressive language functioning (Clinical Evaluation of Language Fundamentals Fourth Edition Core Language Score; CELF). Growth curves for DSF and DSB in the CI sample over time were comparable in slope, but consistently lagged in magnitude relative to norms for normal-hearing peers of the same age. For DSF and DSB, 50.5% and 44.0%, respectively, of the CI sample scored more than 1 SD below the normative mean for raw scores across all ages. The first (baseline) DSF score significantly predicted all endpoint scores for the four S/L measures, and DSF slope (growth) over time predicted CELF scores. DSF baseline and slope accounted for an additional 13 to 31% of variance in S/L scores after controlling for conventional predictor variables such as chronological age at time of testing, age at time of implantation, communication mode (auditory-oral communication versus total communication), and maternal education. Only DSB baseline scores predicted endpoint language scores on the Peabody Picture Vocabulary Test and CELF. 
DSB slopes were not significantly related to any endpoint S/L measures
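The hierarchical-regression logic in this abstract, where DSF baseline and slope "accounted for an additional 13 to 31% of variance" after controlling for conventional predictors, amounts to an incremental R² computation. The sketch below uses synthetic data and effect sizes, not the study's:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 66  # matches the study's sample size; the data below are synthetic

# A conventional predictor (age at implantation, months) and DSF baseline
age_at_ci = rng.normal(24.0, 8.0, n)
dsf_base = rng.normal(5.0, 2.0, n)
language = 0.3 * age_at_ci + 2.0 * dsf_base + rng.normal(0.0, 3.0, n)

def r_squared(predictors, y):
    """R^2 of an ordinary least-squares fit with intercept."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - resid.var() / y.var()

r2_base = r_squared([age_at_ci], language)            # conventional predictor only
r2_full = r_squared([age_at_ci, dsf_base], language)  # add DSF baseline
delta_r2 = r2_full - r2_base                          # incremental variance explained
```

`delta_r2` is the "additional variance" figure: how much the memory measure improves prediction beyond the control variables alone.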

  2. Spoken language identification based on the enhanced self-adjusting extreme learning machine approach

    Science.gov (United States)

    Tiun, Sabrina; AL-Dhief, Fahad Taha; Sammour, Mahmoud A. M.

    2018-01-01

    Spoken Language Identification (LID) is the process of determining and classifying natural language from given content and a dataset. Typically, data must be processed to extract useful features to perform LID. According to the literature, feature extraction for LID is a mature process: standard features have already been developed, from Mel-Frequency Cepstral Coefficients (MFCC) and Shifted Delta Cepstral (SDC) coefficients through the Gaussian Mixture Model (GMM) to the i-vector based framework. However, the process of learning from the extracted features can still be improved (i.e. optimised) to capture all the knowledge embedded in them. The Extreme Learning Machine (ELM) is an effective model for classification and regression analysis and is particularly useful for training a single-hidden-layer neural network. Nevertheless, its learning process is not fully optimised, owing to the random selection of weights within the input hidden layer. In this study, the ELM is selected as the learning model for LID based on standard feature extraction. One optimisation approach for the ELM, the Self-Adjusting Extreme Learning Machine (SA-ELM), is selected as the benchmark and improved by altering the selection phase of the optimisation process. The selection is performed by incorporating both the Split-Ratio and K-Tournament methods; the improved SA-ELM is named the Enhanced Self-Adjusting Extreme Learning Machine (ESA-ELM). Results are generated on LID datasets created from eight different languages and show the superior performance of ESA-ELM LID over SA-ELM LID, with an accuracy of 96.25% compared to 95.00%. PMID:29672546
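A minimal sketch of the basic ELM idea referenced above: random, untrained input weights and analytically solved output weights. The toy data, dimensions, and seed are illustrative assumptions standing in for real LID feature vectors:

```python
import numpy as np

rng = np.random.default_rng(42)

def elm_train(X, Y, hidden=64):
    """Basic ELM: random, untrained input weights; analytic output weights."""
    W = rng.normal(size=(X.shape[1], hidden))  # random input-to-hidden weights
    b = rng.normal(size=hidden)
    H = np.tanh(X @ W + b)                     # hidden-layer activations
    beta = np.linalg.pinv(H) @ Y               # least-squares output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy two-class task standing in for language-ID feature vectors
X = rng.normal(size=(200, 10))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
Y = np.eye(2)[y]                               # one-hot targets

W, b, beta = elm_train(X, Y)
pred = elm_predict(X, W, b, beta).argmax(axis=1)
accuracy = (pred == y).mean()
```

The random choice of `W` and `b` is exactly the step the abstract flags as suboptimal and that SA-ELM/ESA-ELM optimise.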

  3. Spoken language identification based on the enhanced self-adjusting extreme learning machine approach.

    Science.gov (United States)

    Albadr, Musatafa Abbas Abbood; Tiun, Sabrina; Al-Dhief, Fahad Taha; Sammour, Mahmoud A M

    2018-01-01

    Spoken Language Identification (LID) is the process of determining and classifying natural language from given content and a dataset. Typically, data must be processed to extract useful features to perform LID. According to the literature, feature extraction for LID is a mature process: standard features have already been developed, from Mel-Frequency Cepstral Coefficients (MFCC) and Shifted Delta Cepstral (SDC) coefficients through the Gaussian Mixture Model (GMM) to the i-vector based framework. However, the process of learning from the extracted features can still be improved (i.e. optimised) to capture all the knowledge embedded in them. The Extreme Learning Machine (ELM) is an effective model for classification and regression analysis and is particularly useful for training a single-hidden-layer neural network. Nevertheless, its learning process is not fully optimised, owing to the random selection of weights within the input hidden layer. In this study, the ELM is selected as the learning model for LID based on standard feature extraction. One optimisation approach for the ELM, the Self-Adjusting Extreme Learning Machine (SA-ELM), is selected as the benchmark and improved by altering the selection phase of the optimisation process. The selection is performed by incorporating both the Split-Ratio and K-Tournament methods; the improved SA-ELM is named the Enhanced Self-Adjusting Extreme Learning Machine (ESA-ELM). Results are generated on LID datasets created from eight different languages and show the superior performance of ESA-ELM LID over SA-ELM LID, with an accuracy of 96.25% compared to 95.00%.
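K-Tournament selection, one of the two methods incorporated into the selection phase above, can be illustrated independently of the ELM. The candidate names and fitness values below are hypothetical:

```python
import random

random.seed(1)

def k_tournament(population, fitness, k=3):
    """Return the fittest of k randomly drawn candidates."""
    contestants = random.sample(range(len(population)), k)
    return population[max(contestants, key=lambda i: fitness[i])]

# Hypothetical candidate solutions and their fitness scores
population = ["cand_a", "cand_b", "cand_c", "cand_d", "cand_e"]
fitness = [0.61, 0.95, 0.72, 0.40, 0.88]

picks = [k_tournament(population, fitness) for _ in range(100)]
best_share = picks.count("cand_b") / len(picks)  # selection pressure toward the best
```

Larger `k` sharpens the selection pressure; `k = 1` degenerates to uniform random selection. This bias toward fitter candidates is what the altered selection phase exploits.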

  4. Emergent Literacy Skills in Preschool Children with Hearing Loss Who Use Spoken Language: Initial Findings from the Early Language and Literacy Acquisition (ELLA) Study

    Science.gov (United States)

    Werfel, Krystal L.

    2017-01-01

    Purpose: The purpose of this study was to compare change in emergent literacy skills of preschool children with and without hearing loss over a 6-month period. Method: Participants included 19 children with hearing loss and 14 children with normal hearing. Children with hearing loss used amplification and spoken language. Participants completed…

  5. THE INFLUENCE OF LANGUAGE USE AND LANGUAGE ATTITUDE ON THE MAINTENANCE OF COMMUNITY LANGUAGES SPOKEN BY MIGRANT STUDENTS

    Directory of Open Access Journals (Sweden)

    Leni Amalia Suek

    2014-05-01

    Full Text Available The maintenance of the community languages of migrant students is heavily determined by language use and language attitudes. The dominance of a majority language over a community language shapes migrant students' attitudes toward their native languages. When they perceive their native language as unimportant, they reduce the frequency of using it, even in the home domain. Solutions to the problem of maintaining community languages should therefore address language use and attitudes, which develop mostly in two important domains: school and family. Hence, the valorization of community languages should be promoted not only in the family but also in the school domain. Programs such as community language schools and community language programs can give migrant students opportunities to practice and use their native languages. Since educational resources such as class time, teachers, and government support are limited, the family plays a significant role in fostering positive attitudes toward the community language and in developing the use of native languages.

  6. Birds, primates, and spoken language origins: behavioral phenotypes and neurobiological substrates.

    Science.gov (United States)

    Petkov, Christopher I; Jarvis, Erich D

    2012-01-01

    Vocal learners such as humans and songbirds can learn to produce elaborate patterns of structurally organized vocalizations, whereas many other vertebrates such as non-human primates and most other bird groups either cannot or do so to a very limited degree. To explain the similarities among humans and vocal-learning birds and the differences with other species, various theories have been proposed. One set of theories are motor theories, which underscore the role of the motor system as an evolutionary substrate for vocal production learning. For instance, the motor theory of speech and song perception proposes enhanced auditory perceptual learning of speech in humans and song in birds, which suggests a considerable level of neurobiological specialization. Another, a motor theory of vocal learning origin, proposes that the brain pathways that control the learning and production of song and speech were derived from adjacent motor brain pathways. Another set of theories are cognitive theories, which address the interface between cognition and the auditory-vocal domains to support language learning in humans. Here we critically review the behavioral and neurobiological evidence for parallels and differences between the so-called vocal learners and vocal non-learners in the context of motor and cognitive theories. In doing so, we note that behaviorally vocal-production learning abilities are more distributed than categorical, as are the auditory-learning abilities of animals. We propose testable hypotheses on the extent of the specializations and cross-species correspondences suggested by motor and cognitive theories. We believe that determining how spoken language evolved is likely to become clearer with concerted efforts in testing comparative data from many non-human animal species.

  7. RECEPTION OF SPOKEN ENGLISH. MISHEARINGS IN THE LANGUAGE OF BUSINESS AND LAW

    Directory of Open Access Journals (Sweden)

    HOREA Ioana-Claudia

    2013-07-01

    Full Text Available Spoken English may sometimes pose a peculiar problem in the reception and decoding of auditory signals, which can lead to mishearings. Arising from erroneous perception, from a failure to understand the communication, and from an involuntary mental replacement of a certain element or structure by a more familiar one, these mistakes are most frequently encountered when listening to songs, where the melodic line, with its somewhat altered intonation, can facilitate confusion and produce the so-called mondegreens. Still, instances can be met in all domains of verbal communication, as shown by several examples noticed during classes of English as a foreign language (EFL) taught to non-philological students. Production and perception of language depend on a series of elements that influence the encoding and decoding of the message. These filters, belonging to both psychological and semantic categories, can interfere with the accuracy of emission and reception. Poor understanding of a notion or concept, combined with greater familiarity with a similar-sounding one, results in unconsciously picking the better-known structure. This means 'hearing' something other than what was said, something closer to the receiver's preoccupations and stock of knowledge than the original structure or word. Some mishearings are particularly relevant to teaching English for Specific Purposes (ESP), such as those encountered in classes of Business English or English for Law. Though not very likely to occur too often, given an intuitively felt inaccuracy, as the terms are known by their users to be specialised, such examples are still not ignorable. We therefore consider that they deserve closer attention, as they may become quite relevant in the global context of increasing workforce migration and the spread of multinational companies.

  8. Development of a spoken language identification system for South African languages

    CSIR Research Space (South Africa)

    Peché, M

    2009-12-01

    Full Text Available ... and complicates the design of the system as a whole. Current benchmark results are established by the National Institute of Standards and Technology (NIST) Language Recognition Evaluation (LRE) [12]. Initially started in 1996, the next evaluation was in 2003...

  9. The role of planum temporale in processing accent variation in spoken language comprehension.

    NARCIS (Netherlands)

    Adank, P.M.; Noordzij, M.L.; Hagoort, P.

    2012-01-01

    A repetition–suppression functional magnetic resonance imaging paradigm was used to explore the neuroanatomical substrates of processing two types of acoustic variation—speaker and accent—during spoken sentence comprehension. Recordings were made for two speakers and two accents: Standard Dutch and a

  10. Usable, Real-Time, Interactive Spoken Language Systems

    Science.gov (United States)

    1994-09-01

    Similarly, we included derivations (mostly plurals and possessives) of many open-class words in the domain. We also added about 400 concatenated word…utterances using a system of ‘realization rules’, which map the grammatical relation an argument bears to the head onto the semantic relation…syntactic categories as well. Representations of this form contain significantly more internal structure than specialized sublanguage models. This can be

  11. Gendered Language in Interactive Discourse

    Science.gov (United States)

    Hussey, Karen A.; Katz, Albert N.; Leith, Scott A.

    2015-01-01

    Over two studies, we examined the nature of gendered language in interactive discourse. In the first study, we analyzed gendered language from a chat corpus to see whether tokens of gendered language proposed in the gender-as-culture hypothesis (Maltz and Borker in "Language and social identity." Cambridge University Press, Cambridge, pp…

  12. Audiovisual spoken word recognition as a clinical criterion for sensory aids efficiency in Persian-language children with hearing loss.

    Science.gov (United States)

    Oryadi-Zanjani, Mohammad Majid; Vahab, Maryam; Bazrafkan, Mozhdeh; Haghjoo, Asghar

    2015-12-01

    The aim of this study was to examine the role of audiovisual speech recognition as a clinical criterion of cochlear implant or hearing aid efficiency in Persian-language children with severe-to-profound hearing loss. This research was administered as a cross-sectional study. The sample size was 60 Persian 5-7 year old children. The assessment tool was one of the subtests of the Persian version of the Test of Language Development-Primary 3. The study included two experiments: auditory-only and audiovisual presentation conditions. The test was a closed-set including 30 words which were orally presented by a speech-language pathologist. The scores of audiovisual word perception were significantly higher than in the auditory-only condition in the children with normal hearing (P < 0.05), whereas no significant difference was found between the auditory-only and audiovisual presentation conditions (P > 0.05). Audiovisual spoken word recognition can be applied as a clinical criterion to assess children with severe to profound hearing loss in order to find whether a cochlear implant or hearing aid has been efficient for them or not; i.e. if a child with hearing impairment who uses a CI or HA can obtain higher scores in audiovisual spoken word recognition than in the auditory-only condition, his/her auditory skills have appropriately developed due to an effective CI or HA as one of the main factors of auditory habilitation. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  13. How Does the Linguistic Distance between Spoken and Standard Language in Arabic Affect Recall and Recognition Performances during Verbal Memory Examination

    Science.gov (United States)

    Taha, Haitham

    2017-01-01

    The current research examined how Arabic diglossia affects verbal learning memory. Thirty native Arab college students were tested using auditory verbal memory test that was adapted according to the Rey Auditory Verbal Learning Test and developed in three versions: Pure spoken language version (SL), pure standard language version (SA), and…

  14. How appropriate are the English language test requirements for non-UK-trained nurses? A qualitative study of spoken communication in UK hospitals.

    Science.gov (United States)

    Sedgwick, Carole; Garner, Mark

    2017-06-01

    Non-native speakers of English who hold nursing qualifications from outside the UK are required to provide evidence of English language competence by achieving a minimum overall score of Band 7 on the International English Language Testing System (IELTS) academic test. To describe the English language required to deal with the daily demands of nursing in the UK. To compare these abilities with the stipulated levels on the language test. A tracking study was conducted with 4 nurses, and focus groups with 11 further nurses. The transcripts of the interviews and focus groups were analysed thematically for recurrent themes. These findings were then compared with the requirements of the IELTS spoken test. The study was conducted outside the participants' working shifts in busy London hospitals. The participants in the tracking study were selected opportunistically; all were trained in non-English speaking countries. Snowball sampling was used for the focus groups, of whom 4 were non-native and 7 native speakers of English. In the tracking study, each of the 4 nurses was interviewed on four occasions, outside the workplace, and as close to the end of a shift as possible. They were asked to recount their spoken interactions during the course of their shift. The participants in the focus groups were asked to describe their typical interactions with patients, family members, doctors, and nursing colleagues. They were prompted to recall specific instances of frequently-occurring communication problems. All interactions were audio-recorded, with the participants' permission, and transcribed. Nurses are at the centre of communication for patient care. They have to use appropriate registers to communicate with a range of health professionals, patients and their families. They must elicit information, calm and reassure, instruct, check procedures, ask for and give opinions, agree and disagree. Politeness strategies are needed to avoid threats to face. They participate in medical

  15. The role of planum temporale in processing accent variation in spoken language comprehension

    NARCIS (Netherlands)

    Adank, P.M.; Noordzij, M.L.; Hagoort, P.

    2012-01-01

    A repetition–suppression functional magnetic resonance imaging paradigm was used to explore the neuroanatomical substrates of processing two types of acoustic variation—speaker and accent—during spoken sentence comprehension. Recordings were made for two speakers and two accents: Standard Dutch and

  16. Conversational interfaces for task-oriented spoken dialogues: design aspects influencing interaction quality

    NARCIS (Netherlands)

    Niculescu, A.I.

    2011-01-01

    This dissertation focuses on the design and evaluation of speech-based conversational interfaces for task-oriented dialogues. Conversational interfaces are software programs enabling interaction with computer devices through natural language dialogue. Even though processing conversational speech is

  17. Is spoken Danish less intelligible than Swedish?

    NARCIS (Netherlands)

    Gooskens, Charlotte; van Heuven, Vincent J.; van Bezooijen, Renee; Pacilly, Jos J. A.

    2010-01-01

    The most straightforward way to explain why Danes understand spoken Swedish relatively better than Swedes understand spoken Danish would be that spoken Danish is intrinsically a more difficult language to understand than spoken Swedish. We discuss circumstantial evidence suggesting that Danish is

  18. Does Discourse Congruence Influence Spoken Language Comprehension before Lexical Association? Evidence from Event-Related Potentials

    Science.gov (United States)

    Boudewyn, Megan A.; Gordon, Peter C.; Long, Debra; Polse, Lara; Swaab, Tamara Y.

    2011-01-01

    The goal of this study was to examine how lexical association and discourse congruence affect the time course of processing incoming words in spoken discourse. In an ERP norming study, we presented prime-target pairs in the absence of a sentence context to obtain a baseline measure of lexical priming. We observed a typical N400 effect when participants heard critical associated and unassociated target words in word pairs. In a subsequent experiment, we presented the same word pairs in spoken discourse contexts. Target words were always consistent with the local sentence context, but were congruent or not with the global discourse (e.g., “Luckily Ben had picked up some salt and pepper/basil”, preceded by a context in which Ben was preparing marinara sauce (congruent) or dealing with an icy walkway (incongruent). ERP effects of global discourse congruence preceded those of local lexical association, suggesting an early influence of the global discourse representation on lexical processing, even in locally congruent contexts. Furthermore, effects of lexical association occurred earlier in the congruent than incongruent condition. These results differ from those that have been obtained in studies of reading, suggesting that the effects may be unique to spoken word recognition. PMID:23002319

  19. EVALUATIVE LANGUAGE IN SPOKEN AND SIGNED STORIES TOLD BY A DEAF CHILD WITH A COCHLEAR IMPLANT: WORDS, SIGNS OR PARALINGUISTIC EXPRESSIONS?

    Directory of Open Access Journals (Sweden)

    Ritva Takkinen

    2011-01-01

    Full Text Available In this paper the use and quality of the evaluative language produced by a bilingual child in a story-telling situation is analysed. The subject, an 11-year-old Finnish boy, Jimmy, is bilingual in Finnish sign language (FinSL) and spoken Finnish. He was born deaf but got a cochlear implant at the age of five. The data consist of a spoken and a signed version of “The Frog Story”. The analysis shows that evaluative devices and expressions differ in the spoken and signed stories told by the child. In his Finnish story he uses mostly lexical devices – comments on a character and the character’s actions as well as quoted speech occasionally combined with prosodic features. In his FinSL story he uses both lexical and paralinguistic devices in a balanced way.

  20. A Multilingual Approach to Analysing Standardized Test Results: Immigrant Primary School Children and the Role of Languages Spoken in a Bi-/Multilingual Community

    Science.gov (United States)

    De Angelis, Gessica

    2014-01-01

    The present study adopts a multilingual approach to analysing the standardized test results of primary school immigrant children living in the bi-/multilingual context of South Tyrol, Italy. The standardized test results are from the Invalsi test administered across Italy in 2009/2010. In South Tyrol, several languages are spoken on a daily basis…

  1. How and When Accentuation Influences Temporally Selective Attention and Subsequent Semantic Processing during On-Line Spoken Language Comprehension: An ERP Study

    Science.gov (United States)

    Li, Xiao-qing; Ren, Gui-qin

    2012-01-01

    An event-related brain potentials (ERP) experiment was carried out to investigate how and when accentuation influences temporally selective attention and subsequent semantic processing during on-line spoken language comprehension, and how the effect of accentuation on attention allocation and semantic processing changed with the degree of…

  2. Self-Ratings of Spoken Language Dominance: A Multilingual Naming Test (MINT) and Preliminary Norms for Young and Aging Spanish-English Bilinguals

    Science.gov (United States)

    Gollan, Tamar H.; Weissberger, Gali H.; Runnqvist, Elin; Montoya, Rosa I.; Cera, Cynthia M.

    2012-01-01

    This study investigated correspondence between different measures of bilingual language proficiency contrasting self-report, proficiency interview, and picture naming skills. Fifty-two young (Experiment 1) and 20 aging (Experiment 2) Spanish-English bilinguals provided self-ratings of proficiency level, were interviewed for spoken proficiency, and…

  3. Long-term memory traces for familiar spoken words in tonal languages as revealed by the Mismatch Negativity

    Directory of Open Access Journals (Sweden)

    Naiphinich Kotchabhakdi

    2004-11-01

    Full Text Available Mismatch negativity (MMN), a primary response to an acoustic change and an index of sensory memory, was used to investigate the processing of the discrimination between familiar and unfamiliar Consonant-Vowel (CV) speech contrasts. The MMN was elicited by rare familiar words presented among repetitive unfamiliar words. Phonetic and phonological contrasts were identical in all conditions. MMN elicited by the familiar word deviant was larger than that elicited by the unfamiliar word deviant. The presence of syllable contrast did significantly alter the word-elicited MMN in amplitude and scalp voltage field distribution. Thus, our results indicate the existence of word-related MMN enhancement largely independent of the word status of the standard stimulus. This enhancement may reflect the presence of a long-term memory trace for familiar spoken words in tonal languages.

  4. Let's all speak together! Exploring the masking effects of various languages on spoken word identification in multi-linguistic babble.

    Science.gov (United States)

    Gautreau, Aurore; Hoen, Michel; Meunier, Fanny

    2013-01-01

    This study aimed to characterize the linguistic interference that occurs during speech-in-speech comprehension by combining offline and online measures, which included an intelligibility task (at a -5 dB Signal-to-Noise Ratio) and 2 lexical decision tasks (at a -5 dB and 0 dB SNR) that were performed with French spoken target words. In these 3 experiments we always compared the masking effects of speech backgrounds (i.e., 4-talker babble) that were produced in the same language as the target language (i.e., French) or in unknown foreign languages (i.e., Irish and Italian) to the masking effects of corresponding non-speech backgrounds (i.e., speech-derived fluctuating noise). The fluctuating noise contained similar spectro-temporal information as babble but lacked linguistic information. At -5 dB SNR, both tasks revealed significantly divergent results between the unknown languages (i.e., Irish and Italian) with Italian and French hindering French target word identification to a similar extent, whereas Irish led to significantly better performances on these tasks. By comparing the performances obtained with speech and fluctuating noise backgrounds, we were able to evaluate the effect of each language. The intelligibility task showed a significant difference between babble and fluctuating noise for French, Irish and Italian, suggesting acoustic and linguistic effects for each language. However, the lexical decision task, which reduces the effect of post-lexical interference, appeared to be more accurate, as it only revealed a linguistic effect for French. Thus, although French and Italian had equivalent masking effects on French word identification, the nature of their interference was different. This finding suggests that the differences observed between the masking effects of Italian and Irish can be explained at an acoustic level but not at a linguistic level.

  5. "Now We Have Spoken."

    Science.gov (United States)

    Zimmer, Patricia Moore

    2001-01-01

    Describes the author's experiences directing a play translated and acted in Korean. Notes that she had to get familiar with the sound of the language spoken fluently, to see how an actor's thought is discerned when the verbal language is not understood. Concludes that so much of understanding and communication unfolds in ways other than with…

  6. Quarterly Data for Spoken Language Preferences of Social Security Retirement and Survivor Claimants (2014-2015)

    Data.gov (United States)

    Social Security Administration — This data set provides quarterly volumes for language preferences at the national level of individuals filing claims for Retirement and Survivor benefits for fiscal...

  7. Quarterly Data for Spoken Language Preferences of Social Security Retirement and Survivor Claimants (2016-onwards)

    Data.gov (United States)

    Social Security Administration — This data set provides quarterly volumes for language preferences at the national level of individuals filing claims for Retirement and Survivor benefits from fiscal...

  8. Development of lexical-semantic language system: N400 priming effect for spoken words in 18- and 24-month old children.

    Science.gov (United States)

    Rämä, Pia; Sirri, Louah; Serres, Josette

    2013-04-01

    Our aim was to investigate whether the developing language system, as measured by a priming task for spoken words, is organized by semantic categories. Event-related potentials (ERPs) were recorded during a priming task for spoken words in 18- and 24-month-old monolingual French-learning children. Spoken word pairs were either semantically related (e.g., train-bike) or unrelated (e.g., chicken-bike). The results showed that the N400-like priming effect occurred in 24-month-olds over the right parietal-occipital recording sites. In 18-month-olds the effect was observed similarly to 24-month-olds only in those children with higher word production ability. The results suggest that words are categorically organized in the mental lexicon of children at the age of 2 years, and even earlier in children with a high vocabulary. Copyright © 2013 Elsevier Inc. All rights reserved.

  9. Task-Oriented Spoken Dialog System for Second-Language Learning

    Science.gov (United States)

    Kwon, Oh-Woog; Kim, Young-Kil; Lee, Yunkeun

    2016-01-01

    This paper introduces a Dialog-Based Computer Assisted second-Language Learning (DB-CALL) system using task-oriented dialogue processing technology. The system promotes dialogue with a second-language learner for a specific task, such as purchasing tour tickets, ordering food, passing through immigration, etc. The dialog system plays a role of a…

  10. Propositional Density in Spoken and Written Language of Czech-Speaking Patients with Mild Cognitive Impairment

    Science.gov (United States)

    Smolík, Filip; Stepankova, Hana; Vyhnálek, Martin; Nikolai, Tomáš; Horáková, Karolína; Matejka, Štepán

    2016-01-01

    Purpose Propositional density (PD) is a measure of content richness in language production that declines in normal aging and more profoundly in dementia. The present study aimed to develop a PD scoring system for Czech and use it to compare PD in language productions of older people with amnestic mild cognitive impairment (aMCI) and control…

  11. Social Interaction Affects Neural Outcomes of Sign Language Learning As a Foreign Language in Adults.

    Science.gov (United States)

    Yusa, Noriaki; Kim, Jungho; Koizumi, Masatoshi; Sugiura, Motoaki; Kawashima, Ryuta

    2017-01-01

    Children naturally acquire a language in social contexts where they interact with their caregivers. Indeed, research shows that social interaction facilitates lexical and phonological development at the early stages of child language acquisition. It is not clear, however, whether the relationship between social interaction and learning applies to adult second language acquisition of syntactic rules. Does learning second language syntactic rules through social interactions with a native speaker or without such interactions impact behavior and the brain? The current study aims to answer this question. Adult Japanese participants learned a new foreign language, Japanese sign language (JSL), either through a native deaf signer or via DVDs. Neural correlates of acquiring new linguistic knowledge were investigated using functional magnetic resonance imaging (fMRI). The participants in each group were indistinguishable in terms of their behavioral data after the instruction. The fMRI data, however, revealed significant differences in the neural activities between two groups. Significant activations in the left inferior frontal gyrus (IFG) were found for the participants who learned JSL through interactions with the native signer. In contrast, no cortical activation change in the left IFG was found for the group who experienced the same visual input for the same duration via the DVD presentation. Given that the left IFG is involved in the syntactic processing of language, spoken or signed, learning through social interactions resulted in an fMRI signature typical of native speakers: activation of the left IFG. Thus, broadly speaking, availability of communicative interaction is necessary for second language acquisition and this results in observed changes in the brain.

  12. Learning to talk the talk and walk the walk: Interactional competence in academic spoken English

    Directory of Open Access Journals (Sweden)

    Richard F. Young

    2013-04-01

    Full Text Available In this article I present the theory of interactional competence and contrast it with alternative ways of describing a learner’s knowledge of language. The focus of interactional competence is the structure of recurring episodes of face-to-face interaction, episodes that are of social and cultural significance to a community of speakers. Such episodes I call discursive practices, and I argue that participants co-construct a discursive practice through an architecture of interactional resources that is specific to the practice. The resources include rhetorical script, the register of the practice, the turn-taking system, management of topics, the participation framework, and means for signalling boundaries and transitions. I exemplify the theory of interactional competence and the architecture of discursive practice by examining two instances of the same practice: office hours between teaching assistants and undergraduate students at an American university, one in Mathematics, one in Italian as a foreign language. By a close comparison of the interactional resources that participants bring to the two instances, I argue that knowledge and interactional skill are local and practice-specific, and that the joint construction of discursive practice involves participants making use of the resources that they have acquired in previous instances of the same practice.

  13. Language Outcomes in Deaf or Hard of Hearing Teenagers Who Are Spoken Language Users: Effects of Universal Newborn Hearing Screening and Early Confirmation.

    Science.gov (United States)

    Pimperton, Hannah; Kreppner, Jana; Mahon, Merle; Stevenson, Jim; Terlektsi, Emmanouela; Worsfold, Sarah; Yuen, Ho Ming; Kennedy, Colin R

    This study aimed to examine whether (a) exposure to universal newborn hearing screening (UNHS) and (b) early confirmation of hearing loss were associated with benefits to expressive and receptive language outcomes in the teenage years for a cohort of spoken language users. It also aimed to determine whether either of these two variables was associated with benefits to relative language gain from middle childhood to adolescence within this cohort. The participants were drawn from a prospective cohort study of a population sample of children with bilateral permanent childhood hearing loss, who varied in their exposure to UNHS and who had previously had their language skills assessed at 6-10 years. Sixty deaf or hard of hearing teenagers who were spoken language users and a comparison group of 38 teenagers with normal hearing completed standardized measures of their receptive and expressive language ability at 13-19 years. Teenagers exposed to UNHS did not show significantly better expressive (adjusted mean difference, 0.40; 95% confidence interval [CI], -0.26 to 1.05; d = 0.32) or receptive (adjusted mean difference, 0.68; 95% CI, -0.56 to 1.93; d = 0.28) language skills than those who were not. Those who had their hearing loss confirmed by 9 months of age did not show significantly better expressive (adjusted mean difference, 0.43; 95% CI, -0.20 to 1.05; d = 0.35) or receptive (adjusted mean difference, 0.95; 95% CI, -0.22 to 2.11; d = 0.42) language skills than those who had it confirmed later. In all cases, effect sizes were of small size and in favor of those exposed to UNHS or confirmed by 9 months. Subgroup analysis indicated larger beneficial effects of early confirmation for those deaf or hard of hearing teenagers without cochlear implants (N = 48; 80% of the sample), and these benefits were significant in the case of receptive language outcomes (adjusted mean difference, 1.55; 95% CI, 0.38 to 2.71; d = 0.78). Exposure to UNHS did not account for significant

  14. Bilateral Versus Unilateral Cochlear Implants in Children: A Study of Spoken Language Outcomes

    Science.gov (United States)

    Harris, David; Bennet, Lisa; Bant, Sharyn

    2014-01-01

    Objectives: Although it has been established that bilateral cochlear implants (CIs) offer additional speech perception and localization benefits to many children with severe to profound hearing loss, whether these improved perceptual abilities facilitate significantly better language development has not yet been clearly established. The aims of this study were to compare language abilities of children having unilateral and bilateral CIs to quantify the rate of any improvement in language attributable to bilateral CIs and to document other predictors of language development in children with CIs. Design: The receptive vocabulary and language development of 91 children was assessed when they were aged either 5 or 8 years old by using the Peabody Picture Vocabulary Test (fourth edition), and either the Preschool Language Scales (fourth edition) or the Clinical Evaluation of Language Fundamentals (fourth edition), respectively. Cognitive ability, parent involvement in children’s intervention or education programs, and family reading habits were also evaluated. Language outcomes were examined by using linear regression analyses. The influence of elements of parenting style, child characteristics, and family background as predictors of outcomes were examined. Results: Children using bilateral CIs achieved significantly better vocabulary outcomes and significantly higher scores on the Core and Expressive Language subscales of the Clinical Evaluation of Language Fundamentals (fourth edition) than did comparable children with unilateral CIs. Scores on the Preschool Language Scales (fourth edition) did not differ significantly between children with unilateral and bilateral CIs. Bilateral CI use was found to predict significantly faster rates of vocabulary and language development than unilateral CI use; the magnitude of this effect was moderated by child age at activation of the bilateral CI. In terms of parenting style, high levels of parental involvement, low amounts of

  15. Bilateral versus unilateral cochlear implants in children: a study of spoken language outcomes.

    Science.gov (United States)

    Sarant, Julia; Harris, David; Bennet, Lisa; Bant, Sharyn

    2014-01-01

    Although it has been established that bilateral cochlear implants (CIs) offer additional speech perception and localization benefits to many children with severe to profound hearing loss, whether these improved perceptual abilities facilitate significantly better language development has not yet been clearly established. The aims of this study were to compare language abilities of children having unilateral and bilateral CIs to quantify the rate of any improvement in language attributable to bilateral CIs and to document other predictors of language development in children with CIs. The receptive vocabulary and language development of 91 children was assessed when they were aged either 5 or 8 years old by using the Peabody Picture Vocabulary Test (fourth edition), and either the Preschool Language Scales (fourth edition) or the Clinical Evaluation of Language Fundamentals (fourth edition), respectively. Cognitive ability, parent involvement in children's intervention or education programs, and family reading habits were also evaluated. Language outcomes were examined by using linear regression analyses. The influence of elements of parenting style, child characteristics, and family background as predictors of outcomes were examined. Children using bilateral CIs achieved significantly better vocabulary outcomes and significantly higher scores on the Core and Expressive Language subscales of the Clinical Evaluation of Language Fundamentals (fourth edition) than did comparable children with unilateral CIs. Scores on the Preschool Language Scales (fourth edition) did not differ significantly between children with unilateral and bilateral CIs. Bilateral CI use was found to predict significantly faster rates of vocabulary and language development than unilateral CI use; the magnitude of this effect was moderated by child age at activation of the bilateral CI. In terms of parenting style, high levels of parental involvement, low amounts of screen time, and more time spent

  16. Yearly Data for Spoken Language Preferences of Supplemental Security Income Blind and Disabled Applicants (2016 Onwards)

    Data.gov (United States)

    Social Security Administration — This data set provides annual volumes for language preferences at the national level of individuals filing claims for SSI Blind and Disabled benefits from federal...

  17. Yearly Data for Spoken Language Preferences of Supplemental Security Income (Blind & Disabled) (2011-2015)

    Data.gov (United States)

    Social Security Administration — This data set provides annual volumes for language preferences at the national level of individuals filing claims for SSI Blind and Disabled benefits for federal fiscal years...

  18. Yearly Data for Spoken Language Preferences of Supplemental Security Income Aged Applicants (2011-Onward)

    Data.gov (United States)

    Social Security Administration — This data set provides annual volumes for language preferences at the national level of individuals filing claims for SSI Aged benefits from federal fiscal year 2011...

  19. Yearly Data for Spoken Language Preferences of Social Security Disability Insurance Claimants (2011-2015)

    Data.gov (United States)

    Social Security Administration — This data set provides annual volumes for language preferences at the national level of individuals filing claims for Social Security Disability Insurance benefits for federal fiscal years...

  20. Yearly Data for Spoken Language Preferences of End Stage Renal Disease Medicare Claimants (2011-2015)

    Data.gov (United States)

    Social Security Administration — This data set provides annual volumes for language preferences at the national level of individuals filing claims for ESRD Medicare benefits for federal fiscal year...

  1. Quarterly Data for Spoken Language Preferences of Supplemental Security Income Aged Applicants (2014-2015)

    Data.gov (United States)

    Social Security Administration — This data set provides quarterly volumes for language preferences at the national level of individuals filing claims for SSI Aged benefits for fiscal years 2014 -...

  2. Quarterly Data for Spoken Language Preferences of End Stage Renal Disease Medicare Claimants (2014-2015)

    Data.gov (United States)

    Social Security Administration — This data set provides quarterly volumes for language preferences at the national level of individuals filing claims for ESRD Medicare benefits for fiscal years 2014...

  3. Yearly Data for Spoken Language Preferences of End Stage Renal Disease Medicare Claimants (2016 Onwards)

    Data.gov (United States)

    Social Security Administration — This data set provides annual volumes for language preferences at the national level of individuals filing claims for ESRD Medicare benefits for federal fiscal year...

  4. ROILA : RObot Interaction LAnguage

    NARCIS (Netherlands)

    Mubin, O.

    2011-01-01

    The number of robots in our society is increasing rapidly. The number of service robots that interact with everyday people already outnumbers industrial robots. The easiest way to communicate with these service robots, such as Roomba or Nao, would be natural speech. However, the limitations

  5. How Spoken Language Comprehension is Achieved by Older Listeners in Difficult Listening Situations.

    Science.gov (United States)

    Schneider, Bruce A; Avivi-Reich, Meital; Daneman, Meredyth

    2016-01-01

    Comprehending spoken discourse in noisy situations is likely to be more challenging to older adults than to younger adults due to potential declines in the auditory, cognitive, or linguistic processes supporting speech comprehension. These challenges might force older listeners to reorganize the ways in which they perceive and process speech, thereby altering the balance between the contributions of bottom-up versus top-down processes to speech comprehension. The authors review studies that investigated the effect of age on listeners' ability to follow and comprehend lectures (monologues), and two-talker conversations (dialogues), and the extent to which individual differences in lexical knowledge and reading comprehension skill relate to individual differences in speech comprehension. Comprehension was evaluated after each lecture or conversation by asking listeners to answer multiple-choice questions regarding its content. Once individual differences in speech recognition for words presented in babble were compensated for, age differences in speech comprehension were minimized if not eliminated. However, younger listeners benefited more from spatial separation than did older listeners. Vocabulary knowledge predicted the comprehension scores of both younger and older listeners when listening was difficult, but not when it was easy. However, the contribution of reading comprehension to listening comprehension appeared to be independent of listening difficulty in younger adults but not in older adults. The evidence suggests (1) that most of the difficulties experienced by older adults are due to age-related auditory declines, and (2) that these declines, along with listening difficulty, modulate the degree to which selective linguistic and cognitive abilities are engaged to support listening comprehension in difficult listening situations. When older listeners experience speech recognition difficulties, their attentional resources are more likely to be deployed to…

  6. Case report: acquisition of three spoken languages by a child with a cochlear implant.

    Science.gov (United States)

    Francis, Alexander L; Ho, Diana Wai Lam

    2003-03-01

    There have been only two reports of multilingual cochlear implant users to date, and both of these were postlingually deafened adults. Here we report the case of a 6-year-old early-deafened child who is acquiring Cantonese, English and Mandarin in Hong Kong. He and two age-matched peers with similar educational backgrounds were tested using common, standardized tests of vocabulary and expressive and receptive language skills (Peabody Picture Vocabulary Test (Revised) and Reynell Developmental Language Scales version II). Results show that this child is acquiring Cantonese, English and Mandarin to a degree comparable to two classmates with normal hearing and similar educational and social backgrounds.

  7. Assessing Spoken Language Competence in Children with Selective Mutism: Using Parents as Test Presenters

    Science.gov (United States)

    Klein, Evelyn R.; Armstrong, Sharon Lee; Shipon-Blum, Elisa

    2013-01-01

    Children with selective mutism (SM) display a failure to speak in select situations despite speaking when comfortable. The purpose of this study was to obtain valid assessments of receptive and expressive language in 33 children (ages 5 to 12) with SM. Because some children with SM will speak to parents but not a professional, another purpose was…

  8. Cross-Sensory Correspondences and Symbolism in Spoken and Written Language

    Science.gov (United States)

    Walker, Peter

    2016-01-01

    Lexical sound symbolism in language appears to exploit the feature associations embedded in cross-sensory correspondences. For example, words incorporating relatively high acoustic frequencies (i.e., front/close rather than back/open vowels) are deemed more appropriate as names for concepts associated with brightness, lightness in weight,…

  9. Grammatical number processing and anticipatory eye movements are not tightly coordinated in English spoken language comprehension

    Directory of Open Access Journals (Sweden)

    Brian eRiordan

    2015-05-01

    Recent studies of eye movements in world-situated language comprehension have demonstrated that rapid processing of morphosyntactic information – e.g., grammatical gender and number marking – can produce anticipatory eye movements to referents in the visual scene. We investigated how type of morphosyntactic information and the goals of language users in comprehension affected eye movements, focusing on the processing of grammatical number morphology in English-speaking adults. Participants’ eye movements were recorded as they listened to simple English declarative (“There are the lions.”) and interrogative (“Where are the lions?”) sentences. In Experiment 1, no differences were observed in speed to fixate target referents when grammatical number information was informative relative to when it was not. The same result was obtained in a speeded task (Experiment 2) and in a task using mixed sentence types (Experiment 3). We conclude that grammatical number processing in English and eye movements to potential referents are not tightly coordinated. These results suggest limits on the role of predictive eye movements in concurrent linguistic and scene processing. We discuss how these results can inform and constrain predictive approaches to language processing.

  10. The interaction of lexical semantics and cohort competition in spoken word recognition: an fMRI study.

    Science.gov (United States)

    Zhuang, Jie; Randall, Billi; Stamatakis, Emmanuel A; Marslen-Wilson, William D; Tyler, Lorraine K

    2011-12-01

    Spoken word recognition involves the activation of multiple word candidates on the basis of the initial speech input--the "cohort"--and selection among these competitors. Selection may be driven primarily by bottom-up acoustic-phonetic inputs or it may be modulated by other aspects of lexical representation, such as a word's meaning [Marslen-Wilson, W. D. Functional parallelism in spoken word-recognition. Cognition, 25, 71-102, 1987]. We examined these potential interactions in an fMRI study by presenting participants with words and pseudowords for lexical decision. In a factorial design, we manipulated (a) cohort competition (high/low competitive cohorts which vary the number of competing word candidates) and (b) the word's semantic properties (high/low imageability). A previous behavioral study [Tyler, L. K., Voice, J. K., & Moss, H. E. The interaction of meaning and sound in spoken word recognition. Psychonomic Bulletin & Review, 7, 320-326, 2000] showed that imageability facilitated word recognition but only for words in high competition cohorts. Here we found greater activity in the left inferior frontal gyrus (BA 45, 47) and the right inferior frontal gyrus (BA 47) with increased cohort competition, an imageability effect in the left posterior middle temporal gyrus/angular gyrus (BA 39), and a significant interaction between imageability and cohort competition in the left posterior superior temporal gyrus/middle temporal gyrus (BA 21, 22). In words with high competition cohorts, high imageability words generated stronger activity than low imageability words, indicating a facilitatory role of imageability in a highly competitive cohort context. For words in low competition cohorts, there was no effect of imageability. These results support the behavioral data in showing that selection processes do not rely solely on bottom-up acoustic-phonetic cues but rather that the semantic properties of candidate words facilitate discrimination between competitors.

  11. Human inferior colliculus activity relates to individual differences in spoken language learning.

    Science.gov (United States)

    Chandrasekaran, Bharath; Kraus, Nina; Wong, Patrick C M

    2012-03-01

    A challenge to learning words of a foreign language is encoding nonnative phonemes, a process typically attributed to cortical circuitry. Using multimodal imaging methods [functional magnetic resonance imaging-adaptation (fMRI-A) and auditory brain stem responses (ABR)], we examined the extent to which pretraining pitch encoding in the inferior colliculus (IC), a primary midbrain structure, related to individual variability in learning to successfully use nonnative pitch patterns to distinguish words in American English-speaking adults. fMRI-A indexed the efficiency of pitch representation localized to the IC, whereas ABR quantified midbrain pitch-related activity with millisecond precision. In line with neural "sharpening" models, we found that efficient IC pitch pattern representation (indexed by fMRI) related to superior neural representation of pitch patterns (indexed by ABR), and consequently more successful word learning following sound-to-meaning training. Our results establish a critical role for the IC in speech-sound representation, consistent with the established role for the IC in the representation of communication signals in other animal models.

  12. SNOT-22: psychometric properties and cross-cultural adaptation into the Portuguese language spoken in Brazil.

    Science.gov (United States)

    Caminha, Guilherme Pilla; Melo Junior, José Tavares de; Hopkins, Claire; Pizzichini, Emilio; Pizzichini, Marcia Margaret Menezes

    2012-12-01

    Rhinosinusitis is a highly prevalent disease and a major cause of high medical costs. It has been proven to have an impact on the quality of life through generic health-related quality of life assessments. However, generic instruments may not be able to factor in the effects of interventions and treatments. SNOT-22 is a major disease-specific instrument to assess quality of life for patients with rhinosinusitis. Nevertheless, there is still no validated SNOT-22 version in our country. Cross-cultural adaptation of the SNOT-22 into Brazilian Portuguese and assessment of its psychometric properties. The Brazilian version of the SNOT-22 was developed according to international guidelines and was broken down into nine stages: 1) Preparation 2) Translation 3) Reconciliation 4) Back-translation 5) Comparison 6) Evaluation by the author of the SNOT-22 7) Revision by committee of experts 8) Cognitive debriefing 9) Final version. Second phase: prospective study consisting of a verification of the psychometric properties, by analyzing internal consistency and test-retest reliability. Cultural adaptation showed adequate understanding, acceptability and psychometric properties. We followed the recommended steps for the cultural adaptation of the SNOT-22 into Portuguese language, producing a tool for the assessment of patients with sinonasal disorders of clinical importance and for scientific studies.
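The psychometric phase described above rests on internal consistency, most commonly quantified with Cronbach's alpha (the abstract does not name the statistic, so that choice is an assumption here). A minimal sketch with invented response data:

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_subjects, n_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1).sum()   # per-item variances, summed
    total_var = scores.sum(axis=1).var(ddof=1)     # variance of subjects' totals
    return k / (k - 1) * (1 - item_vars / total_var)

# Hypothetical responses of five subjects to four 0-5 items
# (made-up numbers, not the SNOT-22 validation sample)
responses = [
    [0, 1, 1, 0],
    [2, 2, 3, 2],
    [3, 3, 3, 4],
    [4, 5, 4, 4],
    [5, 5, 5, 5],
]
print(round(cronbach_alpha(responses), 3))  # 0.982: high internal consistency
```

Values above roughly 0.7 are conventionally taken as adequate internal consistency for a scale.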

  13. Language and Cognition Interaction Neural Mechanisms

    OpenAIRE

    Perlovsky, Leonid

    2011-01-01

    How do language and cognition interact in thinking? Is language just used for communication of completed thoughts, or is it fundamental for thinking? Existing approaches have not led to a computational theory. We develop a hypothesis that language and cognition are two separate but closely interacting mechanisms. Language accumulates cultural wisdom; cognition develops mental representations modeling surrounding world and adapts cultural knowledge to concrete circumstances of life. Language is a...

  14. Examination of validity in spoken language evaluations: Adult onset stuttering following mild traumatic brain injury.

    Science.gov (United States)

    Roth, Carole R; Cornis-Pop, Micaela; Beach, Woodford A

    2015-01-01

    Reports of increased incidence of adult onset stuttering in veterans and service members with mild traumatic brain injury (mTBI) from combat operations in Iraq and Afghanistan lead to a reexamination of the neurogenic vs. psychogenic etiology of stuttering. This article proposes to examine the merit of the dichotomy between neurogenic and psychogenic bases of stuttering, including symptom exaggeration, for the evaluation and treatment of the disorder. Two case studies of adult onset stuttering in service members with mTBI from improvised explosive device blasts are presented in detail. Speech fluency was disrupted by abnormal pauses and speech hesitations, brief blocks, rapid repetitions, and occasional prolongations. There was also wide variability in the frequency of stuttering across topics and conversational situations. Treatment focused on reducing the frequency and severity of dysfluencies and included educational, psychological, environmental, and behavioral interventions. Stuttering characteristics as well as the absence of objective neurological findings ruled out neurogenic basis of stuttering in these two cases and pointed to psychogenic causes. However, the differential diagnosis had only limited value for developing the plan of care. The successful outcomes of the treatment serve to illustrate the complex interaction of neurological, psychological, emotional, and environmental factors of post-concussive symptoms and to underscore the notion that there are many facets to symptom presentation in post-combat health.

  15. Phonological processing of rhyme in spoken language and location in sign language by deaf and hearing participants: a neurophysiological study.

    Science.gov (United States)

    Colin, C; Zuinen, T; Bayard, C; Leybaert, J

    2013-06-01

    Sign languages (SL), like oral languages (OL), organize elementary, meaningless units into meaningful semantic units. Our aim was to compare, at behavioral and neurophysiological levels, the processing of the location parameter in French Belgian SL to that of the rhyme in oral French. Ten hearing and 10 profoundly deaf adults performed a rhyme judgment task in OL and a similarity judgment on location in SL. Stimuli were pairs of pictures. As regards OL, deaf subjects' performances, although above chance level, were significantly lower than those of hearing subjects, suggesting that a metaphonological analysis is possible for deaf people but rests on phonological representations that are less precise than in hearing people. As regards SL, deaf subjects' scores indicated that a metaphonological judgment may be performed on location. The contingent negative variation (CNV) evoked by the first picture of a pair was similar in hearing subjects in OL and in deaf subjects in OL and SL. However, an N400 evoked by the second picture of the non-rhyming pairs was evidenced only in hearing subjects in OL. The absence of N400 in deaf subjects may be interpreted as the failure to associate two words according to their rhyme in OL or to their location in SL. Although deaf participants can perform metaphonological judgments in OL, they differ from hearing participants both behaviorally and in ERP. Judgment of location in SL is possible for deaf signers, but, contrary to rhyme judgment in hearing participants, does not elicit any N400. Copyright © 2013 Elsevier Masson SAS. All rights reserved.

  16. Assessing oral proficiency in computer-assisted foreign language learning: A study in the context of teletandem interactions

    Directory of Open Access Journals (Sweden)

    Douglas Altamiro CONSOLO

    2015-12-01

    An innovative aspect in the area of language assessment has been to evaluate oral language proficiency in distant interactions by means of computers. In this paper, we present the results of a qualitative research study that aimed to analyze features of the language spoken in a computer-aided learning and teaching context constituted by teletandem interactions. The data were collected in the scope of the Teletandem Brazil project by means of interviews, audio and video recordings of online interactions, questionnaires and field notes. The results offer contributions for the areas of assessment, teacher education and teaching Portuguese for foreigners.

  17. Early use of orthographic information in spoken word recognition: Event-related potential evidence from the Korean language.

    Science.gov (United States)

    Kwon, Youan; Choi, Sungmook; Lee, Yoonhyoung

    2016-04-01

    This study examines whether orthographic information is used during prelexical processes in spoken word recognition by investigating ERPs during spoken word processing for Korean words. Differential effects due to orthographic syllable neighborhood size and sound-to-spelling consistency on P200 and N320 were evaluated by recording ERPs from 42 participants during a lexical decision task. The results indicate that P200 was smaller for words whose orthographic syllable neighbors are large in number rather than those that are small. In addition, a word with a large orthographic syllable neighborhood elicited a smaller N320 effect than a word with a small orthographic syllable neighborhood only when the word had inconsistent sound-to-spelling mapping. The results provide support for the assumption that orthographic information is used early during the prelexical spoken word recognition process. © 2015 Society for Psychophysiological Research.

  18. A randomized trial comparison of the effects of verbal and pictorial naturalistic communication strategies on spoken language for young children with autism.

    Science.gov (United States)

    Schreibman, Laura; Stahmer, Aubyn C

    2014-05-01

    Presently there is no consensus on the specific behavioral treatment of choice for targeting language in young nonverbal children with autism. This randomized clinical trial compared the effectiveness of a verbally-based intervention, Pivotal Response Training (PRT) to a pictorially-based behavioral intervention, the Picture Exchange Communication System (PECS) on the acquisition of spoken language by young (2-4 years), nonverbal or minimally verbal (≤9 words) children with autism. Thirty-nine children were randomly assigned to either the PRT or PECS condition. Participants received on average 247 h of intervention across 23 weeks. Dependent measures included overall communication, expressive vocabulary, pictorial communication and parent satisfaction. Children in both intervention groups demonstrated increases in spoken language skills, with no significant difference between the two conditions. Seventy-eight percent of all children exited the program with more than 10 functional words. Parents were very satisfied with both programs but indicated PECS was more difficult to implement.

  19. How Does the Linguistic Distance Between Spoken and Standard Language in Arabic Affect Recall and Recognition Performances During Verbal Memory Examination.

    Science.gov (United States)

    Taha, Haitham

    2017-06-01

    The current research examined how Arabic diglossia affects verbal learning memory. Thirty native Arab college students were tested using an auditory verbal memory test that was adapted from the Rey Auditory Verbal Learning Test and developed in three versions: a pure spoken language version (SL), a pure standard language version (SA), and a phonologically similar version (PS). The results showed that for immediate free recall, performance was better in the SL and PS conditions than in the SA one. However, for delayed recall and recognition, the results did not reveal any significant consistent effect of diglossia. Accordingly, it was suggested that diglossia has a significant effect on storage and short-term memory functions but not on long-term memory functions. The results were discussed in light of different approaches in the field of bilingual memory.

  20. Evaluating Attributions of Delay and Confusion in Young Bilinguals: Special Insights from Infants Acquiring a Signed and a Spoken Language.

    Science.gov (United States)

    Petitto, Laura Ann; Holowka, Siobhan

    2002-01-01

    Examines whether early simultaneous bilingual language exposure causes children to be language delayed or confused. Cites research suggesting normal and parallel linguistic development occurs in each language in young children and young children's dual language developments are similar to monolingual language acquisition. Research on simultaneous…

  1. The Relationship between Intrinsic Couplings of the Visual Word Form Area with Spoken Language Network and Reading Ability in Children and Adults

    Directory of Open Access Journals (Sweden)

    Yu Li

    2017-06-01

    Reading plays a key role in education and communication in modern society. Learning to read establishes the connections between the visual word form area (VWFA) and language areas responsible for speech processing. Using resting-state functional connectivity (RSFC) and Granger Causality Analysis (GCA) methods, the current developmental study aimed to identify the difference in the relationship between the connections of VWFA-language areas and reading performance in both adults and children. The results showed that: (1) the spontaneous connectivity between VWFA and the spoken language areas, i.e., the left inferior frontal gyrus/supramarginal gyrus (LIFG/LSMG), was stronger in adults compared with children; (2) the spontaneous functional patterns of connectivity between VWFA and language network were negatively correlated with reading ability in adults but not in children; (3) the causal influence from LIFG to VWFA was negatively correlated with reading ability in adults but not in children; (4) the RSFCs between left posterior middle frontal gyrus (LpMFG) and VWFA/LIFG were positively correlated with reading ability in both adults and children; and (5) the causal influence from LIFG to LSMG was positively correlated with reading ability in both groups. These findings provide insights into the relationship between VWFA and the language network for reading, and the role of the unique features of Chinese in the neural circuits of reading.
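The "causal influence" findings in this record come from Granger Causality Analysis, which asks whether the past of one region's time series improves prediction of another's beyond that series' own past. A minimal lag-1 sketch on synthetic data (the variables and data are illustrative, not the study's fMRI series):

```python
import numpy as np

def granger_f_stat(x, y, lag=1):
    """F-statistic for whether past values of x help predict y beyond
    y's own past (a minimal lag-1 Granger sketch; real GCA also selects
    the model order and tests significance)."""
    y_t = y[lag:]                                  # values to predict
    y_past, x_past = y[:-lag], x[:-lag]
    ones = np.ones_like(y_past)
    Xr = np.column_stack([ones, y_past])           # restricted: y's past only
    Xf = np.column_stack([ones, y_past, x_past])   # full: adds x's past
    rss = lambda X: np.sum((y_t - X @ np.linalg.lstsq(X, y_t, rcond=None)[0]) ** 2)
    rss_r, rss_f = rss(Xr), rss(Xf)
    n_extra = 1                                    # regressors added by the full model
    return ((rss_r - rss_f) / n_extra) / (rss_f / (len(y_t) - Xf.shape[1]))

# Synthetic series in which x drives y with a one-step delay
rng = np.random.default_rng(0)
x = rng.normal(size=500)
y = np.zeros(500)
for t in range(1, 500):
    y[t] = 0.8 * x[t - 1] + 0.1 * rng.normal()

print(granger_f_stat(x, y) > granger_f_stat(y, x))  # True: the x→y direction dominates
```

A large F in one direction and a small one in the reverse direction is what licenses statements like "the causal influence from LIFG to VWFA".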

  2. Language evolution and human-computer interaction

    Science.gov (United States)

    Grudin, Jonathan; Norman, Donald A.

    1991-01-01

    Many of the issues that confront designers of interactive computer systems also appear in natural language evolution. Natural languages and human-computer interfaces share as their primary mission the support of extended 'dialogues' between responsive entities. Because in each case one participant is a human being, some of the pressures operating on natural languages, causing them to evolve in order to better support such dialogue, also operate on human-computer 'languages' or interfaces. This does not necessarily push interfaces in the direction of natural language - since one entity in this dialogue is not a human, this is not to be expected. Nonetheless, by discerning where the pressures that guide natural language evolution also appear in human-computer interaction, we can contribute to the design of computer systems and obtain a new perspective on natural languages.

  3. Interaction between Language and Literature

    Directory of Open Access Journals (Sweden)

    Mustafa Serbes

    2017-06-01

    Every society is composed of individuals who share a common culture through the language they use. The continuity of this culture and its transmission to later generations occur mostly through language. Literary works are the transmitters that convey the cultural heritage of nations to the future and shed light on the past. At the same time, language itself is shaped in the hands of masters of language, namely literary men. As the famous Russian literary critic, writer and philosopher Belinski said, literature is the best expression of a nation’s spirit in the medium of language. In other words, literature adds something of the soul and of feeling to the work of art through language.

  4. Language and Cognition Interaction Neural Mechanisms

    Directory of Open Access Journals (Sweden)

    Leonid Perlovsky

    2011-01-01

    How do language and cognition interact in thinking? Is language just used for communication of completed thoughts, or is it fundamental for thinking? Existing approaches have not led to a computational theory. We develop a hypothesis that language and cognition are two separate but closely interacting mechanisms. Language accumulates cultural wisdom; cognition develops mental representations modeling the surrounding world and adapts cultural knowledge to concrete circumstances of life. Language is acquired from surrounding language “ready-made” and therefore can be acquired early in life. This early acquisition of language in childhood encompasses the entire hierarchy from sounds to words, to phrases, and to the highest concepts existing in culture. Cognition is developed from experience. Yet cognition cannot be acquired from experience alone; language is a necessary intermediary, a “teacher.” A mathematical model is developed; it overcomes previous difficulties and leads to a computational theory. This model is consistent with Arbib's “language prewired brain” built on top of the mirror neuron system. It models recent neuroimaging data about cognition that remain unnoticed by other theories. A number of properties of language and cognition are explained that previously seemed mysterious, including the influence of language grammar on cultural evolution, which may explain specifics of English and Arabic cultures.

  7. Improving Language Models in Speech-Based Human-Machine Interaction

    Directory of Open Access Journals (Sweden)

    Raquel Justo

    2013-02-01

    This work focuses on speech-based human-machine interaction. Specifically, a Spoken Dialogue System (SDS) that could be integrated into a robot is considered. Since Automatic Speech Recognition is one of the most sensitive tasks that must be confronted in such systems, the goal of this work is to improve the results obtained by this specific module. In order to do so, a hierarchical Language Model (LM) is considered. Different series of experiments were carried out using the proposed models over different corpora and tasks. The results obtained show that these models provide greater accuracy in the recognition task. Additionally, the influence of the Acoustic Modelling (AM) on the improvement percentage of the Language Models has also been explored. Finally, hierarchical Language Models were also successfully employed in a language understanding task, as shown in an additional series of experiments.
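One simple way to realize the hierarchical Language Model idea is a class-based bigram that factors P(w_i | w_{i-1}) into a class-transition probability times a word-emission probability; the toy corpus and hand-assigned word classes below are invented for illustration and are not from the paper:

```python
from collections import Counter, defaultdict

# Toy corpus and hand-assigned word classes (illustrative; the paper's
# hierarchical LMs are induced from real dialogue corpora)
corpus = [
    "turn on the light", "turn off the light",
    "turn on the radio", "turn off the radio",
]
word_class = {"<s>": "<s>", "turn": "VERB", "on": "PART", "off": "PART",
              "the": "DET", "light": "NOUN", "radio": "NOUN"}

class_bigrams, class_unigrams = Counter(), Counter()
word_given_class = defaultdict(Counter)

for sent in corpus:
    words = ["<s>"] + sent.split()
    for prev, cur in zip(words, words[1:]):
        cp, cc = word_class[prev], word_class[cur]
        class_bigrams[(cp, cc)] += 1    # class-level transition counts
        class_unigrams[cp] += 1
        word_given_class[cc][cur] += 1  # word-level emission counts

def prob(prev, cur):
    """P(cur | prev) ≈ P(class(cur) | class(prev)) * P(cur | class(cur))."""
    cp, cc = word_class[prev], word_class[cur]
    p_trans = class_bigrams[(cp, cc)] / class_unigrams[cp]
    p_emit = word_given_class[cc][cur] / sum(word_given_class[cc].values())
    return p_trans * p_emit

print(prob("the", "light"))  # 0.5: DET→NOUN is certain; two equally likely nouns
```

Pooling counts at the class level is what lets such a hierarchy generalize to word sequences never seen verbatim in the training corpus, which is why it can help a recognizer on sparse dialogue data.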

  8. When the Macro Facilitates the Micro: A Study of Regimentation and Emergence in Spoken Interaction

    Science.gov (United States)

    Warriner, Doris S.

    2012-01-01

    In moments of "dispersion, diaspora, and reterritorialization" (Amy Shuman 2006), the personal, the interactional, and the improvised (the "micro") cannot be separated analytically from circulating ideologies, institutional norms, or cultural flows (the "macro"). With a focus on the emergence of identities within social interaction, specifically…

  9. Language configurations of degree-related denotations in the spoken production of a group of Colombian EFL university students: A corpus-based study

    Directory of Open Access Journals (Sweden)

    Wilder Yesid Escobar

    2015-05-01

    Recognizing that developing the competences needed to use linguistic resources appropriately according to contextual characteristics (pragmatics) is as important as the culturally embedded linguistic knowledge itself (semantics), and that both are equally essential to forming competent speakers of English in foreign-language contexts, this research relies on corpus linguistics to analyze both the scope and the limitations of the sociolinguistic knowledge and communicative skills of English students at the university level. To that end, a linguistic corpus was assembled, compared to an existing corpus of native speakers, and analyzed in terms of the frequency, overuse, underuse, misuse, ambiguity, success, and failure of the linguistic parameters used in speech acts. The findings describe the linguistic configurations employed to modify levels and degrees of description (a salient semantic theme in the EFL learners' corpus), appealing to the sociolinguistic principles that govern meaning making and language use under the social conditions of the environments where the language is naturally spoken for sociocultural exchange.

  10. Interaction and Instructed Second Language Acquisition

    Science.gov (United States)

    Loewen, Shawn; Sato, Masatoshi

    2018-01-01

    Interaction is an indispensable component in second language acquisition (SLA). This review surveys the instructed SLA research, both classroom and laboratory-based, that has been conducted primarily within the interactionist approach, beginning with the core constructs of interaction, namely input, negotiation for meaning, and output. The review…

  11. Transformations: Mobile Interaction & Language Learning

    Science.gov (United States)

    Carroll, Fiona; Kop, Rita; Thomas, Nathan; Dunning, Rebecca

    2015-01-01

    Mobile devices and the interactions that these technologies afford have the potential to change the face and nature of education in our schools. Indeed, mobile technological advances are seen to offer better access to educational material and new interactive ways to learn. However, the question arises, as to whether these new technologies are…

  12. Cross-Cultural Differences in Beliefs and Practices that Affect the Language Spoken to Children: Mothers with Indian and Western Heritage

    Science.gov (United States)

    Simmons, Noreen; Johnston, Judith

    2007-01-01

    Background: Speech-language pathologists often advise families about interaction patterns that will facilitate language learning. This advice is typically based on research with North American families of European heritage and may not be culturally suited for non-Western families. Aims: The goal of the project was to identify differences in the…

  13. Quarterly Data for Spoken Language Preferences of Supplemental Security Income Blind and Disabled Applicants (2014-2015)

    Data.gov (United States)

    Social Security Administration — This data set provides quarterly volumes for language preferences at the national level of individuals filing claims for SSI Blind and Disabled benefits for fiscal...

  14. Social Security Administration - Quarterly Data for Spoken Language Preferences of Supplemental Security Income Blind and Disabled Applicants (2016-onwards)

    Data.gov (United States)

    Social Security Administration — This data set provides quarterly volumes for language preferences at the national level of individuals filing claims for SSI Blind and Disabled benefits from fiscal...

  15. Domain Specific Languages for Interactive Web Services

    DEFF Research Database (Denmark)

    Brabrand, Claus

    This dissertation shows how domain specific languages may be applied to the domain of interactive Web services to obtain flexible, safe, and efficient solutions. We show how each of four key aspects of interactive Web services involving sessions, dynamic creation of HTML/XML documents, form field…, that supports virtually all aspects of the development of interactive Web services and provides flexible, safe, and efficient solutions.

  16. A Mother Tongue Spoken Mainly by Fathers.

    Science.gov (United States)

    Corsetti, Renato

    1996-01-01

    Reviews what is known about Esperanto as a home language and first language. Recorded cases of Esperanto-speaking families are known since 1919, and in nearly all of the approximately 350 families documented, the language is spoken to the children by the father. The data suggests that this "artificial bilingualism" can be as successful…

  17. Introducing Spoken Dialogue Systems into Intelligent Environments

    CERN Document Server

    Heinroth, Tobias

    2013-01-01

    Introducing Spoken Dialogue Systems into Intelligent Environments outlines the formalisms of a novel knowledge-driven framework for spoken dialogue management and presents the implementation of a model-based Adaptive Spoken Dialogue Manager (ASDM) called OwlSpeak. The authors have identified three stakeholders that potentially influence the behavior of the ASDM: the user, the SDS, and a complex Intelligent Environment (IE) consisting of various devices, services, and task descriptions. The theoretical foundation of a working ontology-based spoken dialogue description framework, the prototype implementation of the ASDM, and the evaluation activities that are presented as part of this book contribute to the ongoing spoken dialogue research by establishing the fertile ground of model-based adaptive spoken dialogue management. This monograph is ideal for advanced undergraduate students, PhD students, and postdocs as well as academic and industrial researchers and developers in speech and multimodal interactive ...

  18. Declarative language design for interactive visualization.

    Science.gov (United States)

    Heer, Jeffrey; Bostock, Michael

    2010-01-01

    We investigate the design of declarative, domain-specific languages for constructing interactive visualizations. By separating specification from execution, declarative languages can simplify development, enable unobtrusive optimization, and support retargeting across platforms. We describe the design of the Protovis specification language and its implementation within an object-oriented, statically-typed programming language (Java). We demonstrate how to support rich visualizations without requiring a toolkit-specific data model and extend Protovis to enable declarative specification of animated transitions. To support cross-platform deployment, we introduce rendering and event-handling infrastructures decoupled from the runtime platform, letting designers retarget visualization specifications (e.g., from desktop to mobile phone) with reduced effort. We also explore optimizations such as runtime compilation of visualization specifications, parallelized execution, and hardware-accelerated rendering. We present benchmark studies measuring the performance gains provided by these optimizations and compare performance to existing Java-based visualization tools, demonstrating scalability improvements exceeding an order of magnitude.
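    Protovis itself is a JavaScript library whose real API differs from anything shown here; the Python sketch below only illustrates the core idea the abstract describes, namely separating a declarative specification from the execution backend so that the same spec can be retargeted. All names in it are hypothetical:

    ```python
    # Toy declarative chart specification, kept separate from execution
    # (illustrative only: Protovis's real API is JavaScript and differs).
    spec = {
        "mark": "bar",
        "data": [1, 2, 0.5],
        "encode": {"height": lambda d: d * 100, "width": 20},
    }

    def render_text(spec):
        """One possible backend interpreting the spec; another backend could
        target SVG or a mobile canvas, which is the point of retargeting."""
        assert spec["mark"] == "bar", "this toy backend only knows bar marks"
        return [f"bar(h={int(spec['encode']['height'](d))}, w={spec['encode']['width']})"
                for d in spec["data"]]
    ```

    Because the spec is pure data plus encodings, a runtime is free to optimize it (compile, parallelize, hardware-accelerate) without the designer changing the specification.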

  19. Towards Adaptive Spoken Dialog Systems

    CERN Document Server

    Schmitt, Alexander

    2013-01-01

    In Towards Adaptive Spoken Dialog Systems, authors Alexander Schmitt and Wolfgang Minker investigate statistical approaches that allow for the recognition of negative dialog patterns in Spoken Dialog Systems (SDS). The presented stochastic methods allow flexible, portable, and accurate use. Beginning with the foundations of machine learning and pattern recognition, this monograph examines how frequently users show negative emotions in spoken dialog systems and develops novel approaches to speech-based emotion recognition using a hybrid approach to modeling emotions. The authors make use of statistical methods based on acoustic, linguistic, and contextual features to examine the relationship between the interaction flow and the occurrence of emotions, using non-acted recordings of several thousand real users from commercial and non-commercial SDS. Additionally, the authors present novel statistical methods that spot problems within a dialog based on interaction patterns. The approaches enable future SDS to offer m...

  20. Spoken word recognition in young tone language learners: Age-dependent effects of segmental and suprasegmental variation.

    Science.gov (United States)

    Ma, Weiyi; Zhou, Peng; Singh, Leher; Gao, Liqun

    2017-02-01

    The majority of the world's languages rely on both segmental (vowels, consonants) and suprasegmental (lexical tones) information to contrast the meanings of individual words. However, research on early language development has mostly focused on the acquisition of vowel-consonant languages. Developmental research comparing sensitivity to segmental and suprasegmental features in young tone learners is extremely rare. This study examined 2- and 3-year-old monolingual tone learners' sensitivity to vowels and tones. Experiment 1a tested the influence of vowel and tone variation on novel word learning. Vowel and tone variation hindered word recognition efficiency in both age groups. However, tone variation hindered word recognition accuracy only in 2-year-olds, while 3-year-olds were insensitive to tone variation. Experiment 1b demonstrated that 3-year-olds could use tones to learn new words when additional support was provided, and additionally, that Tone 3 words were exceptionally difficult to learn. Experiment 2 confirmed a similar pattern of results when children were presented with familiar words. This study is the first to show that despite the importance of tones in tone languages, vowels maintain primacy over tones in young children's word recognition and that tone sensitivity in word learning and recognition changes between 2 and 3 years of age. The findings suggest that early lexical processes are more tightly constrained by variation in vowels than by tones. Copyright © 2016 Elsevier B.V. All rights reserved.

  1. A Pilot Study of Telepractice for Teaching Listening and Spoken Language to Mandarin-Speaking Children with Congenital Hearing Loss

    Science.gov (United States)

    Chen, Pei-Hua; Liu, Ting-Wei

    2017-01-01

    Telepractice provides an alternative form of auditory-verbal therapy (eAVT) intervention through videoconferencing; this can be of immense benefit for children with hearing loss, especially those living in rural or remote areas. The effectiveness of eAVT for the language development of Mandarin-speaking preschoolers with hearing loss was…

  2. Spoken language and everyday functioning in 5-year-old children using hearing aids or cochlear implants.

    Science.gov (United States)

    Cupples, Linda; Ching, Teresa Yc; Button, Laura; Seeto, Mark; Zhang, Vicky; Whitfield, Jessica; Gunnourie, Miriam; Martin, Louise; Marnane, Vivienne

    2017-09-12

    This study investigated the factors influencing 5-year language, speech and everyday functioning of children with congenital hearing loss. Standardised tests including PLS-4, PPVT-4 and DEAP were directly administered to children. Parent reports on language (CDI) and everyday functioning (PEACH) were collected. Regression analyses were conducted to examine the influence of a range of demographic variables on outcomes. Participants were 339 children enrolled in the Longitudinal Outcomes of Children with Hearing Impairment (LOCHI) study. Children's average receptive and expressive language scores were approximately 1 SD below the mean of typically developing children, and scores on speech production and everyday functioning were more than 1 SD below. Regression models accounted for 23% to 70% of the variance in scores across different tests. Earlier CI switch-on and higher non-verbal ability were associated with better outcomes in most domains. Earlier HA fitting and use of oral communication were associated with better outcomes on directly administered language assessments. Severity of hearing loss and maternal education influenced outcomes of children with HAs. The presence of additional disabilities affected outcomes of children with CIs. The findings provide strong evidence for the benefits of early HA fitting and early CI for improving children's outcomes.

  3. Ragnar Rommetveit's Approach to Everyday Spoken Dialogue from Within.

    Science.gov (United States)

    Kowal, Sabine; O'Connell, Daniel C

    2016-04-01

    The following article presents basic concepts and methods of Ragnar Rommetveit's (born 1924) hermeneutic-dialogical approach to everyday spoken dialogue with a focus on both shared consciousness and linguistically mediated meaning. He developed this approach originally in his engagement of mainstream linguistic and psycholinguistic research of the 1960s and 1970s. He criticized this research tradition for its individualistic orientation and its adherence to experimental methodology which did not allow the engagement of interactively established meaning and understanding in everyday spoken dialogue. As a social psychologist influenced by phenomenological philosophy, Rommetveit opted for an alternative conceptualization of such dialogue as a contextualized, partially private world, temporarily co-established by interlocutors on the basis of shared consciousness. He argued that everyday spoken dialogue should be investigated from within, i.e., from the perspectives of the interlocutors and from a psychology of the second person. Hence, he developed his approach with an emphasis on intersubjectivity, perspectivity and perspectival relativity, meaning potential of utterances, and epistemic responsibility of interlocutors. In his methods, he limited himself for the most part to casuistic analyses, i.e., logical analyses of fictitious examples to argue for the plausibility of his approach. After many years of experimental research on language, he pursued his phenomenologically oriented research on dialogue in English-language publications from the late 1980s up to 2003. During that period, he engaged psycholinguistic research on spoken dialogue carried out by Anglo-American colleagues only occasionally. Although his work remained unfinished and open to development, it provides both a challenging alternative and supplement to current Anglo-American research on spoken dialogue and some overlap therewith.

  4. Czech spoken in Bohemia and Moravia

    NARCIS (Netherlands)

    Šimáčková, Š.; Podlipský, V.J.; Chládková, K.

    2012-01-01

    As a western Slavic language of the Indo-European family, Czech is closest to Slovak and Polish. It is spoken as a native language by nearly 10 million people in the Czech Republic (Czech Statistical Office n.d.). About two million people living abroad, mostly in the USA, Canada, Austria, Germany,

  5. The Interactional Architecture of the Language Classroom

    Directory of Open Access Journals (Sweden)

    Paul Seedhouse

    2009-11-01

    Full Text Available This article provides a summary of some of the key ideas of Seedhouse (2004). The study applies Conversation Analysis (CA) methodology to an extensive and varied database of language lessons from around the world and attempts to answer the question 'How is L2 classroom interaction organised?' The main thesis is that there is a reflexive relationship between pedagogy and interaction in the L2 classroom: a two-way, mutually dependent relationship that forms the foundation of the organisation of interaction in L2 classrooms. This reflexive relationship between pedagogy and interaction is the omnipresent and unique feature of the L2 classroom. Whoever is taking part in L2 classroom interaction, and whatever the particular activity during which the interactants are speaking the L2, they are always displaying to one another their analyses of the current state of the evolving relationship between pedagogy and interaction and acting on the basis of these analyses. An example of data analysis is provided, including discussion of socially distributed cognition and learning.

  6. Cross-cultural differences in beliefs and practices that affect the language spoken to children: mothers with Indian and Western heritage.

    Science.gov (United States)

    Simmons, Noreen; Johnston, Judith

    2007-01-01

    Speech-language pathologists often advise families about interaction patterns that will facilitate language learning. This advice is typically based on research with North American families of European heritage and may not be culturally suited for non-Western families. The goal of the project was to identify differences in the beliefs and practices of Indian and Euro-Canadian mothers that would affect patterns of talk to children. A total of 47 Indian mothers and 51 Euro-Canadian mothers of preschool age children completed a written survey concerning child-rearing practices and beliefs, especially those about talk to children. Discriminant analyses indicated clear cross-cultural differences and produced functions that could predict group membership with a 96% accuracy rate. Items contributing most to these functions concerned the importance of family, perceptions of language learning, children's use of language in family and society, and interactions surrounding text. Speech-language pathologists who wish to adapt their services for families of Indian heritage should remember the centrality of the family, the likelihood that there will be less emphasis on early independence and achievement, and the preference for direct instruction.

  7. LANGUAGE POLICIES PURSUED IN THE AXIS OF OTHERING AND IN THE PROCESS OF CONVERTING SPOKEN LANGUAGE OF TURKS LIVING IN RUSSIA INTO THEIR WRITTEN LANGUAGE / RUSYA'DA YASAYAN TÜRKLERİN KONUSMA DİLLERİNİN YAZI DİLİNE DÖNÜSTÜRÜLME SÜRECİ VE ÖTEKİLESTİRME EKSENİNDE İZLENEN DİL POLİTİKALARI

    Directory of Open Access Journals (Sweden)

    Süleyman Kaan YALÇIN (M.A.H.

    2008-12-01

    Full Text Available Language is an object realized in two ways: spoken language and written language. Each language can have the characteristics of a spoken language; however, not every language can have the characteristics of a written language, since there are some requirements for a language to be deemed a written language. These requirements are selection, coding, standardization and becoming widespread. A language must meet these requirements, in either a natural or an artificial way, to be deemed a written language (standard language). Turkish, which developed as a single written language until the 13th century, was divided into West Turkish and North-East Turkish by meeting the requirements of a written language in a natural way. Following this separation, and through a natural process, it showed some differences in itself; however, the policy of converting the spoken language of each Turkish clan into its own written language (a policy pursued by Russia in a planned way) turned Turkish, which reached the 20th century as a few written languages, into 20 different written languages. The implementation of the discriminatory language policies suggested to the Russian Government by missionaries such as Slinky and Ostramov, the forcible imposition of a Cyrillic alphabet full of different and unnecessary signs on each Turkish clan, and the othering activities of the Soviet boarding schools had considerable effects on this process. This study aims to explain that the conversion of the spoken languages of Turkish societies in Russia into written languages did not result from a natural process; to trace the historical development of the Turkish language, which was shaped into 20 separate written languages only because of pressure exerted by political will; and to show how Russia subjected the language concept, the memory of a nation, to an artificial process.

  8. Informal Language Learning Setting: Technology or Social Interaction?

    Science.gov (United States)

    Bahrani, Taher; Sim, Tam Shu

    2012-01-01

    Based on the informal language learning theory, language learning can occur outside the classroom setting unconsciously and incidentally through interaction with the native speakers or exposure to authentic language input through technology. However, an EFL context lacks the social interaction which naturally occurs in an ESL context. To explore…

  9. Language Maintenance in a Multilingual Family: Informal Heritage Language Lessons in Parent-Child Interactions

    OpenAIRE

    Kheirkhah, Mina; Cekaite, Asta

    2015-01-01

    The present study explores language socialization patterns in a Persian-Kurdish family in Sweden and examines how "one-parent, one-language" family language policies are instantiated and negotiated in parent-child interactions. The data consist of video-recordings and ethnographic observations of family interactions, as well as interviews. Detailed interactional analysis is employed to investigate parental language maintenance efforts and the child's agentive orientation in relation to the rec...

  10. Orthographic Facilitation in Chinese Spoken Word Recognition: An ERP Study

    Science.gov (United States)

    Zou, Lijuan; Desroches, Amy S.; Liu, Youyi; Xia, Zhichao; Shu, Hua

    2012-01-01

    Orthographic influences in spoken word recognition have been previously examined in alphabetic languages. However, it is unknown whether orthographic information affects spoken word recognition in Chinese, which has a clean dissociation between orthography (O) and phonology (P). The present study investigated orthographic effects using event…

  11. Novel Spoken Word Learning in Adults with Developmental Dyslexia

    Science.gov (United States)

    Conner, Peggy S.

    2013-01-01

    A high percentage of individuals with dyslexia struggle to learn unfamiliar spoken words, creating a significant obstacle to foreign language learning after early childhood. The origin of spoken-word learning difficulties in this population, generally thought to be related to the underlying literacy deficit, is not well defined (e.g., Di Betta…

  12. Language Maintenance in a Multilingual Family: Informal Heritage Language Lessons in Parent-Child Interactions

    Science.gov (United States)

    Kheirkhah, Mina; Cekaite, Asta

    2015-01-01

    The present study explores language socialization patterns in a Persian-Kurdish family in Sweden and examines how "one-parent, one-language" family language policies are instantiated and negotiated in parent-child interactions. The data consist of video-recordings and ethnographic observations of family interactions, as well as…

  13. Can Non-Interactive Language Input Benefit Young Second-Language Learners?

    Science.gov (United States)

    Au, Terry Kit-fong; Chan, Winnie Wailan; Cheng, Liao; Siegel, Linda S.; Tso, Ricky Van Yip

    2015-01-01

    To fully acquire a language, especially its phonology, children need linguistic input from native speakers early on. When interaction with native speakers is not always possible--e.g. for children learning a second language that is not the societal language--audios are commonly used as an affordable substitute. But does such non-interactive input…

  14. Use of Automated Scoring in Spoken Language Assessments for Test Takers with Speech Impairments. Research Report. ETS RR-17-42

    Science.gov (United States)

    Loukina, Anastassia; Buzick, Heather

    2017-01-01

    This study is an evaluation of the performance of automated speech scoring for speakers with documented or suspected speech impairments. Given that the use of automated scoring of open-ended spoken responses is relatively nascent and there is little research to date that includes test takers with disabilities, this small exploratory study focuses…

  15. Fictive Interaction : The conversation frame in thought, language, and discourse

    NARCIS (Netherlands)

    Pascual, Esther

    2014-01-01

    Language is intimately related to interaction. The question arises: Is the structure of interaction somehow mirrored in language structure and use? This book suggests a positive answer to this question by examining the ubiquitous phenomenon of fictive interaction, in which non-genuine conversational

  16. Making a Difference: Language Teaching for Intercultural and International Dialogue

    Science.gov (United States)

    Byram, Michael; Wagner, Manuela

    2018-01-01

    Language teaching has long been associated with teaching in a country or countries where a target language is spoken, but this approach is inadequate. In the contemporary world, language teaching has a responsibility to prepare learners for interaction with people of other cultural backgrounds, teaching them skills and attitudes as well as…

  17. Syllable frequency and word frequency effects in spoken and written word production in a non-alphabetic script

    Directory of Open Access Journals (Sweden)

    Qingfang eZhang

    2014-02-01

    Full Text Available The effects of word frequency and syllable frequency are well-established phenomena in domains such as spoken production in alphabetic languages. Chinese, as a non-alphabetic language, presents unique lexical and phonological properties in speech production. For example, the proximate unit of phonological encoding is the syllable in Chinese but the segment in Dutch, French or English. The present study investigated the effects of word frequency and syllable frequency, and their interaction, in Chinese written and spoken production. Significant facilitatory word frequency and syllable frequency effects were observed in spoken as well as in written production. The syllable frequency effect in writing indicated that phonological properties (i.e., syllable frequency) constrain orthographic output via a lexical route, at least in Chinese written production. However, the syllable frequency effect over repetitions diverged between the two modalities: it was significant in the first two repetitions in spoken production, whereas it was significant only in the second repetition in written production. Given the fragility of the syllable frequency effect in writing, we suggest that the phonological influence in handwritten production is not mandatory and universal, and that it is modulated by experimental manipulations. This provides evidence for the orthographic autonomy hypothesis, rather than the phonological mediation hypothesis. The absence of an interaction between word frequency and syllable frequency shows that the syllable frequency effect is independent of the word frequency effect in both spoken and written output modalities. The implications of these results for written production models are discussed.
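    The reported absence of a word frequency × syllable frequency interaction can be checked, in the simplest 2 × 2 case, with an interaction contrast over condition means. The sketch below uses placeholder latencies (made up for illustration, not taken from the study) in which the two effects are purely additive, so the contrast comes out as zero:

    ```python
    # Placeholder cell means (ms) for a 2 x 2 word-frequency (wf) by
    # syllable-frequency (sf) design; values are illustrative only.
    rt = {
        ("high_wf", "high_sf"): 620,
        ("high_wf", "low_sf"): 650,
        ("low_wf", "high_sf"): 680,
        ("low_wf", "low_sf"): 710,
    }

    def interaction_contrast(rt):
        """(HH - HL) - (LH - LL): zero when the two effects are additive,
        i.e. when there is no interaction."""
        return ((rt[("high_wf", "high_sf")] - rt[("high_wf", "low_sf")])
                - (rt[("low_wf", "high_sf")] - rt[("low_wf", "low_sf")]))
    ```

    In the placeholder data each effect is a constant 30 ms regardless of the other factor, which is what "independent effects" means operationally.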

  18. Emotion in languaging: Language and emotion as affective, adaptive and flexible behavior in social interaction

    Directory of Open Access Journals (Sweden)

    Thomas Wiben Jensen

    2014-07-01

    Full Text Available This article argues for a view of languaging as inherently affective. Informed by recent ecological tendencies within cognitive science and distributed language studies, a distinction is put forward between first-order languaging (language as whole-body sense-making) and second-order language (language as system-like constraints). Contrary to common assumptions within linguistics and communication studies that separate language-as-a-system from language use (resulting in separations between language vs. body language, verbal vs. non-verbal communication, etc.), the first/second-order distinction sees language as emanating from behavior, making it possible to view emotion and affect as integral parts of languaging behavior. Likewise, emotion and affect are studied not as inner mental states but as processes of organism-environment interaction. Based on video recordings of interactions between (1) children with special needs and (2) couples in therapy and the therapist, patterns of reciprocal influence between interactants are examined. Through analyses of affective stance and patterns of inter-affectivity, it is exemplified how language and emotion should be seen not as separate phenomena combined in language use but rather as completely intertwined phenomena in languaging behavior constrained by second-order patterns.

  19. Spoken Grammar for Chinese Learners

    Institute of Scientific and Technical Information of China (English)

    徐晓敏

    2013-01-01

    Currently, the concept of spoken grammar has been mentioned among Chinese teachers. However, teachers in China still have a vague idea of spoken grammar. This dissertation therefore examines what spoken grammar is and argues that native speakers' model of spoken grammar needs to be highlighted in classroom teaching.

  20. Second Language Interaction: Current Perspectives and Future Trends.

    Science.gov (United States)

    Chalhoub-Deville, Micheline

    2003-01-01

    Considers how the nature of interaction may best be represented in the second language (L2) construct. The starting point is Bachman's model of communicative language ability, which, it is argued, incorporates interaction from an individual-focused cognitive perspective. (Author/VWL)

  1. Invariance Detection within an Interactive System: A Perceptual Gateway to Language Development

    Science.gov (United States)

    Gogate, Lakshmi J.; Hollich, George

    2010-01-01

    In this article, we hypothesize that "invariance detection," a general perceptual phenomenon whereby organisms attend to relatively stable patterns or regularities, is an important means by which infants tune in to various aspects of spoken language. In so doing, we synthesize a substantial body of research on detection of regularities across the…

  2. Discussion Forum Interactions: Text and Context

    Science.gov (United States)

    Montero, Begona; Watts, Frances; Garcia-Carbonell, Amparo

    2007-01-01

    Computer-mediated communication (CMC) is currently used in language teaching as a bridge for the development of written and spoken skills [Kern, R., 1995. "Restructuring classroom interaction with networked computers: effects on quantity and characteristics of language production." "The Modern Language Journal" 79, 457-476]. Within CMC…

  3. Bilinguals Show Weaker Lexical Access during Spoken Sentence Comprehension

    Science.gov (United States)

    Shook, Anthony; Goldrick, Matthew; Engstler, Caroline; Marian, Viorica

    2015-01-01

    When bilinguals process written language, they show delays in accessing lexical items relative to monolinguals. The present study investigated whether this effect extended to spoken language comprehension, examining the processing of sentences with either low or high semantic constraint in both first and second languages. English-German…

  4. Orthographic effects in spoken word recognition: Evidence from Chinese.

    Science.gov (United States)

    Qu, Qingqing; Damian, Markus F

    2017-06-01

    Extensive evidence from alphabetic languages demonstrates a role of orthography in the processing of spoken words. Because alphabetic systems explicitly code speech sounds, such effects are perhaps not surprising. However, it is less clear whether orthographic codes are involuntarily accessed from spoken words in languages with non-alphabetic systems, in which the sound-spelling correspondence is largely arbitrary. We investigated the role of orthography via a semantic relatedness judgment task: native Mandarin speakers judged whether or not spoken word pairs were related in meaning. Word pairs were either semantically related, orthographically related, or unrelated. Results showed that relatedness judgments were made faster for word pairs that were semantically related than for unrelated word pairs. Critically, orthographic overlap on semantically unrelated word pairs induced a significant increase in response latencies. These findings indicate that orthographic information is involuntarily accessed in spoken-word recognition, even in a non-alphabetic language such as Chinese.

  5. Skype me! Socially Contingent Interactions Help Toddlers Learn Language

    OpenAIRE

    Roseberry, Sarah; Hirsh-Pasek, Kathy; Golinkoff, Roberta Michnick

    2013-01-01

    Language learning takes place in the context of social interactions, yet the mechanisms that render social interactions useful for learning language remain unclear. This paper focuses on whether social contingency might support word learning. Toddlers aged 24- to 30-months (N=36) were exposed to novel verbs in one of three conditions: live interaction training, socially contingent video training over video chat, and non-contingent video training (yoked video). Results sugges...

  6. HI-VISUAL: A language supporting visual interaction in programming

    International Nuclear Information System (INIS)

    Monden, N.; Yoshino, Y.; Hirakawa, M.; Tanaka, M.; Ichikawa, T.

    1984-01-01

    This paper presents a language named HI-VISUAL which supports visual interaction in programming. Following a brief description of the language concept, the icon semantics and language primitives characterizing HI-VISUAL are extensively discussed. HI-VISUAL also shows system extendability, providing the possibility of organizing a high-level application system as an integration of several existing subsystems, and will serve in developing systems in various fields of application, supporting simple and efficient interaction between programmer and computer.

  7. How relevant is social interaction in second language learning?

    Directory of Open Access Journals (Sweden)

    Laura eVerga

    2013-09-01

    Full Text Available Verbal language is the most widespread mode of human communication, and an intrinsically social activity. This claim is strengthened by evidence emerging from different fields, which clearly indicates that social interaction influences human communication, and more specifically, language learning. Indeed, research conducted with infants and children shows that interaction with a caregiver is necessary to acquire language. Further evidence on the influence of sociality on language comes from social and linguistic pathologies, in which deficits in social and linguistic abilities are tightly intertwined, as is the case for autism, for example. However, studies on adult second language learning have mostly focused on individualistic approaches, partly because of methodological constraints, especially of imaging methods. The question as to whether social interaction should be considered as a critical factor impacting upon adult language learning still remains underspecified. Here, we review evidence in support of the view that sociality plays a significant role in communication and language learning, in an attempt to emphasize factors that could facilitate this process in adult language learning. We suggest that sociality should be considered as a potentially influential factor in adult language learning and that future studies in this domain should explicitly target this factor.

  8. How relevant is social interaction in second language learning?

    Science.gov (United States)

    Verga, Laura; Kotz, Sonja A

    2013-09-03

    Verbal language is the most widespread mode of human communication, and an intrinsically social activity. This claim is strengthened by evidence emerging from different fields, which clearly indicates that social interaction influences human communication, and more specifically, language learning. Indeed, research conducted with infants and children shows that interaction with a caregiver is necessary to acquire language. Further evidence on the influence of sociality on language comes from social and linguistic pathologies, in which deficits in social and linguistic abilities are tightly intertwined, as is the case for Autism, for example. However, studies on adult second language (L2) learning have been mostly focused on individualistic approaches, partly because of methodological constraints, especially of imaging methods. The question as to whether social interaction should be considered as a critical factor impacting upon adult language learning still remains underspecified. Here, we review evidence in support of the view that sociality plays a significant role in communication and language learning, in an attempt to emphasize factors that could facilitate this process in adult language learning. We suggest that sociality should be considered as a potentially influential factor in adult language learning and that future studies in this domain should explicitly target this factor.

  9. Parent-child interaction: Does parental language matter?

    Science.gov (United States)

    Menashe, Atara; Atzaba-Poria, Naama

    2016-11-01

    Although parental language and behaviour have been widely investigated, few studies have examined their unique and interactive contribution to the parent-child relationship. The current study explores how parental behaviour (sensitivity and non-intrusiveness) and the use of parental language (exploring and control languages) correlate with parent-child dyadic mutuality. Specifically, we investigated the following questions: (1) 'Is parental language associated with parent-child dyadic mutuality above and beyond parental behaviour?' (2) 'Does parental language moderate the links between parental behaviour and the parent-child dyadic mutuality?' (3) 'Do these differences vary between mothers and fathers?' The sample included 65 children (M age  = 1.97 years, SD = 0.86) and their parents. We observed parental behaviour, parent-child dyadic mutuality, and the type of parental language used during videotaped in-home observations. The results indicated that parental language and behaviours are distinct components of the parent-child interaction. Parents who used higher levels of exploring language showed higher levels of parent-child dyadic mutuality, even when accounting for parental behaviour. Use of controlling language, however, was not found to be related to the parent-child dyadic mutuality. Different moderation models were found for mothers and fathers. These results highlight the need to distinguish parental language and behaviour when assessing their contribution to the parent-child relationship. © 2016 The British Psychological Society.

  10. Spoken Sentence Production in College Students with Dyslexia: Working Memory and Vocabulary Effects

    Science.gov (United States)

    Wiseheart, Rebecca; Altmann, Lori J. P.

    2018-01-01

    Background: Individuals with dyslexia demonstrate syntactic difficulties on tasks of language comprehension, yet little is known about spoken language production in this population. Aims: To investigate whether spoken sentence production in college students with dyslexia is less proficient than in typical readers, and to determine whether group…

  11. Code-switched English pronunciation modeling for Swahili spoken term detection

    CSIR Research Space (South Africa)

    Kleynhans, N

    2016-05-01

    Full Text Available Computer Science 81 (2016) 128–135, 5th Workshop on Spoken Language Technology for Under-resourced Languages, SLTU 2016, 9–12 May 2016, Yogyakarta, Indonesia. Code-switched English Pronunciation Modeling for Swahili Spoken Term Detection. Neil...

  12. Phonological Analysis of University Students’ Spoken Discourse

    Directory of Open Access Journals (Sweden)

    Clara Herlina

    2011-04-01

    Full Text Available The study of discourse is the study of language in actual use. In this article, the writer investigates the phonological features, both segmental and supra-segmental, in the spoken discourse of Indonesian university students. The data were taken from recordings of 15 conversations by 30 students of Bina Nusantara University who are taking the English Entrant subject (TOEFL-iBT). The writer concludes that the students are still influenced by their first language in their spoken discourse, which results in English with an Indonesian accent. Even though this does not cause misunderstanding at the moment, it may become problematic if they have to communicate in the real world.

  13. Skype me! Socially contingent interactions help toddlers learn language.

    Science.gov (United States)

    Roseberry, Sarah; Hirsh-Pasek, Kathy; Golinkoff, Roberta M

    2014-01-01

    Language learning takes place in the context of social interactions, yet the mechanisms that render social interactions useful for learning language remain unclear. This study focuses on whether social contingency might support word learning. Toddlers aged 24-30 months (N = 36) were exposed to novel verbs in one of three conditions: live interaction training, socially contingent video training over video chat, and noncontingent video training (yoked video). Results suggest that children only learned novel verbs in socially contingent interactions (live interactions and video chat). This study highlights the importance of social contingency in interactions for language learning and informs the literature on learning through screen media as the first study to examine word learning through video chat technology. © 2013 The Authors. Child Development © 2013 Society for Research in Child Development, Inc.

  14. Skype me! Socially Contingent Interactions Help Toddlers Learn Language

    Science.gov (United States)

    Roseberry, Sarah; Hirsh-Pasek, Kathy; Golinkoff, Roberta Michnick

    2013-01-01

    Language learning takes place in the context of social interactions, yet the mechanisms that render social interactions useful for learning language remain unclear. This paper focuses on whether social contingency might support word learning. Toddlers aged 24- to 30-months (N=36) were exposed to novel verbs in one of three conditions: live interaction training, socially contingent video training over video chat, and non-contingent video training (yoked video). Results suggest that children only learned novel verbs in socially contingent interactions (live interactions and video chat). The current study highlights the importance of social contingency in interactions for language learning and informs the literature on learning through screen media as the first study to examine word learning through video chat technology. PMID:24112079

  15. When novel sentences spoken or heard for the first time in the history of the universe are not enough: toward a dual-process model of language.

    Science.gov (United States)

    Van Lancker Sidtis, Diana

    2004-01-01

    Although interest in the language sciences was previously focused on newly created sentences, more recently much attention has turned to the importance of formulaic expressions in normal and disordered communication. Also referred to as formulaic expressions and made up of speech formulas, idioms, expletives, serial and memorized speech, slang, sayings, clichés, and conventional expressions, non-propositional language forms a large proportion of every speaker's competence, and may be differentially disturbed in neurological disorders. This review aims to examine non-propositional speech with respect to linguistic descriptions, psycholinguistic experiments, sociolinguistic studies, child language development, clinical language disorders, and neurological studies. Evidence from numerous sources reveals differentiated and specialized roles for novel and formulaic verbal functions, and suggests that generation of novel sentences and management of prefabricated expressions represent two legitimate and separable processes in language behaviour. A preliminary model of language behaviour that encompasses unitary and compositional properties and their integration in everyday language use is proposed. Integration and synchronizing of two disparate processes in language behaviour, formulaic and novel, characterizes normal communicative function and contributes to creativity in language. This dichotomy is supported by studies arising from other disciplines in neurology and psychology. Further studies are necessary to determine in what ways the various categories of formulaic expressions are related, and how these categories are processed by the brain. Better understanding of how non-propositional categories of speech are stored and processed in the brain can lead to better informed treatment strategies in language disorders.

  16. Language development in deaf children’s interactions with deaf and hearing adults. A Dutch longitudinal study

    NARCIS (Netherlands)

    Klatter-Folmer, H.A.K.; Hout, R.W.N.M. van; Kolen, E.; Verhoeven, L.T.W.

    2006-01-01

    The language development of two deaf girls and four deaf boys in Sign Language of the Netherlands (SLN) and spoken Dutch was investigated longitudinally. At the start, the mean age of the children was 3;5. All data were collected in video-recorded semistructured conversations between individual

  17. Young children's communication and literacy: a qualitative study of language in the inclusive preschool.

    Science.gov (United States)

    Kliewer, C

    1995-06-01

    Interactive and literacy-based language use of young children within the context of an inclusive preschool classroom was explored. An interpretivist framework and qualitative research methods, including participant observation, were used to examine and analyze language in five preschool classes that were composed of children with and without disabilities. Children's language use included spoken, written, signed, and typed. Results showed complex communicative and literacy language use on the part of young children outside conventional adult perspectives. Also, children who used expressive methods other than speech were often left out of the contexts where spoken language was richest and most complex.

  18. Interaction of Language Processing and Motor Skill in Children with Specific Language Impairment

    Science.gov (United States)

    DiDonato Brumbach, Andrea C.; Goffman, Lisa

    2014-01-01

    Purpose: To examine how language production interacts with speech motor and gross and fine motor skill in children with specific language impairment (SLI). Method: Eleven children with SLI and 12 age-matched peers (4-6 years) produced structurally primed sentences containing particles and prepositions. Utterances were analyzed for errors and for…

  19. Coaching Parents to Use Naturalistic Language and Communication Strategies

    Science.gov (United States)

    Akamoglu, Yusuf; Dinnebeil, Laurie

    2017-01-01

    Naturalistic language and communication strategies (i.e., naturalistic teaching strategies) refer to practices that are used to promote the child's language and communication skills either through verbal (e.g., spoken words) or nonverbal (e.g., gestures, signs) interactions between an adult (e.g., parent, teacher) and a child. Use of naturalistic…

  20. The role of foreign and indigenous languages in primary schools ...

    African Journals Online (AJOL)

    This article investigates the use of English and other African languages in Kenyan primary schools. English is a .... For a long time, the issue of the medium of instruction, in especially primary schools, has persisted in spite of .... mother tongue, they use this language for spoken classroom interaction in order to bring about.

  1. Language and Cognition Interaction Neural Mechanisms

    Science.gov (United States)

    2011-06-01

    resolution of processes in the brain, combined with magnetoencephalography (MEG), measurements of the magnetic field next to the head, to provide a high...

  2. Design of Feedback in Interactive Multimedia Language Learning Environments

    Directory of Open Access Journals (Sweden)

    Vehbi Türel

    2012-01-01

    Full Text Available In interactive multimedia environments, different digital elements (i.e. video, audio, visuals, text, animations, graphics and glossary) can be combined and delivered on the same digital computer screen (TDM 1997: 151, CCED 1987, Brett 1998: 81, Stenton 1998: 11, Mangiafico 1996: 46). This also enables effective provision and presentation of feedback in pedagogically more efficient ways, which meets not only the requirements of different teaching and learning theories, but also the needs of language learners who vary in their learning-style preferences (Robinson 1991: 156, Peter 1994: 157f.). This study aims to bring out the pedagogical and design principles that might help us more effectively design and customise feedback in interactive multimedia language learning environments. In doing so, it presents examples of thought-out, customized computerised feedback from an interactive multimedia language learning environment that was designed and created by the author of this study and used for language learning purposes.

  3. Interaction in a Blended Environment for English Language Learning

    Science.gov (United States)

    Romero Archila, Yuranny Marcela

    2014-01-01

    The purpose of this research was to identify the types of interaction that emerged not only in a Virtual Learning Environment (VLE) but also in face-to-face settings. The study also assessed the impact of the different kinds of interactions in terms of language learning. This is a qualitative case study that took place in a private Colombian…

  4. The interactional significance of formulas in autistic language.

    Science.gov (United States)

    Dobbinson, Sushie; Perkins, Mick; Boucher, Jill

    2003-01-01

    The phenomenon of echolalia in autistic language is well documented. Whilst much early research dismissed echolalia as merely an indicator of cognitive limitation, later work identified particular discourse functions of echolalic utterances. The work reported here extends the study of the interactional significance of echolalia to formulaic utterances. Audio and video recordings of conversations between the first author and two research participants were transcribed and analysed according to a Conversation Analysis framework and a multi-layered linguistic framework. Formulaic language was found to have predictable interactional significance within the language of an individual with autism, and the generic phenomenon of formulaicity in company with predictable discourse function was seen to hold across the research participants, regardless of cognitive ability. The implications of formulaicity in autistic language for acquisition and processing mechanisms are discussed.

  5. Sign language: an international handbook

    NARCIS (Netherlands)

    Pfau, R.; Steinbach, M.; Woll, B.

    2012-01-01

    Sign language linguists show here that all the questions relevant to the linguistic investigation of spoken languages can be asked about sign languages. Conversely, questions that sign language linguists consider - even if spoken language researchers have not asked them yet - should also be asked of

  6. Fourth International Workshop on Spoken Dialog Systems

    CERN Document Server

    Rosset, Sophie; Garnier-Rizet, Martine; Devillers, Laurence

    Natural Interaction with Robots, Knowbots and Smartphones: Putting Spoken Dialog Systems into Practice

    2014-01-01

    These proceedings present the state-of-the-art in spoken dialog systems, with applications in robotics, knowledge access and communication. They specifically address: 1. Dialog for interacting with smartphones; 2. Dialog for Open Domain knowledge access; 3. Dialog for robot interaction; 4. Mediated dialog (including crosslingual dialog involving Speech Translation); and, 5. Dialog quality evaluation. These articles were presented at the IWSDS 2012 workshop.

  7. SIGMA, a new language for interactive array-oriented computing

    International Nuclear Information System (INIS)

    Hagedorn, R.; Reinfelds, J.; Vandoni, C.; Hove, L. van.

    1978-01-01

    A description is given of the principles and the main facilities of SIGMA (System for Interactive Graphical Mathematical Applications), a programming language for scientific computing whose major characteristics are: automatic handling of multi-dimensional rectangular arrays as basic data units, interactive operation of the system, and graphical display facilities. After introducing the basic concepts and features of the language, it describes in some detail the methods and operators for the automatic handling of arrays and for their graphical display, the procedures for construction of programs by users, and other facilities of the system. The report is a new version of CERN 73-5. (Auth.)

  8. The embodied turn in research on language and social interaction

    DEFF Research Database (Denmark)

    Nevile, Maurice

    2015-01-01

    I use the term the embodied turn to mean the point when interest in the body became established among researchers on language and social interaction, exploiting the greater ease of video-recording. This review paper tracks the growth of "embodiment" in over 400 papers published in Research on Language and Social Interaction from 1987-2013. I consider closely two areas where analysts have confronted challenges, and how they have responded: settling on precise and analytically helpful terminology for the body; and transcribing and representing the body, particularly its temporality and manner.

  9. The impact of law and language as interactive patterns

    Directory of Open Access Journals (Sweden)

    Marina Kaishi

    2016-07-01

    Full Text Available Every country has adopted a certain law pattern, and this has an impact on language expression and the adopted terminology. It can be tracked by examining and describing the lexical choices and the use of featuring structures, which form parallelisms in similar systems. Before proceeding with their linguistic description, it is necessary to explain the differences that exist between the Greek, French, German and Albanian law systems. It will be evident that they have some points in common, but at the same time they differ to a great extent in the way they conceptualize the system. I shall use the Constitution as the basic law and a safe reference point for an explicit comparison. Terminology plays an important role in explaining these systems. Law and language are interactive patterns. We already have a European legal language, but it is time for a more coherent Europe-wide legal language. Linguistic matters have a direct bearing on judicial cases. Inside the EU, the usage of different languages is one of the main obstacles to the integration process, and it creates a specific problem for European judges, translators and interpreters. In order to achieve a common usage of the language, we need to develop a curriculum with coherent terminology and linguistic patterns. To set a standard for the law language used in the EU, we should follow a legal harmonization achieved through harmonized terminology inside the EU. The right usage of the language and its terminology should be understood as a standardization process. European Union policy is also of great importance because it informs us about language policy and how to deal with it. Finally, we must remember that the EU consists of 450 million people from different cultures and backgrounds. In this sense it can be said that the EU is truly a multilingual institution that reinforces the ideal of a single community with different languages and different

  10. Steering the conversation: A linguistic exploration of natural language interactions with a digital assistant during simulated driving.

    Science.gov (United States)

    Large, David R; Clark, Leigh; Quandt, Annie; Burnett, Gary; Skrypchuk, Lee

    2017-09-01

    Given the proliferation of 'intelligent' and 'socially-aware' digital assistants embodying everyday mobile technology - and the undeniable logic that utilising voice-activated controls and interfaces in cars reduces the visual and manual distraction of interacting with in-vehicle devices - it appears inevitable that next generation vehicles will be embodied by digital assistants and utilise spoken language as a method of interaction. From a design perspective, defining the language and interaction style that a digital driving assistant should adopt is contingent on the role that they play within the social fabric and context in which they are situated. We therefore conducted a qualitative, Wizard-of-Oz study to explore how drivers might interact linguistically with a natural language digital driving assistant. Twenty-five participants drove for 10 min in a medium-fidelity driving simulator while interacting with a state-of-the-art, high-functioning, conversational digital driving assistant. All exchanges were transcribed and analysed using recognised linguistic techniques, such as discourse and conversation analysis, normally reserved for interpersonal investigation. Language usage patterns demonstrate that interactions with the digital assistant were fundamentally social in nature, with participants affording the assistant equal social status and high-level cognitive processing capability. For example, participants were polite, actively controlled turn-taking during the conversation, and used back-channelling, fillers and hesitation, as they might in human communication. Furthermore, participants expected the digital assistant to understand and process complex requests mitigated with hedging words and expressions, and peppered with vague language and deictic references requiring shared contextual information and mutual understanding. Findings are presented in six themes which emerged during the analysis - formulating responses; turn-taking; back

  11. Communicative Interaction and Second Language Acquisition: An Inuit Example.

    Science.gov (United States)

    Crago, Martha B.

    1993-01-01

    The role of cultural context in the communicative interaction of young Inuit children, their caregivers, and their non-Inuit teachers was examined in a longitudinal ethnographic study conducted in two small communities of arctic Quebec. Focus was on discourse features of primary language socialization of Inuit families. (32 references) (Author/LB)

  12. Word frequencies in written and spoken English based on the British National Corpus

    CERN Document Server

    Leech, Geoffrey; Wilson, Andrew (All Of Lancaster University)

    2014-01-01

    Word Frequencies in Written and Spoken English is a landmark volume in the development of vocabulary frequency studies. Whereas previous books have in general given frequency information about the written language only, this book provides information on both speech and writing. It not only gives information about the language as a whole, but also about the differences between spoken and written English, and between different spoken and written varieties of the language. The frequencies are derived from a wide ranging and up-to-date corpus of English: the British Na

  13. Linguistic steganography on Twitter: hierarchical language modeling with manual interaction

    Science.gov (United States)

    Wilson, Alex; Blunsom, Phil; Ker, Andrew D.

    2014-02-01

    This work proposes a natural language stegosystem for Twitter, modifying tweets as they are written to hide 4 bits of payload per tweet, which is a greater payload than previous systems have achieved. The system, CoverTweet, includes novel components, as well as some already developed in the literature. We believe that the task of transforming covers during embedding is equivalent to unilingual machine translation (paraphrasing), and we use this equivalence to define a distortion measure based on statistical machine translation methods. The system incorporates this measure of distortion to rank possible tweet paraphrases, using a hierarchical language model; we use human interaction as a second distortion measure to pick the best. The hierarchical language model is designed to model the specific language of the covers, which in this setting is the language of the Twitter user who is embedding. This is a change from previous work, where general-purpose language models have been used. We evaluate our system by testing the output against human judges, and show that humans are unable to distinguish stego tweets from cover tweets any better than random guessing.
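    The embedding step the abstract describes can be sketched as a toy illustration: each tweet has a set of acceptable paraphrases, and posting the paraphrase at a given index encodes payload bits. This is a hypothetical simplification; CoverTweet itself generates paraphrases via statistical machine translation and ranks them with a hierarchical language model, which is not reproduced here.

```python
# Toy linguistic-steganography sketch (hypothetical names and data).
# Picking one of 2**k interchangeable paraphrases embeds k payload bits.

def embed(bits, paraphrases):
    """Select the paraphrase whose list index encodes the payload bits."""
    return paraphrases[int(bits, 2)]

def extract(tweet, paraphrases, k):
    """Recover k payload bits from the index of the posted paraphrase."""
    return format(paraphrases.index(tweet), f"0{k}b")

# Four paraphrases of one tweet => 2 bits of payload in this sketch
# (the paper achieves 4 bits per tweet with larger paraphrase sets).
variants = [
    "heading home now",
    "on my way home",
    "going home now",
    "homeward bound",
]

stego = embed("10", variants)
print(stego)                        # the tweet actually posted
print(extract(stego, variants, 2))  # receiver recovers the payload
```

Both sender and receiver must derive the same ordered paraphrase list from the cover, which is why the paper's shared language model matters: it supplies a common ranking.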

  14. Analysis of event-mode data with Interactive Data Language

    International Nuclear Information System (INIS)

    De Young, P.A.; Hilldore, B.B.; Kiessel, L.M.; Peaslee, G.F.

    2003-01-01

    We have developed an analysis package for event-mode data based on Interactive Data Language (IDL) from Research Systems Inc. This high-level language is high speed, array oriented, object oriented, and has extensive visual (multi-dimensional plotting) and mathematical functions. We have developed a general framework, written in IDL, for the analysis of a variety of experimental data that does not require significant customization for each analysis. Unlike many traditional analysis packages, spectra and gates are applied after data are read and are easily changed as analysis proceeds without rereading the data. The events are not sequentially processed into predetermined arrays subject to predetermined gates
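    The property the abstract emphasizes, gates and spectra applied to events already in memory rather than fixed at read time, can be sketched in Python (a hypothetical illustration with invented parameter names, not the IDL package itself):

```python
import random

# Hypothetical event-mode data: each event is an (energy, time) pair,
# read once into memory. Distributions are invented for illustration.
random.seed(0)
events = [(random.gauss(100, 15), random.gauss(50, 5)) for _ in range(10_000)]

def spectrum(events, value, n_bins, lo, hi, gate=None):
    """Histogram one event parameter, applying an optional gate.

    A gate is a plain predicate evaluated on in-memory events, so
    redefining a gate never requires rereading the raw data file --
    unlike sequentially processing events into predetermined arrays.
    """
    counts = [0] * n_bins
    width = (hi - lo) / n_bins
    for ev in events:
        if gate is not None and not gate(ev):
            continue
        x = value(ev)
        if lo <= x < hi:
            counts[int((x - lo) / width)] += 1
    return counts

# Gate on the timing parameter, then histogram the energy parameter;
# changing the gate bounds and re-running touches only memory.
timing_gate = lambda ev: 45.0 < ev[1] < 55.0
counts = spectrum(events, value=lambda ev: ev[0],
                  n_bins=64, lo=40.0, hi=160.0, gate=timing_gate)
print(sum(counts))  # events passing the gate within the histogram range
```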

  15. Multilingual Interaction and Minority Languages: Proficiency and Language Practices in Education and Society

    Science.gov (United States)

    Gorter, Durk

    2015-01-01

    In this plenary speech I examine multilingual interaction in a number of European regions in which minority languages are being revitalized. Education is a crucial variable, but the wider society is equally significant. The context of revitalization is no longer bilingual but increasingly multilingual. I draw on the results of a long-running…

  16. Spoken word recognition without a TRACE

    Science.gov (United States)

    Hannagan, Thomas; Magnuson, James S.; Grainger, Jonathan

    2013-01-01

    How do we map the rapid input of spoken language onto phonological and lexical representations over time? Attempts at psychologically-tractable computational models of spoken word recognition tend either to ignore time or to transform the temporal input into a spatial representation. TRACE, a connectionist model with broad and deep coverage of speech perception and spoken word recognition phenomena, takes the latter approach, using exclusively time-specific units at every level of representation. TRACE reduplicates featural, phonemic, and lexical inputs at every time step in a large memory trace, with rich interconnections (excitatory forward and backward connections between levels and inhibitory links within levels). As the length of the memory trace is increased, or as the phoneme and lexical inventory of the model is increased to a realistic size, this reduplication of time- (temporal position) specific units leads to a dramatic proliferation of units and connections, begging the question of whether a more efficient approach is possible. Our starting point is the observation that models of visual object recognition—including visual word recognition—have grappled with the problem of spatial invariance, and arrived at solutions other than a fully-reduplicative strategy like that of TRACE. This inspires a new model of spoken word recognition that combines time-specific phoneme representations similar to those in TRACE with higher-level representations based on string kernels: temporally independent (time invariant) diphone and lexical units. This reduces the number of necessary units and connections by several orders of magnitude relative to TRACE. Critically, we compare the new model to TRACE on a set of key phenomena, demonstrating that the new model inherits much of the behavior of TRACE and that the drastic computational savings do not come at the cost of explanatory power. PMID:24058349
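    The saving the abstract describes comes from replacing time-specific lexical units with time-invariant diphone units. A toy sketch of that idea (hypothetical transcriptions and scoring, not the published model) might look like:

```python
from collections import Counter

def diphones(phonemes):
    """Time-invariant diphone units: ordered phoneme pairs with counts,
    independent of where in the word or input stream they occur."""
    return Counter(zip(phonemes, phonemes[1:]))

def kernel_score(heard, lexical):
    """A minimal string-kernel-style match: count of shared diphones."""
    dh, dl = diphones(heard), diphones(lexical)
    return sum(min(dh[d], dl[d]) for d in dh.keys() & dl.keys())

# Hypothetical mini-lexicon with rough ARPAbet-style transcriptions.
lexicon = {
    "cat": ["K", "AE", "T"],
    "cab": ["K", "AE", "B"],
    "dog": ["D", "AO", "G"],
}

heard = ["K", "AE", "T"]
scores = {word: kernel_score(heard, ph) for word, ph in lexicon.items()}
print(max(scores, key=scores.get), scores)
```

Because the diphone vocabulary is fixed regardless of utterance length, the unit count no longer multiplies with the length of the memory trace, which is the reduction in units and connections the abstract reports relative to TRACE.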

  17. Evaluating the spoken English proficiency of graduates of foreign medical schools.

    Science.gov (United States)

    Boulet, J R; van Zanten, M; McKinley, D W; Gary, N E

    2001-08-01

    The purpose of this study was to gather additional evidence for the validity and reliability of spoken English proficiency ratings provided by trained standardized patients (SPs) in a high-stakes clinical skills examination. Over 2500 candidates who took the Educational Commission for Foreign Medical Graduates' (ECFMG) Clinical Skills Assessment (CSA) were studied. The CSA consists of 10 or 11 timed clinical encounters. Standardized patients evaluate spoken English proficiency and interpersonal skills in every encounter. Generalizability theory was used to estimate the consistency of spoken English ratings. Validity coefficients were calculated by correlating summary English ratings with CSA scores and other external criterion measures. Mean spoken English ratings were also compared across various candidate background variables. The reliability of the spoken English ratings, based on 10 independent evaluations, was high. The magnitudes of the associated variance components indicated that the evaluation of a candidate's spoken English proficiency is unlikely to be affected by the choice of cases or SPs used in a given assessment. Proficiency in spoken English was related to native language (English versus other) and scores from the Test of English as a Foreign Language (TOEFL). The pattern of the relationships, both within assessment components and with external criterion measures, suggests that valid measures of spoken English proficiency are obtained. This result, combined with the high reproducibility of the ratings over encounters and SPs, supports the use of trained SPs to measure spoken English skills in a simulated medical environment.

  18. The Impact of Early Social Interactions on Later Language Development in Spanish-English Bilingual Infants

    Science.gov (United States)

    Ramírez-Esparza, Nairán; García-Sierra, Adrián; Kuhl, Patricia K.

    2017-01-01

    This study tested the impact of child-directed language input on language development in Spanish-English bilingual infants (N = 25, 11- and 14-month-olds from the Seattle metropolitan area), across languages and independently for each language, controlling for socioeconomic status. Language input was characterized by social interaction variables,…

  19. Interactions between working memory and language in young children with specific language impairment (SLI).

    Science.gov (United States)

    Vugs, Brigitte; Knoors, Harry; Cuperus, Juliane; Hendriks, Marc; Verhoeven, Ludo

    2016-01-01

    The underlying structure of working memory (WM) in young children with and without specific language impairment (SLI) was examined. The associations between the components of WM and the language abilities of young children with SLI were then analyzed. The Automated Working Memory Assessment and four linguistic tasks were administered to 58 children with SLI and 58 children without SLI, aged 4-5 years. The WM of the children was best represented by a model with four separate but interacting components of verbal storage, visuospatial storage, verbal central executive (CE), and visuospatial CE. The associations between the four components of WM did not differ significantly for the two groups of children. However, the individual components of WM showed varying associations with the language abilities of the children with SLI. The verbal CE component of WM was moderately to strongly associated with all the language abilities in children with SLI: receptive vocabulary, expressive vocabulary, verbal comprehension, and syntactic development. These results show verbal CE to be involved in a wide range of linguistic skills; the limited ability of young children with SLI to simultaneously store and process verbal information may constrain their acquisition of linguistic skills. Attention should thus be paid to the language problems of children with SLI, but also to the WM impairments that can contribute to their language problems.

  20. Spoken grammar awareness raising: Does it affect the listening ability of Iranian EFL learners?

    Directory of Open Access Journals (Sweden)

    Mojgan Rashtchi

    2011-12-01

    Full Text Available Advances in spoken corpora analysis have brought about new insights into language pedagogy and have led to an awareness of the characteristics of spoken language. Current findings have shown that the grammar of spoken language differs from that of written language. However, most listening and speaking materials are constructed on the basis of written grammar and lack core spoken language features. The aim of the present study was to explore whether awareness of spoken grammar features could affect learners' comprehension of real-life conversations. To this end, 45 university students in two intact classes participated in a listening course employing corpus-based materials. The spoken grammar features were taught to the experimental group overtly, through awareness-raising tasks, whereas the control group, though exposed to the same materials, was not provided with such tasks for learning the features. The results of the independent-samples t tests revealed that the learners in the experimental group comprehended everyday conversations much better than those in the control group. Additionally, the learners held highly positive views of spoken grammar, elicited by means of a retrospective questionnaire, which were generally comparable to those reported in the literature.

  1. A Comparison between Written and Spoken Narratives in Aphasia

    Science.gov (United States)

    Behrns, Ingrid; Wengelin, Asa; Broberg, Malin; Hartelius, Lena

    2009-01-01

    The aim of the present study was to explore how a personal narrative told by a group of eight persons with aphasia differed between written and spoken language, and to compare this with findings from 10 participants in a reference group. The stories were analysed through holistic assessments made by 60 participants without experience of aphasia…

  2. Lexicon Optimization for Dutch Speech Recognition in Spoken Document Retrieval

    NARCIS (Netherlands)

    Ordelman, Roeland J.F.; van Hessen, Adrianus J.; de Jong, Franciska M.G.

    In this paper, ongoing work concerning the language modelling and lexicon optimization of a Dutch speech recognition system for Spoken Document Retrieval is described: the collection and normalization of a training data set and the optimization of our recognition lexicon. Effects on lexical coverage

  3. Lexicon optimization for Dutch speech recognition in spoken document retrieval

    NARCIS (Netherlands)

    Ordelman, Roeland J.F.; van Hessen, Adrianus J.; de Jong, Franciska M.G.; Dalsgaard, P.; Lindberg, B.; Benner, H.

    2001-01-01

    In this paper, ongoing work concerning the language modelling and lexicon optimization of a Dutch speech recognition system for Spoken Document Retrieval is described: the collection and normalization of a training data set and the optimization of our recognition lexicon. Effects on lexical coverage

  4. Automated Scoring of L2 Spoken English with Random Forests

    Science.gov (United States)

    Kobayashi, Yuichiro; Abe, Mariko

    2016-01-01

    The purpose of the present study is to assess second language (L2) spoken English using automated scoring techniques. Automated scoring aims to classify a large set of learners' oral performance data into a small number of discrete oral proficiency levels. In automated scoring, objectively measurable features such as the frequencies of lexical and…

  5. Automated Metadata Extraction for Semantic Access to Spoken Word Archives

    NARCIS (Netherlands)

    de Jong, Franciska M.G.; Heeren, W.F.L.; van Hessen, Adrianus J.; Ordelman, Roeland J.F.; Nijholt, Antinus; Ruiz Miyares, L.; Alvarez Silva, M.R.

    2011-01-01

    Archival practice is shifting from the analogue to the digital world. A specific subset of heritage collections that impose interesting challenges for the field of language and speech technology are spoken word archives. Given the enormous backlog at audiovisual archives of unannotated materials and

  6. Query2Question: Translating Visualization Interaction into Natural Language.

    Science.gov (United States)

    Nafari, Maryam; Weaver, Chris

    2015-06-01

    Richly interactive visualization tools are increasingly popular for data exploration and analysis in a wide variety of domains. Existing systems and techniques for recording provenance of interaction focus either on comprehensive automated recording of low-level interaction events or on idiosyncratic manual transcription of high-level analysis activities. In this paper, we present the architecture and translation design of a query-to-question (Q2Q) system that automatically records user interactions and presents them semantically using natural language (written English). Q2Q takes advantage of domain knowledge and uses natural language generation (NLG) techniques to translate and transcribe a progression of interactive visualization states into a visual log of styled text that complements and effectively extends the functionality of visualization tools. We present Q2Q as a means to support a cross-examination process in which questions rather than interactions are the focus of analytic reasoning and action. We describe the architecture and implementation of the Q2Q system, discuss key design factors and variations that affect question generation, and present several visualizations that incorporate Q2Q for analysis in a variety of knowledge domains.
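    The translation step can be pictured with a toy sketch. The event schema and question templates below are invented for illustration; the actual Q2Q system draws on domain knowledge and fuller NLG surface realization.

```python
# Hypothetical interaction-event schema and question templates.
TEMPLATES = {
    "filter": "Which {item}s have {attribute} {op} {value}?",
    "sort":   "How do the {item}s rank by {attribute}?",
    "select": "What characterizes the {item} named {value}?",
}

def interaction_to_question(event):
    """Render one logged interaction event as a written-English question."""
    return TEMPLATES[event["type"]].format(**event["args"])

log = [
    {"type": "filter", "args": {"item": "region", "attribute": "population",
                                "op": "above", "value": "50,000"}},
    {"type": "sort", "args": {"item": "region", "attribute": "median income"}},
]
for event in log:
    print(interaction_to_question(event))
# → Which regions have population above 50,000?
# → How do the regions rank by median income?
```

    The design choice this illustrates is that the log stores analytic intent (a question) rather than raw widget events, which is what makes the visual log readable during cross-examination.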

  7. Socio-Pragmatic Problems in Foreign Language Teaching

    Directory of Open Access Journals (Sweden)

    İsmail ÇAKIR

    2006-10-01

    Full Text Available It is a fact that language is a means of communication for human beings. People who need to have social interaction should share the same language, beliefs, values, etc., in a given society. It can be stated that when learning a foreign language, mastering only the linguistic features of the FL probably does not ensure true spoken and written communication. This study aims to deal with the socio-pragmatic problems which learners may be confronted with while learning and using the foreign language. In particular, it focuses on cultural features and values of the target language, such as idioms, proverbs and metaphors, and their role in foreign language teaching.

  8. Gendered Teacher–Student Interactions in English Language Classrooms

    Directory of Open Access Journals (Sweden)

    Jaleh Hassaskhah

    2013-09-01

    Full Text Available Being and becoming is the ultimate objective of any educational enterprise, including language teaching. However, research results indicate seemingly unjustified differences between how females and males are treated by EFL (English as a Foreign Language teachers. The overall aim of this study is to illustrate, analyze, and discuss aspects of gender bias and gender awareness in teacher–student interaction in the Iranian college context. To this end, teacher–student interactions of 20 English teachers and 500 students were investigated from the perspective of gender theory. The data were obtained via classroom observations, a seating chart and the audio-recording of all classroom interactions during the study. The findings, obtained from the quantitative descriptive statistics and chi-square methods, as well as the qualitative analysis by way of open and selective coding, uncovered that there were significant differences in the quantity and quality of the interaction for females and males in almost all categories of interaction. The study also revealed teachers’ perception of “gender,” the problems they associate with gender, and the attitudes they have to gender issues. Apparently, while positive incentives are able to facilitate learner growth, the presence of any negative barrier such as gender bias is likely to hinder development. This has implications for teachers, and faculty members who favor healthy and gender-neutral educational climate.

  9. Estimating Spoken Dialog System Quality with User Models

    CERN Document Server

    Engelbrecht, Klaus-Peter

    2013-01-01

    Spoken dialog systems have the potential to offer highly intuitive user interfaces, as they allow systems to be controlled using natural language. However, the complexity inherent in natural language dialogs means that careful testing of the system must be carried out from the very beginning of the design process.   This book examines how user models can be used to support such early evaluations in two ways:  by running simulations of dialogs, and by estimating the quality judgments of users. First, a design environment supporting the creation of dialog flows, the simulation of dialogs, and the analysis of the simulated data is proposed.  How the quality of user simulations may be quantified with respect to their suitability for both formative and summative evaluation is then discussed. The remainder of the book is dedicated to the problem of predicting quality judgments of users based on interaction data. New modeling approaches are presented, which process the dialogs as sequences, and which allow knowl...
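    Running simulations of dialogs from a user model can be sketched as follows. The dialog flow, states, and transition probabilities here are invented for illustration and are far simpler than the models the book develops.

```python
import random

FLOW = {  # system states and the user reactions each one permits
    "greet":    ["ask_date", "quit"],
    "ask_date": ["confirm", "ask_date", "quit"],
    "confirm":  ["done", "ask_date"],
}
USER = {  # P(user reaction | system state): a crude user model
    "greet":    [0.9, 0.1],
    "ask_date": [0.7, 0.2, 0.1],
    "confirm":  [0.8, 0.2],
}

def simulate(seed=0):
    """One simulated dialog: a path from 'greet' to a terminal state."""
    rng = random.Random(seed)
    state, path = "greet", ["greet"]
    while state in FLOW:
        state = rng.choices(FLOW[state], weights=USER[state])[0]
        path.append(state)
    return path

# Task-success rate over many simulated users: a cheap formative metric
# available before any real user ever touches the system.
runs = [simulate(seed) for seed in range(1000)]
success = sum(path[-1] == "done" for path in runs) / len(runs)
print(success)
```

    Even this crude simulation shows the point of the approach: design flaws in the dialog flow surface as low simulated success rates long before a summative user study.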

  10. Language, interactivity and solution probing: repetition without repetition

    DEFF Research Database (Denmark)

    Cowley, Stephen; Nash, Luarina

    2013-01-01

    Recognition of the importance of autopoiesis to biological systems was crucial in building an alternative to the classic view of cognitive science. However, concepts like structural coupling and autonomy are not strong enough to throw light on language and human problem solving. The argument is presented through a case study where a person solves a problem and, in so doing, relies on non-local aspects of the ecology as well as his observer's mental domain. Like Anthony Chemero we make links with ecological psychology to emphasize how embodiment draws on cultural resources as people concert thinking, action and perception. We trace this to human interactivity, or sense-saturated coordination, that renders possible language and human forms of cognition: it links human sense-making to historical experience. People play roles with natural and cultural artifacts as they act, animate groups and live through...

  11. Interaction of language processing and motor skill in children with specific language impairment.

    Science.gov (United States)

    DiDonato Brumbach, Andrea C; Goffman, Lisa

    2014-02-01

    To examine how language production interacts with speech motor and gross and fine motor skill in children with specific language impairment (SLI). Eleven children with SLI and 12 age-matched peers (4-6 years) produced structurally primed sentences containing particles and prepositions. Utterances were analyzed for errors and for articulatory duration and variability. Standard measures of motor, language, and articulation skill were also obtained. Sentences containing particles, as compared with prepositions, were less likely to be produced in a priming task and were longer in duration, suggesting increased difficulty with this syntactic structure. Children with SLI demonstrated higher articulatory variability and poorer gross and fine motor skills compared with age-matched controls. Articulatory variability was correlated with generalized gross and fine motor performance. Children with SLI show co-occurring speech motor and generalized motor deficits. Current theories do not fully account for the present findings, though the procedural deficit hypothesis provides a framework for interpreting overlap among language and motor domains.

  12. User-Centred Design for Chinese-Oriented Spoken English Learning System

    Science.gov (United States)

    Yu, Ping; Pan, Yingxin; Li, Chen; Zhang, Zengxiu; Shi, Qin; Chu, Wenpei; Liu, Mingzhuo; Zhu, Zhiting

    2016-01-01

    Oral production is an important part in English learning. Lack of a language environment with efficient instruction and feedback is a big issue for non-native speakers' English spoken skill improvement. A computer-assisted language learning system can provide many potential benefits to language learners. It allows adequate instructions and instant…

  13. Vývoj sociální kognice českých neslyšících dětí — uživatelů českého znakového jazyka a uživatelů mluvené češtiny: adaptace testové baterie : Development of Social Cognition in Czech Deaf Children — Czech Sign Language Users and Czech Spoken Language Users: Adaptation of a Test Battery

    Directory of Open Access Journals (Sweden)

    Andrea Hudáková

    2017-11-01

    Full Text Available The present paper describes the process of adapting a set of tasks for testing theory-of-mind competencies, the Theory of Mind Task Battery, for use with the population of Czech Deaf children, both users of Czech Sign Language and users of spoken Czech.

  14. Language used in interaction during developmental science instruction

    Science.gov (United States)

    Avenia-Tapper, Brianna

    The coordination of theory and evidence is an important part of scientific practice. Developmental approaches to instruction, which make the relationship between the abstract and the concrete a central focus of students' learning activity, provide educators with a unique opportunity to strengthen students' coordination of theory and evidence. Therefore, developmental approaches may be a useful instructional response to documented science achievement gaps for linguistically diverse students. However, if we are to leverage the potential of developmental instruction to improve the science achievement of linguistically diverse students, we need more information on the intersection of developmental science instruction and linguistically diverse learning contexts. This manuscript-style dissertation uses discourse analysis to investigate the language used in interaction during developmental teaching-learning in three linguistically diverse third-grade classrooms. The first manuscript asks how language was used to construct ascension from the abstract to the concrete. The second manuscript asks how students' non-English home languages were useful (or not) for meeting the learning goals of the developmental instructional program. The third manuscript asks how students' interlocutors may influence student choice to use an important discourse practice, justification, during the developmental teaching-learning activity. All three manuscripts report findings relevant to the instructional decisions that teachers need to make when implementing developmental instruction in linguistically diverse contexts.

  15. Developing a corpus of spoken language variability

    Science.gov (United States)

    Carmichael, Lesley; Wright, Richard; Wassink, Alicia Beckford

    2003-10-01

    We are developing a novel, searchable corpus as a research tool for investigating phonetic and phonological phenomena across various speech styles. Five speech styles have been well studied independently in previous work: reduced (casual), careful (hyperarticulated), citation (reading), Lombard effect (speech in noise), and "motherese" (child-directed speech). Few studies to date have collected a wide range of styles from a single set of speakers, and fewer yet have provided publicly available corpora. The pilot corpus includes recordings of (1) a set of speakers participating in a variety of tasks designed to elicit the five speech styles, and (2) casual peer conversations and wordlists to illustrate regional vowels. The data include high-quality recordings and time-aligned transcriptions linked to text files that can be queried. Initial measures drawn from the database provide comparison across speech styles along the following acoustic dimensions: MLU (changes in unit duration); relative intra-speaker intensity changes (mean and dynamic range); and intra-speaker pitch values (minimum, maximum, mean, range). The corpus design will allow for a variety of analyses requiring control of demographic and style factors, including hyperarticulation variety, disfluencies, intonation, discourse analysis, and detailed spectral measures.
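    The style comparisons listed above come down to per-token summary statistics; a minimal sketch follows. The field names and numeric values are hypothetical, and real measurements would come from acoustic analysis of the recordings.

```python
import statistics

def style_profile(f0_hz, intensities_db, unit_durations_s):
    """Summary measures of the kind compared across speech styles:
    pitch min/max/mean/range, intensity mean and dynamic range,
    and mean unit duration."""
    return {
        "f0_min": min(f0_hz),
        "f0_max": max(f0_hz),
        "f0_mean": statistics.fmean(f0_hz),
        "f0_range": max(f0_hz) - min(f0_hz),
        "intensity_mean": statistics.fmean(intensities_db),
        "intensity_range": max(intensities_db) - min(intensities_db),
        "mean_unit_duration": statistics.fmean(unit_durations_s),
    }

# Invented values: a Lombard-style token vs. a casual token.
lombard = style_profile([210, 260, 240], [78, 82, 80], [0.42, 0.38])
casual = style_profile([180, 200, 190], [62, 66, 64], [0.25, 0.22])
print(lombard["f0_range"], casual["f0_range"])  # → 50 20
```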

  16. Interactive data language (IDL) for medical image processing

    International Nuclear Information System (INIS)

    Md Saion Salikin

    2002-01-01

    Interactive Data Language (IDL) is one of many software packages available on the market for medical image processing and analysis. IDL is a complete, structured language that can be used both interactively and to create sophisticated functions, procedures, and applications. It provides suitable processing routines and display methods, including animation, specification of colour tables with 24-bit capability, 3-D visualization and many graphic operations. The important features of IDL for medical imaging are segmentation, visualization, quantification and pattern recognition. In visualization, IDL allows greater precision and flexibility when visualizing data; for example, it eliminates limits on the number of contour levels. In terms of data analysis, IDL can handle complicated functions such as the Fast Fourier Transform (FFT), the Hough and Radon transforms and Legendre polynomials, as well as simple functions such as histograms. In pattern recognition, a pattern is described in points rather than pixels. With this functionality, it is easy to re-use the same pattern on more than one destination device (even if the destinations have varying resolutions); in other words, it has the ability to specify values in points. There are, however, a few disadvantages to using IDL: licensing is by dongle key, and the limited licences restrict access for potential IDL users. A few examples are shown to demonstrate the capabilities of IDL in carrying out its functions for medical image processing. (Author)
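    Two of the analysis functions named above, the FFT and the histogram, are easy to illustrate outside IDL. The following naive Python sketch shows the underlying operations only, not IDL's optimized implementations.

```python
import cmath

def dft(x):
    """Naive discrete Fourier transform (what an FFT computes efficiently)."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def histogram(pixels, n_bins, lo, hi):
    """Counts of pixel intensities per bin over [lo, hi)."""
    counts = [0] * n_bins
    width = (hi - lo) / n_bins
    for p in pixels:
        if lo <= p < hi:
            counts[int((p - lo) / width)] += 1
    return counts

row = [0, 1, 0, -1] * 4                    # toy image row, period 4
spectrum = [abs(c) for c in dft(row)]
print(round(spectrum[4]))                  # → 8 (the period-4 component)
print(histogram([0, 3, 5, 250, 255], 4, 0, 256))  # → [3, 0, 0, 2]
```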

  17. DIFFERENCES BETWEEN AMERICAN SIGN LANGUAGE (ASL) AND BRITISH SIGN LANGUAGE (BSL)

    Directory of Open Access Journals (Sweden)

    Zora JACHOVA

    2008-06-01

    Full Text Available In the communication of deaf people among themselves and with hearing people there are three basic aspects of interaction: gesture, finger signs and writing. The gesture is a conventionally agreed manner of communication with the help of the hands, accompanied by facial and body mimicry. Gestures and movements pre-date speech; their purpose was to mark something, and later to emphasize the spoken expression. Stokoe was the first linguist to realise that signs are not unanalysable wholes. He analysed signs into smaller parts that he called "cheremes", which many linguists today call phonemes. He created three main phoneme categories: hand position, location and movement. Sign languages, like spoken languages, have a background in the distant past. They developed in parallel with the development of spoken language and underwent many historical changes. Therefore, today they are not a replacement for spoken language, but languages themselves in the real sense of the word. Although the structure of the English language used in the USA and in Great Britain is the same, their sign languages, ASL and BSL, are different.

  18. Thoughts about Central Andean Formative Languages and Societies

    OpenAIRE

    Kaulicke, Peter

    2012-01-01

    This paper deals with the general problem of the Formative Period and presents a proposal for its subdivision based upon characterizations of material cultures and their distributions as interaction spheres and traditions. These reflect significant changes that may be related to changes in the mechanisms of language dispersal. It hypothesizes that a pre-proto-Mochica was spoken in northern Perú; that multilingualism prevailed at the Chavín site; and that different languages existed in the ...

  19. SPOKEN BAHASA INDONESIA BY GERMAN STUDENTS

    Directory of Open Access Journals (Sweden)

    I Nengah Sudipa

    2014-11-01

    Full Text Available This article investigates the spoken ability of German students using Bahasa Indonesia (BI). They had studied it for six weeks in the IBSN Program at Udayana University, Bali, Indonesia. The data were collected when the students sat for the mid-term oral test and were further analyzed with reference to the standard usage of BI. The results suggest that most students managed to express several concepts related to (1) LOCATION; (2) TIME; (3) TRANSPORT; (4) PURPOSE; (5) TRANSACTION; (6) IMPRESSION; (7) REASON; (8) FOOD AND BEVERAGE; and (9) NUMBER AND PERSON. The only problem a few students encountered is influence from their own language system, called interference, especially in word order.

  20. Classroom Interaction in Teaching English as Foreign Language at Lower Secondary Schools in Indonesia

    Directory of Open Access Journals (Sweden)

    Hanna Sundari

    2017-12-01

    Full Text Available The aim of this study was to develop a deep understanding of interaction in the language classroom in a foreign language context. Interviews, as the major instrument, were conducted with twenty experienced English language teachers from eight lower secondary schools (SMP) in Jakarta, complemented by focus group discussions and class observations/recordings. The gathered data were analyzed according to the systematic design of the grounded theory method, through three-phase coding. A model of classroom interaction was formulated, defining several dimensions of interaction. Classroom interaction can be better comprehended against the background of interrelated factors: interaction practices, teacher and student factors, learning objectives, materials, classroom contexts, and the outer contexts surrounding the interaction practices. The developed model of interaction for the language classroom notably gives deep descriptions of how interaction substantially occurs and what factors affect it in foreign language classrooms at lower secondary schools, from teachers' perspectives.

  1. Recognizing Young Readers' Spoken Questions

    Science.gov (United States)

    Chen, Wei; Mostow, Jack; Aist, Gregory

    2013-01-01

    Free-form spoken input would be the easiest and most natural way for young children to communicate to an intelligent tutoring system. However, achieving such a capability poses a challenge both to instruction design and to automatic speech recognition. To address the difficulties of accepting such input, we adopt the framework of predictable…

  2. Criteria for the segmentation of spoken input into individual utterances

    OpenAIRE

    Mast, Marion; Maier, Elisabeth; Schmitz, Birte

    1995-01-01

    This report describes how spoken language turns are segmented into utterances in the framework of the verbmobil project. The problem of segmenting turns is directly related to the task of annotating a discourse with dialogue act information: an utterance can be characterized as a stretch of dialogue that is attributed one dialogue act. Unfortunately, this rule in many cases is insufficient and many doubtful cases remain. We tried to at least reduce the number of unclear cases by providing a n...

  3. Embodying multilingual interaction

    DEFF Research Database (Denmark)

    Hazel, Spencer; Mortensen, Janus

    this linguistic diversity is managed in situ by participants engaged in dialogue with one another, and what it is used for in these transient multilingual communities. This paper presents CA-based micro-ethnographic analyses of language choice in an informal social setting – a kitchen – of an international study… literature on language choice in interaction, our findings emphasize that analyses of language choice in multilingual settings need to take into account social actions beyond the words that are spoken. We show that facial, spatial and postural configurations, gaze orientation and gestures as well as prosodic… in the particular community of practice that we are investigating. Reference: Hazel, Spencer, and Janus Mortensen. Forthcoming. Kitchen talk: Exploring linguistic practices in liminal institutional interactions in a multilingual university setting. In Language Alternation, Language Choice, and Language Encounter…

  4. Episodic grammar: a computational model of the interaction between episodic and semantic memory in language processing

    NARCIS (Netherlands)

    Borensztajn, G.; Zuidema, W.; Carlson, L.; Hoelscher, C.; Shipley, T.F.

    2011-01-01

    We present a model of the interaction of semantic and episodic memory in language processing. Our work shows how language processing can be understood in terms of memory retrieval. We point out that the perceived dichotomy between rule-based versus exemplar-based language modelling can be

  5. English Language Teacher Educator Interactional Styles: Heterogeneity and Homogeneity in the ELTE Classroom

    Science.gov (United States)

    Lucero, Edgar; Scalante-Morales, Jeesica

    2018-01-01

    This article presents a research study on the interactional styles of teacher educators in the English language teacher education classroom. Two research methodologies, ethnomethodological conversation analysis and self-evaluation of teacher talk were applied to analyze 34 content- and language-based classes of nine English language teacher…

  6. A Prerequisite to L1 Homophone Effects in L2 Spoken-Word Recognition

    Science.gov (United States)

    Nakai, Satsuki; Lindsay, Shane; Ota, Mitsuhiko

    2015-01-01

    When both members of a phonemic contrast in L2 (second language) are perceptually mapped to a single phoneme in one's L1 (first language), L2 words containing a member of that contrast can spuriously activate L2 words in spoken-word recognition. For example, upon hearing cattle, Dutch speakers of English are reported to experience activation…

  7. Time course of Chinese monosyllabic spoken word recognition: evidence from ERP analyses.

    Science.gov (United States)

    Zhao, Jingjing; Guo, Jingjing; Zhou, Fengying; Shu, Hua

    2011-06-01

    Evidence from event-related potential (ERP) analyses of English spoken words suggests that the time course of English word recognition in monosyllables is cumulative. Different types of phonological competitors (i.e., rhymes and cohorts) modulate the temporal grain of ERP components differentially (Desroches, Newman, & Joanisse, 2009). The time course of Chinese monosyllabic spoken word recognition could be different from that of English due to the differences in syllable structure between the two languages (e.g., lexical tones). The present study investigated the time course of Chinese monosyllabic spoken word recognition using ERPs to record brain responses online while subjects listened to spoken words. During the experiment, participants were asked to compare a target picture with a subsequent picture by judging whether or not the two pictures belonged to the same semantic category. The spoken word was presented between the two pictures, and participants were not required to respond during its presentation. We manipulated phonological competition by presenting spoken words that either matched or mismatched the target picture in one of the following four ways: onset mismatch, rime mismatch, tone mismatch, or syllable mismatch. In contrast to the English findings, our findings showed that the three partial mismatches (onset, rime, and tone mismatches) equally modulated the amplitudes and time courses of the N400 (a negative component that peaks about 400 ms after the spoken word), whereas the syllable-mismatched words elicited an earlier and stronger N400 than the three partial-mismatched words. The results shed light on the important role of syllable-level awareness in Chinese spoken word recognition and also imply that the recognition of Chinese monosyllabic words might rely more on global similarity of the whole syllable structure, or syllable-based holistic processing, rather than on phonemic segment-based processing. We interpret the differences in spoken word…

  8. Serbian heritage language schools in the Netherlands through the eyes of the parents

    NARCIS (Netherlands)

    Palmen, Andrej

    It is difficult to find the exact number of other languages spoken besides Dutch in the Netherlands. A study showed that a total of 96 other languages are spoken by students attending Dutch primary and secondary schools. The variety of languages spoken shows the growth of linguistic diversity in the

  9. Bilingualism alters brain functional connectivity between "control" regions and "language" regions: Evidence from bimodal bilinguals.

    Science.gov (United States)

    Li, Le; Abutalebi, Jubin; Zou, Lijuan; Yan, Xin; Liu, Lanfang; Feng, Xiaoxia; Wang, Ruiming; Guo, Taomei; Ding, Guosheng

    2015-05-01

    Previous neuroimaging studies have revealed that bilingualism induces both structural and functional neuroplasticity in the dorsal anterior cingulate cortex (dACC) and the left caudate nucleus (LCN), both of which are associated with cognitive control. Since these "control" regions should work together with other language regions during language processing, we hypothesized that bilingualism may also alter the functional interaction between the dACC/LCN and language regions. Here we tested this hypothesis by exploring the functional connectivity (FC) in bimodal bilinguals and monolinguals using functional MRI when they either performed a picture naming task with spoken language or were in resting state. We found that for bimodal bilinguals who use spoken and sign languages, the FC of the dACC with regions involved in spoken language (e.g. the left superior temporal gyrus) was stronger in performing the task, but weaker in the resting state as compared to monolinguals. For the LCN, its intrinsic FC with sign language regions including the left inferior temporo-occipital part and right inferior and superior parietal lobules was increased in the bilinguals. These results demonstrate that bilingual experience may alter the brain functional interaction between "control" regions and "language" regions. For different control regions, the FC alters in different ways. The findings also deepen our understanding of the functional roles of the dACC and LCN in language processing. Copyright © 2015 Elsevier Ltd. All rights reserved.
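    The functional connectivity compared in such studies is commonly operationalized as the Pearson correlation between two regions' BOLD time series. A minimal sketch with invented numbers follows; real pipelines add preprocessing, nuisance regression and group statistics.

```python
import statistics

def functional_connectivity(ts_a, ts_b):
    """FC between two regions as the Pearson correlation of their
    BOLD time series (a common operationalization)."""
    ma, mb = statistics.fmean(ts_a), statistics.fmean(ts_b)
    cov = sum((a - ma) * (b - mb) for a, b in zip(ts_a, ts_b))
    var_a = sum((a - ma) ** 2 for a in ts_a)
    var_b = sum((b - mb) ** 2 for b in ts_b)
    return cov / (var_a * var_b) ** 0.5

# Invented toy series for a seed (e.g. dACC) and a target region.
dacc = [0.1, 0.4, 0.3, 0.8, 0.5, 0.9]
stg = [0.0, 0.5, 0.2, 0.7, 0.6, 1.0]
print(round(functional_connectivity(dacc, stg), 2))
```

    Group comparisons like those in the study then contrast such coefficients (often Fisher z-transformed) between bilinguals and monolinguals, task and rest.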

  10. Teaching and Learning Foreign Languages via System of “Voice over internet protocol” and Language Interactions Case Study: Skype

    Directory of Open Access Journals (Sweden)

    Wazira Ali Abdul Wahid

    2015-04-01

    Full Text Available This article reports a research study of online interaction in English teaching, especially conversation practice, through VOIP (Voice over Internet Protocol) and a cosmopolitan online theme. Data were gathered through interviews. The facilitators indicate how oral tasks need to be planned so as to foster engagement patterns conducive to language interaction and learning. The proficiencies and features collected across the two analyzed interviews suggest that this is an effective practice. Several indications point to the use of voice conferencing to improve oral performance in foreign-language interaction. Keywords: VOIP, CFs, EFL, Skype

  11. Interaction between lexical and grammatical language systems in the brain

    Science.gov (United States)

    Ardila, Alfredo

    2012-06-01

    This review concentrates on two different language dimensions: lexical/semantic and grammatical. This distinction between a lexical/semantic system and a grammatical system is well known in linguistics, but in cognitive neurosciences it has been obscured by the assumption that there are several forms of language disturbances associated with focal brain damage and hence language includes a diversity of functions (phoneme discrimination, lexical memory, grammar, repetition, language initiation ability, etc.), each one associated with the activity of a specific brain area. The clinical observation of patients with cerebral pathology shows that there are indeed only two different forms of language disturbances (disturbances in the lexical/semantic system and disturbances in the grammatical system); these two language dimensions are supported by different brain areas (temporal and frontal) in the left hemisphere. Furthermore, these two aspects of language develop at different ages during a child's language acquisition, and they probably appeared at different historical moments during human evolution. Mechanisms of learning are different for the two language systems: whereas lexical/semantic knowledge is based on declarative memory, grammatical knowledge corresponds to a procedural type of memory. Recognizing these two language dimensions can be crucial in understanding language evolution and human cognition.

  12. GAIML: A New Language for Verbal and Graphical Interaction in Chatbots

    OpenAIRE

    Roberto Pirrone; Giuseppe Russo; Vincenzo Cannella; Daniele Peri

    2008-01-01

    Natural and intuitive interaction between users and complex systems is a crucial research topic in human-computer interaction. A major direction is the definition and implementation of systems with natural language understanding capabilities. The interaction in natural language is often performed by means of systems called chatbots. A chatbot is a conversational agent with a proper knowledge base able to interact with users. A chatbot's appearance can be very sophisticated, with 3D avatars and sp...

  13. Assessing the Effectiveness of Parent-Child Interaction Therapy with Language Delayed Children: A Clinical Investigation

    Science.gov (United States)

    Falkus, Gila; Tilley, Ciara; Thomas, Catherine; Hockey, Hannah; Kennedy, Anna; Arnold, Tina; Thorburn, Blair; Jones, Katie; Patel, Bhavika; Pimenta, Claire; Shah, Rena; Tweedie, Fiona; O'Brien, Felicity; Leahy, Ruth; Pring, Tim

    2016-01-01

    Parent-child interaction therapy (PCIT) is widely used by speech and language therapists to improve the interactions between children with delayed language development and their parents/carers. Despite favourable reports of the therapy from clinicians, little evidence of its effectiveness is available. We investigated the effects of PCIT as…

  14. Interactive computing in BASIC an introduction to interactive computing and a practical course in the BASIC language

    CERN Document Server

    Sanderson, Peter C

    1973-01-01

    Interactive Computing in BASIC: An Introduction to Interactive Computing and a Practical Course in the BASIC Language provides a general introduction to the principles of interactive computing and a comprehensive practical guide to the programming language Beginners All-purpose Symbolic Instruction Code (BASIC). The book starts by providing an introduction to computers and discussing the aspects of terminal usage, programming languages, and the stages in writing and testing a program. The text then discusses BASIC with regard to methods in writing simple arithmetical programs, control statements…

  15. Assessing Group Interaction with Social Language Network Analysis

    Science.gov (United States)

    Scholand, Andrew J.; Tausczik, Yla R.; Pennebaker, James W.

    In this paper we discuss a new methodology, social language network analysis (SLNA), that combines tools from social language processing and network analysis to assess socially situated working relationships within a group. Specifically, SLNA aims to identify and characterize the nature of working relationships by processing artifacts generated with computer-mediated communication systems, such as instant message texts or emails. Because social language processing is able to identify psychological, social, and emotional processes that individuals are not able to fully mask, social language network analysis can clarify and highlight complex interdependencies between group members, even when these relationships are latent or unrecognized.
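    By way of illustration only (the message log, names, and word list below are invented, not taken from the record), the core SLNA idea of deriving a working-relationship network plus social-language features from communication artifacts can be sketched as:

```python
# Toy sketch of the SLNA approach described above: build a directed
# interaction network from chat messages and attach a simple social-language
# feature (here, the rate of first-person-plural pronouns) to each sender.
from collections import Counter, defaultdict

# Hypothetical message log: (sender, recipient, text)
messages = [
    ("ana", "bob", "can we merge your branch today"),
    ("bob", "ana", "sure, I will rebase it first"),
    ("ana", "bob", "great, our deadline is friday"),
    ("cara", "ana", "I pushed the fix"),
]

WE_WORDS = {"we", "us", "our", "ours"}  # crude stand-in for a social-language lexicon

edges = Counter()              # (sender, recipient) -> message count
we_counts = defaultdict(int)   # sender -> "we"-word tokens
token_counts = defaultdict(int)  # sender -> total tokens

for sender, recipient, text in messages:
    edges[(sender, recipient)] += 1
    tokens = text.lower().split()
    token_counts[sender] += len(tokens)
    we_counts[sender] += sum(1 for t in tokens if t in WE_WORDS)

# Per-sender rate of "we"-language, a proxy for group orientation
we_rate = {s: we_counts[s] / token_counts[s] for s in token_counts}
print(sorted(edges.items()))
print({s: round(r, 2) for s, r in we_rate.items()})
```

A real SLNA pipeline would use a validated psycholinguistic lexicon and weight edges by more than raw message counts; this only shows the shape of the computation.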

  16. L[subscript 1] and L[subscript 2] Spoken Word Processing: Evidence from Divided Attention Paradigm

    Science.gov (United States)

    Shafiee Nahrkhalaji, Saeedeh; Lotfi, Ahmad Reza; Koosha, Mansour

    2016-01-01

    The present study aims to reveal some facts concerning first language (L[subscript 1]) and second language (L[subscript 2]) spoken-word processing in unbalanced proficient bilinguals using behavioral measures. The intention here is to examine the effects of auditory repetition word priming and semantic priming in first and second languages of…

  17. Beyond mechanistic interaction: Value-based constraints on meaning in language

    Directory of Open Access Journals (Sweden)

    Joanna eRączaszek-Leonardi

    2015-10-01

    Full Text Available According to situated, embodied, and distributed approaches to cognition, language is a crucial means for structuring social interactions. Recent approaches that emphasize the coordinative function of language treat language as a system of replicable constraints that work both on individuals and on interactions. In this paper we argue that the integration of the replicable-constraints approach to language with the ecological view on values allows for a deeper insight into processes of meaning creation in interaction. Such a synthesis of these frameworks draws attention to important sources of structuring interactions beyond the sheer efficiency of a collective system in its current task situation. Most importantly, the workings of linguistic constraints will be shown as embedded in more general fields of values, which are realized on multiple timescales. Since the ontogenetic timescale offers a convenient window into the process of the emergence of linguistic constraints, we present illustrations of concrete mechanisms through which values may become embodied in language use in development.

  18. INTERACTION AND INTERACTIVITY IN BLOGS OF PORTUGUESE LANGUAGE TEACHING UNDER THE PERSPECTIVE OF MULTILITERACIES

    Directory of Open Access Journals (Sweden)

    Clara Dornelles

    2015-12-01

    Full Text Available With the entry of information and communication technologies (ICT) into the school, there is the possibility of making use of technological tools that can contribute to multiliteracies. In this work, we assume a qualitative and interpretive perspective to investigate how teachers have been using blogs for teaching Portuguese, as well as how the students’ participation occurs in that digital context. Through the analysis of blogs recently produced by primary and high school teachers, we reflect on and discuss the relationship between school literacy and multiliteracies (SIGNORINI, 2012), based on the concepts of interaction, participation (GOFFMAN, 1998; OLIVEIRA; LUCENA FILHO, 2006) and interactivity (SANTAELLA, 2008). The results indicate that many school literacy practices are transposed to the blogs of Portuguese Language teaching, and that students’ participation only occurs when the teacher leads the construction of a space that is predisposed to multiliteracies, in which everyone plays important roles.

  19. Digital gaming and second language development: Japanese learners interactions in a MMORPG

    OpenAIRE

    Mark Peterson

    2011-01-01

    Massively multiplayer online role-playing games (MMORPGs) are identified as valuable arenas for language learning, as they provide access to contexts and types of interaction that are held to be beneficial in second language acquisition research. This paper will describe the development and key features of these games, and explore claims made regarding their value as environments for language learning. The discussion will then examine current research. This is followed by an analysis of t...

  20. Biomechanically Preferred Consonant-Vowel Combinations Fail to Appear in Adult Spoken Corpora

    Science.gov (United States)

    Whalen, D. H.; Giulivi, Sara; Nam, Hosung; Levitt, Andrea G.; Hallé, Pierre; Goldstein, Louis M.

    2012-01-01

    Certain consonant/vowel (CV) combinations are more frequent than would be expected from the individual C and V frequencies alone, both in babbling and, to a lesser extent, in adult language, based on dictionary counts: Labial consonants co-occur with central vowels more often than chance would dictate; coronals co-occur with front vowels, and velars with back vowels (Davis & MacNeilage, 1994). Plausible biomechanical explanations have been proposed, but it is also possible that infants are mirroring the frequency of the CVs that they hear. As noted, previous assessments of adult language were based on dictionaries; these “type” counts are incommensurate with the babbling measures, which are necessarily “token” counts. We analyzed the tokens in two spoken corpora for English, two for French and one for Mandarin. We found that the adult spoken CV preferences correlated with the type counts for Mandarin and French, not for English. Correlations between the adult spoken corpora and the babbling results had all three possible outcomes: significantly positive (French), uncorrelated (Mandarin), and significantly negative (English). There were no correlations of the dictionary data with the babbling results when we consider all nine combinations of consonants and vowels. The results indicate that spoken frequencies of CV combinations can differ from dictionary (type) counts and that the CV preferences apparent in babbling are biomechanically driven and can ignore the frequencies of CVs in the ambient spoken language. PMID:23420980
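    The token-based analysis described above rests on a simple comparison: how often a CV combination occurs in running speech versus how often it would occur if consonants and vowels paired at random. A minimal sketch (with an invented toy "corpus", not the authors' data) of that observed/expected ratio:

```python
# Toy sketch of the token-count analysis described above: compare observed
# CV-combination frequencies against the expectation from independent C and V
# frequencies. A ratio above 1 means the pair co-occurs more often than chance.
from collections import Counter

# Hypothetical CV syllable tokens from a tiny invented corpus
syllables = ["ba", "ba", "bi", "da", "di", "di", "di", "gu", "gu", "ga"]

cv_counts = Counter(syllables)
c_counts = Counter(s[0] for s in syllables)
v_counts = Counter(s[1] for s in syllables)
n = len(syllables)

def observed_expected(c, v):
    """Observed P(CV) divided by expected P(C) * P(V) under independence."""
    observed = cv_counts[c + v] / n
    expected = (c_counts[c] / n) * (v_counts[v] / n)
    return observed / expected

# Coronal consonant + front vowel, the kind of pairing favored in babbling
print(round(observed_expected("d", "i"), 2))
```

In this toy corpus "di" yields a ratio of 1.88, i.e., the coronal-front pairing is overrepresented relative to chance; the study computes the same kind of statistic over real spoken corpora and correlates it with babbling and dictionary counts.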

  1. An Investigation of Pre-Service English Language Teacher Attitudes towards Varieties of English in Interaction

    Science.gov (United States)

    Litzenberg, Jason

    2013-01-01

    English has become the default language of global communication, and users around the world are adapting the traditional standards of grammar and interaction. It is imperative that teachers of English keep pace with these changing conceptualizations of the language as well as the changing expectations of its users so that they can best prepare…

  2. Reading Intervention Using Interactive Metronome in Children with Language and Reading Impairment: A Preliminary Investigation

    Science.gov (United States)

    Ritter, Michaela; Colson, Karen A.; Park, Jungjun

    2013-01-01

    This exploratory study examined the effects of Interactive Metronome (IM) when integrated with a traditional language and reading intervention on reading achievement. Forty-nine school-age children with language and reading impairments were assigned randomly to either an experimental group who received the IM treatment or to a control group who…

  3. Specific Language Impairment - Evidence for the division of labor and the interaction between grammar and pragmatics

    NARCIS (Netherlands)

    Schaeffer, J.

    2012-01-01

    This study analyzes grammatical and pragmatic data of English- and Dutch-acquiring children with SLI, and compares them to the language of typically developing children, in order to gain more insight into the organization of language, in particular the dissociation and interaction of grammar and pragmatics.

  4. Video-Based Interaction, Negotiation for Comprehensibility, and Second Language Speech Learning: A Longitudinal Study

    Science.gov (United States)

    Saito, Kazuya; Akiyama, Yuka

    2017-01-01

    This study examined the impact of video-based conversational interaction on the longitudinal development (one academic semester) of second language production by college-level Japanese English-as-a-foreign-language learners. Students in the experimental group engaged in weekly dyadic conversation exchanges with native speakers in the United States…

  5. Teaching materials on language endangerment, an interactive e-learning module on the internet

    NARCIS (Netherlands)

    Odé, C.; de Graaf, T.; Ostler, N.; Salverda, R.

    2008-01-01

    In 2007, in the framework of the NWO (Netherlands Organisation for Scientific Research) Research Programme on Endangered Languages, an interactive e-learning module was developed on language endangerment. The module for students in secondary schools (15-18 years of age) is available free of charge.

  6. Development and Relationships Between Phonological Awareness, Morphological Awareness and Word Reading in Spoken and Standard Arabic

    Directory of Open Access Journals (Sweden)

    Rachel Schiff

    2018-04-01

    Full Text Available This study addressed the development of and the relationship between foundational metalinguistic skills and word reading skills in Arabic. It compared Arabic-speaking children’s phonological awareness (PA), morphological awareness, and voweled and unvoweled word reading skills in spoken and standard language varieties separately in children across five grade levels from childhood to adolescence. Second, it investigated whether skills developed in the spoken variety of Arabic predict reading in the standard variety. Results indicate that although individual differences between students in PA are eliminated toward the end of elementary school in both spoken and standard language varieties, gaps in morphological awareness and in reading skills persisted through junior and high school years. The results also show that the gap in reading accuracy and fluency between Spoken Arabic (SpA) and Standard Arabic (StA) was evident in both voweled and unvoweled words. Finally, regression analyses showed that morphological awareness in SpA contributed to reading fluency in StA, i.e., children’s early morphological awareness in SpA explained variance in children’s gains in reading fluency in StA. These findings have important theoretical and practical contributions for Arabic reading theory in general and they extend the previous work regarding the cross-linguistic relevance of foundational metalinguistic skills in the first acquired language to reading in a second language, as in societal bilingualism contexts, or a second language variety, as in diglossic contexts.

  7. Development and Relationships Between Phonological Awareness, Morphological Awareness and Word Reading in Spoken and Standard Arabic

    Science.gov (United States)

    Schiff, Rachel; Saiegh-Haddad, Elinor

    2018-01-01

    This study addressed the development of and the relationship between foundational metalinguistic skills and word reading skills in Arabic. It compared Arabic-speaking children’s phonological awareness (PA), morphological awareness, and voweled and unvoweled word reading skills in spoken and standard language varieties separately in children across five grade levels from childhood to adolescence. Second, it investigated whether skills developed in the spoken variety of Arabic predict reading in the standard variety. Results indicate that although individual differences between students in PA are eliminated toward the end of elementary school in both spoken and standard language varieties, gaps in morphological awareness and in reading skills persisted through junior and high school years. The results also show that the gap in reading accuracy and fluency between Spoken Arabic (SpA) and Standard Arabic (StA) was evident in both voweled and unvoweled words. Finally, regression analyses showed that morphological awareness in SpA contributed to reading fluency in StA, i.e., children’s early morphological awareness in SpA explained variance in children’s gains in reading fluency in StA. These findings have important theoretical and practical contributions for Arabic reading theory in general and they extend the previous work regarding the cross-linguistic relevance of foundational metalinguistic skills in the first acquired language to reading in a second language, as in societal bilingualism contexts, or a second language variety, as in diglossic contexts. PMID:29686633

  8. Beyond mechanistic interaction: value-based constraints on meaning in language.

    Science.gov (United States)

    Rączaszek-Leonardi, Joanna; Nomikou, Iris

    2015-01-01

    According to situated, embodied, and distributed approaches to cognition, language is a crucial means for structuring social interactions. Recent approaches that emphasize this coordinative function treat language as a system of replicable constraints on individual and interactive dynamics. In this paper, we argue that the integration of the replicable-constraints approach to language with the ecological view on values allows for a deeper insight into processes of meaning creation in interaction. Such a synthesis of these frameworks draws attention to important sources of structuring interactions beyond the sheer efficiency of a collective system in its current task situation. Most importantly, the workings of linguistic constraints will be shown as embedded in more general fields of values, which are realized on multiple timescales. Because the ontogenetic timescale offers a convenient window into the emergence of linguistic constraints, we present illustrations of concrete mechanisms through which values may become embodied in language use in development.

  9. Digital Language Death

    Science.gov (United States)

    Kornai, András

    2013-01-01

    Of the approximately 7,000 languages spoken today, some 2,500 are generally considered endangered. Here we argue that this consensus figure vastly underestimates the danger of digital language death, in that less than 5% of all languages can still ascend to the digital realm. We present evidence of a massive die-off caused by the digital divide. PMID:24167559

  10. Digital language death.

    Directory of Open Access Journals (Sweden)

    András Kornai

    Full Text Available Of the approximately 7,000 languages spoken today, some 2,500 are generally considered endangered. Here we argue that this consensus figure vastly underestimates the danger of digital language death, in that less than 5% of all languages can still ascend to the digital realm. We present evidence of a massive die-off caused by the digital divide.

  11. Rapid modulation of spoken word recognition by visual primes.

    Science.gov (United States)

    Okano, Kana; Grainger, Jonathan; Holcomb, Phillip J

    2016-02-01

    In a masked cross-modal priming experiment with ERP recordings, spoken Japanese words were primed with words written in one of the two syllabary scripts of Japanese. An early priming effect, peaking at around 200ms after onset of the spoken word target, was seen in left lateral electrode sites for Katakana primes, and later effects were seen for both Hiragana and Katakana primes on the N400 ERP component. The early effect is thought to reflect the efficiency with which words in Katakana script make contact with sublexical phonological representations involved in spoken language comprehension, due to the particular way this script is used by Japanese readers. This demonstrates fast-acting influences of visual primes on the processing of auditory target words, and suggests that briefly presented visual primes can influence sublexical processing of auditory target words. The later N400 priming effects, on the other hand, most likely reflect cross-modal influences on activity at the level of whole-word phonology and semantics.

  12. Early human communication helps in understanding language evolution.

    Science.gov (United States)

    Lenti Boero, Daniela

    2014-12-01

    Building a theory on extant species, as Ackermann et al. do, is a useful contribution to the field of language evolution. Here, I add another living model that might be of interest: human language ontogeny in the first year of life. A better knowledge of this phase might help in understanding two more topics among the "several building blocks of a comprehensive theory of the evolution of spoken language" indicated in their conclusion by Ackermann et al., that is, the foundation of the co-evolution of linguistic motor skills with the auditory skills underlying speech perception, and the possible phylogenetic interactions of protospeech production with referential capabilities.

  13. Language Development in Children with Language Disorders: An Introduction to Skinner's Verbal Behavior and the Techniques for Initial Language Acquisition

    Science.gov (United States)

    Casey, Laura Baylot; Bicard, David F.

    2009-01-01

    Language development in typically developing children has a very predictable pattern beginning with crying, cooing, babbling, and gestures along with the recognition of spoken words, comprehension of spoken words, and then one word utterances. This predictable pattern breaks down for children with language disorders. This article will discuss…

  14. GAIML: A New Language for Verbal and Graphical Interaction in Chatbots

    Directory of Open Access Journals (Sweden)

    Roberto Pirrone

    2008-01-01

    Full Text Available Natural and intuitive interaction between users and complex systems is a crucial research topic in human-computer interaction. A major direction is the definition and implementation of systems with natural language understanding capabilities. The interaction in natural language is often performed by means of systems called chatbots. A chatbot is a conversational agent with a proper knowledge base able to interact with users. A chatbot's appearance can be very sophisticated, with 3D avatars and speech processing modules. However, the interaction between the system and the user is only performed through textual areas for inputs and replies. An interaction able to add graphical widgets to natural language could be more effective. On the other side, a graphical interaction that also involves natural language can increase the comfort of the user compared with using only graphical widgets. In many applications multi-modal communication must be preferred when the user and the system have a tight and complex interaction. Typical examples are cultural heritage applications (intelligent museum guides, picture browsing) or systems providing the user with integrated information taken from different and heterogeneous sources, as in the case of the iGoogle™ interface. We propose to mix the two modalities (verbal and graphical) to build systems with a reconfigurable interface, which is able to change with respect to the particular application context. The result of this proposal is the Graphical Artificial Intelligence Markup Language (GAIML), an extension of AIML allowing both interaction modalities to be merged. In this context a suitable chatbot system called Graphbot is presented to support this language. With this language it is possible to define personalized interface patterns that are the most suitable ones in relation to the data types exchanged between the user and the system according to the context of the dialogue.

  15. Multiclausal Utterances Aren't Just for Big Kids: A Framework for Analysis of Complex Syntax Production in Spoken Language of Preschool- and Early School-Age Children

    Science.gov (United States)

    Arndt, Karen Barako; Schuele, C. Melanie

    2013-01-01

    Complex syntax production emerges shortly after the emergence of two-word combinations in oral language and continues to develop through the school-age years. This article defines a framework for the analysis of complex syntax in the spontaneous language of preschool- and early school-age children. The purpose of this article is to provide…

  16. The Effect of Interactivity with a Music Video Game on Second Language Vocabulary Recall

    Directory of Open Access Journals (Sweden)

    Jonathan DeHaan

    2010-06-01

    Full Text Available Video games are potential sources of second language input; however, the medium’s fundamental characteristic, interactivity, has not been thoroughly examined in terms of its effect on learning outcomes. This experimental study investigated to what degree, if at all, video game interactivity would help or hinder the noticing and recall of second language vocabulary. Eighty randomly-selected Japanese university undergraduates were paired based on similar English language and game proficiencies. One subject played an English-language music video game for 20 minutes while the paired subject watched the game simultaneously on another monitor. Following gameplay, a vocabulary recall test, a cognitive load measure, an experience questionnaire, and a two-week delayed vocabulary recall test were administered. Results were analyzed using paired samples t-tests and various analyses of variance. Both the players and the watchers of the video game recalled vocabulary from the game, but the players recalled significantly less vocabulary than the watchers. This seems to be a result of the extraneous cognitive load induced by the interactivity of the game; the players perceived the game and its language to be significantly more difficult than the watchers did. Players also reported difficulty simultaneously attending to gameplay and vocabulary. Both players and watchers forgot significant amounts of vocabulary over the course of the study. We relate these findings to theories and studies of vocabulary acquisition and video game-based language learning, and then suggest implications for language teaching and learning with interactive multimedia.

  17. Dynamic Adaptation in Child-Adult Language Interaction

    Science.gov (United States)

    van Dijk, Marijn; van Geert, Paul; Korecky-Kröll, Katharina; Maillochon, Isabelle; Laaha, Sabine; Dressler, Wolfgang U.; Bassano, Dominique

    2013-01-01

    When speaking to young children, adults adapt their language to that of the child. In this article, we suggest that this child-directed speech (CDS) is the result of a transactional process of dynamic adaptation between the child and the adult. The study compares developmental trajectories of three children to those of the CDS of their caregivers.…

  18. Interactions of Identity: Indochinese Refugee Youths, Language Use, and Schooling.

    Science.gov (United States)

    Kuwahara, Yuri

    A study examined the roles of language and school in the lives of a group of five Indochinese friends, aged 10-12, in the same sixth-grade class. Two were born in the United States; three were born in Thai refugee camps. The ways in which the subjects defined themselves in relation to other students, particularly other Asian students, and to each…

  19. Technology-enhanced instruction in learning world languages: The Middlebury interactive learning program

    Directory of Open Access Journals (Sweden)

    Cynthia Lake

    2015-03-01

    Full Text Available Middlebury Interactive Language (MIL) programs are designed to teach world language courses using blended and online learning for students in kindergarten through grade 12. Middlebury Interactive courses start with fundamental building blocks in four key areas of world-language study: listening comprehension, speaking, reading, and writing. As students progress through the course levels, they deepen their understanding of the target language, continuing to focus on the three modes of communication: interpretive, interpersonal, and presentational. The extensive use of authentic materials (video, audio, images, or texts) is intended to provide a contextualized and interactive presentation of the vocabulary and the linguistic structures. In the present paper, we describe the MIL program and the results of a mixed-methods survey and case-study evaluation of its implementation in a broad sample of schools. Technology application is examined with regard to MIL instructional strategies and the present evaluation approach relative to those employed in the literature.

  20. Word level language identification in online multilingual communication

    NARCIS (Netherlands)

    Nguyen, Dong-Phuong; Dogruoz, A. Seza

    2013-01-01

    Multilingual speakers switch between languages in online and spoken communication. Analyses of large-scale multilingual data require automatic language identification at the word level. For our experiments with multilingual online discussions, we first tag the language of individual words using…
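    A deliberately naive illustration of word-level language identification for code-switched text, in the spirit of the record above (the word lists and sentence are invented; real systems combine dictionaries with context and character n-gram models):

```python
# Tag each word of a code-switched sentence by lexicon lookup.
# Words found in both lexicons, or in neither, are tagged "unk" to make the
# ambiguity visible; resolving it is what context models are for.
DUTCH = {"ik", "heb", "een", "echt", "mooi", "weekend"}
ENGLISH = {"i", "had", "a", "really", "nice", "weekend"}

def tag_words(sentence):
    tags = []
    for word in sentence.lower().split():
        in_nl, in_en = word in DUTCH, word in ENGLISH
        if in_nl and not in_en:
            tags.append((word, "nl"))
        elif in_en and not in_nl:
            tags.append((word, "en"))
        else:
            tags.append((word, "unk"))  # ambiguous or out-of-vocabulary
    return tags

print(tag_words("ik heb a really nice weekend"))
```

Note how "weekend", present in both lexicons, comes out as "unk": word-level lookup alone cannot decide it, which motivates the context-sensitive tagging the record describes.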

  1. Monitoring the Performance of Human and Automated Scores for Spoken Responses

    Science.gov (United States)

    Wang, Zhen; Zechner, Klaus; Sun, Yu

    2018-01-01

    As automated scoring systems for spoken responses are increasingly used in language assessments, testing organizations need to analyze their performance, as compared to human raters, across several dimensions, for example, on individual items or based on subgroups of test takers. In addition, there is a need in testing organizations to establish…

  2. Webster's word power better English grammar improve your written and spoken English

    CERN Document Server

    Kirkpatrick, Betty

    2014-01-01

    With questions and answer sections throughout, this book helps you to improve your written and spoken English through understanding the structure of the English language. This is a thorough and useful book with all parts of speech and grammar explained. Used by ELT self-study students.

  3. Between Syntax and Pragmatics: The Causal Conjunction Protože in Spoken and Written Czech

    Czech Academy of Sciences Publication Activity Database

    Čermáková, Anna; Komrsková, Zuzana; Kopřivová, Marie; Poukarová, Petra

    -, 25.04.2017 (2017), s. 393-414 ISSN 2509-9507 R&D Projects: GA ČR GA15-01116S Institutional support: RVO:68378092 Keywords : Causality * Discourse marker * Spoken language * Czech Subject RIV: AI - Linguistics OBOR OECD: Linguistics https://link.springer.com/content/pdf/10.1007%2Fs41701-017-0014-y.pdf

  4. Language, Space, Power: Reflections on Linguistic and Spatial Turns in Urban Research

    DEFF Research Database (Denmark)

    Vuolteenaho, Jani; Ameel, Lieven; Newby, Andrew

    2012-01-01

    to conceptualise the power-embeddedness of urban spaces, processes and identities. More recently, however, the ramifications of the linguistic turn across urban research have proliferated as a result of approaches in which specific place-bound language practices and language-based representations about cities have…) and thematic interests (from place naming to interactional uses of spoken language) that have been significant channels in re-directing urban scholars’ attention to the concrete workings of language. As regards the spatial turn, we highlight the relevance of the connectivity-, territoriality-, attachment…

  5. Visual Sonority Modulates Infants' Attraction to Sign Language

    Science.gov (United States)

    Stone, Adam; Petitto, Laura-Ann; Bosworth, Rain

    2018-01-01

    The infant brain may be predisposed to identify perceptually salient cues that are common to both signed and spoken languages. Recent theory based on spoken languages has advanced sonority as one of these potential language acquisition cues. Using a preferential looking paradigm with an infrared eye tracker, we explored visual attention of hearing…

  6. Bimodal Bilingual Language Development of Hearing Children of Deaf Parents

    Science.gov (United States)

    Hofmann, Kristin; Chilla, Solveig

    2015-01-01

    Adopting a bimodal bilingual language acquisition model, this qualitative case study is the first in Germany to investigate the spoken and sign language development of hearing children of deaf adults (codas). The spoken language competence of six codas within the age range of 3;10 to 6;4 is assessed by a series of standardised tests (SETK 3-5,…

  7. Music and Language Syntax Interact in Broca's Area: An fMRI Study.

    Directory of Open Access Journals (Sweden)

    Richard Kunert

    Full Text Available Instrumental music and language are both syntactic systems, employing complex, hierarchically-structured sequences built using implicit structural norms. This organization allows listeners to understand the role of individual words or tones in the context of an unfolding sentence or melody. Previous studies suggest that the brain mechanisms of syntactic processing may be partly shared between music and language. However, functional neuroimaging evidence for anatomical overlap of brain activity involved in linguistic and musical syntactic processing has been lacking. In the present study we used functional magnetic resonance imaging (fMRI) in conjunction with an interference paradigm based on sung sentences. We show that the processing demands of musical syntax (harmony) and language syntax interact in Broca's area in the left inferior frontal gyrus (without leading to music and language main effects). A language main effect in Broca's area only emerged in the complex music harmony condition, suggesting that (with our stimuli and tasks) a language effect only becomes visible under conditions of increased demands on shared neural resources. In contrast to previous studies, our design allows us to rule out that the observed neural interaction is due to: (1) general attention mechanisms, as a psychoacoustic auditory anomaly behaved unlike the harmonic manipulation, and (2) error processing, as the language and the music stimuli contained no structural errors. The current results thus suggest that two different cognitive domains, music and language, might draw on the same high-level syntactic integration resources in Broca's area.

  8. IDAL: an interactive analysis language for high energy physics

    International Nuclear Information System (INIS)

    Burnett, T.H.

    1990-01-01

    The SLAC e+e− experiment SLD has adopted a unique off-line software environment, IDA. It provides a command processor shell for all code, from reconstruction and Monte Carlo production to user DST physics analysis. An essential component is an incrementally-compiled language, IDAL. IDAL allows symbolic access to SLD data structures, and supports special loop constructs to allow examination of all banks of a given type. IDAL also recognizes statements that simultaneously define histograms and generate code to fill them.

  9. Lexical and articulatory interactions in children’s language production

    Science.gov (United States)

    Heisler, Lori; Goffman, Lisa; Younger, Barbara

    2009-01-01

    Traditional models of adult language processing and production include two levels of representation: lexical and sublexical. The current study examines the influence of the inclusion of a lexical representation (i.e., a visual referent and/or object function) on the stability of articulation as well as on phonetic accuracy and variability in typically developing children and children with specific language impairment (SLI). A word learning paradigm was developed so that we could compare children’s production with and without lexical representation. The variability and accuracy of productions were examined using speech kinematics as well as traditional phonetic accuracy measures. Results showed that phonetic forms with lexical representation were produced with more articulatory stability than phonetic forms without lexical representation. Using more traditional transcription measures, a paired lexical referent generally did not influence segmental accuracy (percent consonant correct and type token ratio). These results suggest that lexical and articulatory levels of representation are not completely independent. Implications for models of language production are discussed. PMID:20712738

  10. Lexical and articulatory interactions in children's language production.

    Science.gov (United States)

    Heisler, Lori; Goffman, Lisa; Younger, Barbara

    2010-09-01

    Traditional models of adult language processing and production include two levels of representation: lexical and sublexical. The current study examines the influence of the inclusion of a lexical representation (i.e. a visual referent and/or object function) on the stability of articulation as well as on phonetic accuracy and variability in typically developing children and children with specific language impairment (SLI). A word learning paradigm was developed so that we could compare children's production with and without lexical representation. The variability and accuracy of productions were examined using speech kinematics as well as traditional phonetic accuracy measures. Results showed that phonetic forms with lexical representation were produced with more articulatory stability than phonetic forms without lexical representation. Using more traditional transcription measures, a paired lexical referent generally did not influence segmental accuracy (percent consonant correct and type token ratio). These results suggest that lexical and articulatory levels of representation are not completely independent. Implications for models of language production are discussed.

  11. The Fading Phase of Igbo Language and Culture: Path to its ...

    African Journals Online (AJOL)

    Tracie1

    …favour of foreign language (and culture). They also … native language, and children are unable to learn a language not spoken … shielding them off their mother tongue”. … the effect endangered language has on the existence of the owners.

  12. Stability in Chinese and Malay heritage languages as a source of divergence

    NARCIS (Netherlands)

    Aalberse, S.; Moro, F.; Braunmüller, K.; Höder, S.; Kühl, K.

    2014-01-01

    This article discusses Malay and Chinese heritage languages as spoken in the Netherlands. Heritage speakers are dominant in another language and use their heritage language less. Moreover, they have qualitatively and quantitatively different input from monolinguals. Heritage languages are often

  13. Stability in Chinese and Malay heritage languages as a source of divergence

    NARCIS (Netherlands)

    Aalberse, S.; Moro, F.R.; Braunmüller, K.; Höder, S.; Kühl, K.

    2015-01-01

    This article discusses Malay and Chinese heritage languages as spoken in the Netherlands. Heritage speakers are dominant in another language and use their heritage language less. Moreover, they have qualitatively and quantitatively different input from monolinguals. Heritage languages are often

  14. Player-Game Interaction: An Ecological Analysis of Foreign Language Gameplay Activities

    Science.gov (United States)

    Ibrahim, Karim

    2018-01-01

    This article describes how the literature on game-based foreign language (FL) learning has demonstrated that player-game interactions have a strong potential for FL learning. However, little is known about the fine-grained dynamics of these interactions, or how they could facilitate FL learning. To address this gap, the researcher conducted a…

  15. Developing the Second Language Writing Process through Social Media-Based Interaction Tasks

    Science.gov (United States)

    Gómez, Julian Esteban Zapata

    2015-01-01

    This paper depicts the results from a qualitative research study focused on finding out the effect of interaction through social media on the development of second language learners' written production from a private school in Medellín, Antioquia, Colombia. The study was framed within concepts such as "social interaction," "digital…

  16. Relationship between social interaction bids and language in late talking children.

    Science.gov (United States)

    Vuksanovic, Jasmina R

    2015-03-28

    The aim of this paper is to explore the relationship between language development and the frequency of social interaction (SI) behaviours during language acquisition in late-talking (LT) children who exhibit delays in expressive vocabulary development but have age-appropriate cognitive skills. The research consists of a longitudinal study with a first test followed by two re-tests 5 months apart, in which LT children were compared to 5-months-younger typically-developing (TD) children. Data showed that LT children performed significantly fewer initiation of SI behaviours, but no differences between groups in responding to SI behaviours were observed. Furthermore, LT children who have lower language comprehension scores initiate social interaction more frequently. The results showed that LT children seem to be less active in starting social interaction and participation, but, once they get involved, they respond similarly to TD children of comparable expressive language competence. Additionally, the correlation pattern between the frequency of SI behaviours and language functions showed that LT toddlers with more prominent receptive language delay are more interested in initiating interaction with their partner, thus suggesting that they need a partner's "scaffolding" to overcome this lack.

  17. Interaction and common ground in dementia: Communication across linguistic and cultural diversity in a residential dementia care setting.

    Science.gov (United States)

    Strandroos, Lisa; Antelius, Eleonor

    2017-09-01

    Previous research concerning bilingual people with a dementia disease has mainly focused on the importance of sharing a spoken language with caregivers. While acknowledging this, this article addresses the multidimensional character of communication and interaction. As using spoken language is made difficult as a consequence of the dementia disease, this multidimensionality becomes particularly important. The article is based on a qualitative analysis of ethnographic fieldwork at a dementia care facility. It presents ethnographic examples of different communicative forms, with particular focus on bilingual interactions. Interaction is understood as a collective and collaborative activity. The text finds that a shared spoken language is advantageous, but is not the only source of, nor a guarantee for, creating common ground and understanding. Communicative resources other than spoken language are for example body language, embodiment, artefacts and time. Furthermore, forms of communication are not static but develop, change and are created over time. Ability to communicate is thus not something that one has or has not, but is situationally and collaboratively created. To facilitate this, time and familiarity are central resources, and the results indicate the importance of continuity in interpersonal relations.

  18. Spoken Narrative Assessment: A Supplementary Measure of Children's Creativity

    Science.gov (United States)

    Wong, Miranda Kit-Yi; So, Wing Chee

    2016-01-01

    This study developed a spoken narrative (i.e., storytelling) assessment as a supplementary measure of children's creativity. Both spoken and gestural contents of children's spoken narratives were coded to assess their verbal and nonverbal creativity. The psychometric properties of the coding system for the spoken narrative assessment were…

  19. Language

    DEFF Research Database (Denmark)

    Sanden, Guro Refsum

    2016-01-01

    Purpose: – The purpose of this paper is to analyse the consequences of globalisation in the area of corporate communication, and investigate how language may be managed as a strategic resource. Design/methodology/approach: – A review of previous studies on the effects of globalisation on corporate communication and the implications of language management initiatives in international business. Findings: – Efficient language management can turn language into a strategic resource. Language needs analyses, i.e. linguistic auditing/language check-ups, can be used to determine the language situation of a company. Language policies and/or strategies can be used to regulate a company’s internal modes of communication. Language management tools can be deployed to address existing and expected language needs. Continuous feedback from the front line ensures strategic learning and reduces the risk of suboptimal…

  20. Children’s Third-Party Understanding of Communicative Interactions in a Foreign Language

    Directory of Open Access Journals (Sweden)

    Narges Afshordi

    2018-01-01

    Full Text Available Two studies explored young children’s understanding of the role of shared language in communication by investigating how monolingual English-speaking children interact with an English speaker, a Spanish speaker, and a bilingual experimenter who spoke both English and Spanish. When the bilingual experimenter spoke in Spanish or English to request objects, four-year-old children, but not three-year-olds, used her language choice to determine whom she addressed (e.g. requests in Spanish were directed to the Spanish speaker). Importantly, children used this cue – language choice – only in a communicative context. The findings suggest that by four years, monolingual children recognize that speaking the same language enables successful communication, even when that language is unfamiliar to them. Three-year-old children’s failure to make this distinction suggests that this capacity likely undergoes significant development in early childhood, although other capacities might also be at play.

  1. Basic speech recognition for spoken dialogues

    CSIR Research Space (South Africa)

    Van Heerden, C

    2009-09-01

    Full Text Available Spoken dialogue systems (SDSs) have great potential for information access in the developing world. However, the realisation of that potential requires the solution of several challenging problems, including the development of sufficiently accurate...

  2. Parent-child reading interactions among English and English as a second language speakers in an underserved pediatric clinic in Hawai'i.

    Science.gov (United States)

    Kitabayashi, Kristyn M; Huang, Gary Y; Linskey, Katy R; Pirga, Jason; Bane-Terakubo, Teresa; Lee, Meta T

    2008-10-01

    The purpose of this study was to compare reading patterns between English-speaking and English as a Second Language (ESL) families in a health care setting in Hawai'i. A cross-sectional study was performed at an underserved pediatric primary care clinic in Hawai'i. Caregivers of patients between the ages of 6 months and 5 years were asked questions regarding demographics and parent-child reading interactions. Respondents were categorized into English-speaking or ESL groups based on the primary language spoken at home. Pearson chi2 tests and Fisher exact tests were performed to compare demographic differences, reading frequency, and reading attitudes between groups. One hundred three respondents completed the survey. Fifty percent were ESL. All ESL respondents were of Asian-Pacific Islander (API) or mixed Asian ethnicity. All Caucasians in the study (n = 9) were in the English-speaking group. Between the English-speaking (n = 52) and ESL (n = 51) groups, there were no significant statistical differences in age or gender of the child, reading attitudes, or parents' educational status. Parents in the ESL group read to their children significantly fewer days per week than their English-speaking counterparts, had significantly fewer books in the home, and had lived significantly fewer years in the United States. The findings suggest that API immigrant families share similar attitudes about reading as English-speaking families in Hawai'i but have significantly fewer books in their households and read significantly less frequently. Physicians working with API populations should be aware that immigrant children may have fewer reading interactions and should counsel parents on the importance of reading daily.
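The Pearson chi-square test used in this study compares observed cell counts in a contingency table against the counts expected if group and outcome were independent. A minimal pure-Python sketch (the cell counts below are hypothetical, chosen only to echo the study's 103-respondent, 52/51 split; they are not the study's data):

```python
def pearson_chi2(table):
    """Pearson chi-square statistic (no continuity correction)
    for a 2D contingency table given as a list of rows."""
    rows = [sum(r) for r in table]
    cols = [sum(c) for c in zip(*table)]
    n = sum(rows)
    chi2 = 0.0
    for i, r in enumerate(rows):
        for j, c in enumerate(cols):
            expected = r * c / n  # count expected under independence
            chi2 += (table[i][j] - expected) ** 2 / expected
    return chi2

# Hypothetical 2x2 table: rows = English-speaking (52) vs ESL (51),
# columns = reads to child daily yes/no.
table = [[30, 22],
         [15, 36]]
chi2 = pearson_chi2(table)
dof = (len(table) - 1) * (len(table[0]) - 1)  # = 1 for a 2x2 table
print(round(chi2, 2), dof)  # → 8.37 1
```

In practice one would compare the statistic to a chi-square distribution with `dof` degrees of freedom (e.g. via `scipy.stats.chi2_contingency`), and fall back to Fisher's exact test when expected counts are small, as the study reports doing.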

  3. Grammar Is a System That Characterizes Talk in Interaction.

    Science.gov (United States)

    Ginzburg, Jonathan; Poesio, Massimo

    2016-01-01

    Much of contemporary mainstream formal grammar theory is unable to provide analyses for language as it occurs in actual spoken interaction. Its analyses are developed for a cleaned-up version of language which omits the disfluencies, non-sentential utterances, gestures, and many other phenomena that are ubiquitous in spoken language. Using evidence from linguistics, conversation analysis, multimodal communication, psychology, language acquisition, and neuroscience, we show that these aspects of language use are rule-governed in much the same way as phenomena captured by conventional grammars. Furthermore, we argue that over the past few years some of the tools required to provide a precise characterization of such phenomena have begun to emerge in theoretical and computational linguistics; hence, there is no reason for treating them as "second class citizens" other than pre-theoretical assumptions about what should fall under the purview of grammar. Finally, we suggest that grammar formalisms covering such phenomena would provide a better foundation not just for linguistic analysis of face-to-face interaction, but also for sister disciplines, such as research on spoken dialogue systems and/or psychological work on language acquisition.

  4. The gender congruency effect during bilingual spoken-word recognition

    Science.gov (United States)

    Morales, Luis; Paolieri, Daniela; Dussias, Paola E.; Valdés Kroff, Jorge R.; Gerfen, Chip; Bajo, María Teresa

    2016-01-01

    We investigate the ‘gender-congruency’ effect during a spoken-word recognition task using the visual world paradigm. Eye movements of Italian–Spanish bilinguals and Spanish monolinguals were monitored while they viewed a pair of objects on a computer screen. Participants listened to instructions in Spanish (encuentra la bufanda / ‘find the scarf’) and clicked on the object named in the instruction. Grammatical gender of the objects’ name was manipulated so that pairs of objects had the same (congruent) or different (incongruent) gender in Italian, but gender in Spanish was always congruent. Results showed that bilinguals, but not monolinguals, looked at target objects less when they were incongruent in gender, suggesting a between-language gender competition effect. In addition, bilinguals looked at target objects more when the definite article in the spoken instructions provided a valid cue to anticipate its selection (different-gender condition). The temporal dynamics of gender processing and cross-language activation in bilinguals are discussed. PMID:28018132

  5. Reflection of society and language interaction in Internet-discourse

    Directory of Open Access Journals (Sweden)

    Nefedov Igor Vladislavovich

    2015-09-01

    Full Text Available The article attempts to show how extralinguistic factors condition the active use in online discourse of the lexeme maidan and of words related to it through word-formation and occasional paronomasia with emotional-evaluative meaning. In recent years the lexeme maidan has become one of the most important discursive phenomena within the new modern-language situation. The events of late 2013 and early 2014 led to a new political confrontation in Ukraine and, as a consequence, to the activation of the word maidan. Analysis of linguistic resources represented in online discourse suggests that the semantic network of the lexeme has changed considerably: there are new, contextually conditioned lexical meanings; some of the old meanings have moved to the periphery; and some have acquired a very narrow scope of usage. In online discourse, the language picture of the world is represented by a large number of new words and by the intensified use of words long established in the lexical system. Many of these words have negative semantics and colloquial pejorative and derogatory overtones. This is due to extralinguistic factors, namely the political events in the life of Ukrainian society at the present stage.

  6. ATTILA 2 S. A technical and interactive test language for architecture allowing simultaneity

    International Nuclear Information System (INIS)

    Batllo, M.

    1980-01-01

    The name ATTILA 2 S is inspired by ATLAS, a test language adopted by the US Department of Defense (DoD) that cannot be implemented on our installation. ATTILA 2 S is principally characterized by its technical vocabulary (P.O.L.), its interactivity, and its simultaneity with the main job (multiprogramming and multiprocessing allowed by a multiprocessor architecture). This language has been developed for the Paris C.R.T. system (a photograph analysis system) on a Control Data Cyber 72 computer.

  7. Language contact phenomena in the language use of speakers of German descent and the significance of their language attitudes

    Directory of Open Access Journals (Sweden)

    Ries, Veronika

    2014-03-01

    Full Text Available Within the scope of my investigation of the language use and language attitudes of People of German Descent from the USSR, I regularly find various language contact phenomena, such as viel bliny habn=wir gbackt (engl.: 'we cooked lots of pancakes') (cf. Ries 2011). The aim of the analysis is to examine both language use with regard to different forms of language contact and the language attitudes of the observed speakers. To be able to analyse both of these aspects and synthesize them, different types of data are required. The research is based on the following two data types: everyday conversations and interviews. In addition, the individual speakers' biography is a key part of the analysis, because it allows one to draw conclusions about language attitudes and use. This qualitative research is based on morpho-syntactic and interactional linguistic analysis of authentic spoken data. The data come from a corpus compiled and edited by myself. My being a member of the examined group allowed me to build up an authentic corpus. The natural language use is analysed from the perspective of different language contact phenomena and the potential functions of language alternations. One central issue is: how do speakers use the languages available to them, German and Russian? Structural characteristics such as code-switching and discursive motives for these phenomena are discussed as results, together with the socio-cultural background of the individual speaker. Within the scope of this article I present, as an example, the data and results of one speaker.

  8. Guest Comment: Universal Language Requirement.

    Science.gov (United States)

    Sherwood, Bruce Arne

    1979-01-01

    Explains that reading English is almost universal among scientists; however, there are enormous problems with spoken English. Advocates the use of Esperanto as a viable alternative, and as a language requirement for graduate work. (GA)

  9. Becoming "Spanish Learners": Identity and Interaction among Multilingual Children in a Spanish-English Dual Language Classroom

    Science.gov (United States)

    Martínez, Ramón Antonio; Durán, Leah; Hikida, Michiko

    2017-01-01

    This article explores the interactional co-construction of identities among two first-grade students learning Spanish as a third language in a Spanish-English dual language classroom. Drawing on ethnographic and interactional data, the article focuses on a single interaction between these two "Spanish learners" and two of their…

  10. Effects of Auditory and Visual Priming on the Identification of Spoken Words.

    Science.gov (United States)

    Shigeno, Sumi

    2017-04-01

    This study examined the effects of preceding contextual stimuli, either auditory or visual, on the identification of spoken target words. Fifty-one participants (29% males, 71% females; mean age = 24.5 years, SD = 8.5) were divided into three groups: no context, auditory context, and visual context. All target stimuli were spoken words masked with white noise. The relationships between the context and target stimuli were as follows: identical word, similar word, and unrelated word. Participants presented with context experienced a sequence of six context stimuli in the form of either spoken words or photographs. Auditory and visual context conditions produced similar results, but the auditory context aided word identification more than the visual context in the similar word relationship. We discuss these results in the light of top-down processing, motor theory, and the phonological system of language.

  11. Social interaction, languaging and the operational conditions for the emergence of observing.

    Science.gov (United States)

    Raimondi, Vincenzo

    2014-01-01

    In order to adequately understand the foundations of human social interaction, we need to provide an explanation of our specific mode of living based on linguistic activity and the cultural practices with which it is interwoven. To this end, we need to make explicit the constitutive conditions for the emergence of the phenomena which relate to language and joint activity starting from their operational-relational matrix. The approach presented here challenges the inadequacy of mentalist models to explain the relation between language and interaction. Recent empirical studies concerning joint attention and language acquisition have led scholars such as Tomasello et al. (2005) to postulate the existence of a universal human "sociocognitive infrastructure" that drives joint social activities and is biologically inherited. This infrastructure would include the skill of precocious intention-reading, and is meant to explain human linguistic development and cultural learning. However, the cognitivist and functionalist assumptions on which this model relies have resulted in controversial hypotheses (i.e., intention-reading as the ontogenetic precursor of language) which take a contentious conception of mind and language for granted. By challenging this model, I will show that we should instead turn ourselves towards a constitutive explanation of language within a "bio-logical" understanding of interactivity. This is possible only by abandoning the cognitivist conception of organism and traditional views of language. An epistemological shift must therefore be proposed, based on embodied, enactive and distributed approaches, and on Maturana's work in particular. The notions of languaging and observing that will be discussed in this article will allow for a bio-logically grounded, theoretically parsimonious alternative to mentalist and spectatorial approaches, and will guide us towards a wider understanding of our sociocultural mode of living.

  12. Social interaction, languaging and the operational conditions for the emergence of observing

    Directory of Open Access Journals (Sweden)

    Vincenzo eRaimondi

    2014-08-01

    Full Text Available In order to adequately understand the foundations of human social interaction, we need to provide an explanation of our specific mode of living based on linguistic activity and the cultural practices with which it is interwoven. To this end, we need to make explicit the constitutive conditions for the emergence of the phenomena which relate to language and joint activity starting from their operational-relational matrix. The approach presented here challenges the inadequacy of mentalist models to explain the relation between language and interaction. Recent empirical studies concerning joint attention and language acquisition have led scholars such as Tomasello and his colleagues to postulate the existence of a universal human sociocognitive infrastructure that drives joint social activities and is biologically inherited. This infrastructure would include the skill of precocious intention-reading, and is meant to explain human linguistic development and cultural learning. However, the cognitivist and functionalist assumptions on which this model relies have resulted in controversial hypotheses (i.e., intention-reading as the ontogenetic precursor of language) which take a contentious conception of mind and language for granted. By challenging this model, I will show that we should instead turn ourselves towards a constitutive explanation of language within a bio-logical understanding of interactivity. This is possible only by abandoning the cognitivist conception of organism and traditional views of language. An epistemological shift must therefore be proposed, based on embodied, enactive and distributed approaches, and on Maturana’s work in particular. The notions of languaging and observing that will be discussed in this article will allow for a bio-logically grounded, theoretically parsimonious alternative to mentalist and spectatorial approaches, and will guide us towards a wider understanding of our sociocultural mode of living.

  13. Parent-Child Interaction Therapy (PCIT) in school-aged children with specific language impairment.

    Science.gov (United States)

    Allen, Jessica; Marshall, Chloë R

    2011-01-01

    Parents play a critical role in their child's language development. Therefore, advising parents of a child with language difficulties on how to facilitate their child's language might benefit the child. Parent-Child Interaction Therapy (PCIT) has been developed specifically for this purpose. In PCIT, the speech-and-language therapist (SLT) works collaboratively with parents, altering interaction styles to make interaction more appropriate to their child's level of communicative needs. This study investigates the effectiveness of PCIT in 8-10-year-old children with specific language impairment (SLI) in the expressive domain. It aimed to identify whether PCIT had any significant impact on the following communication parameters of the child: verbal initiations, verbal and non-verbal responses, mean length of utterance (MLU), and proportion of child-to-parent utterances. Sixteen children with SLI and their parents were randomly assigned to two groups: treated or delayed treatment (control). The treated group took part in PCIT over a 4-week block, and then returned to the clinic for a final session after a 6-week consolidation period with no input from the therapist. The treated and control group were assessed in terms of the different communication parameters at three time points: pre-therapy, post-therapy (after the 4-week block) and at the final session (after the consolidation period), through video analysis. It was hypothesized that all communication parameters would significantly increase in the treated group over time and that no significant differences would be found in the control group. All the children in the treated group made language gains during spontaneous interactions with their parents. In comparison with the control group, PCIT had a positive effect on three of the five communication parameters: verbal initiations, MLU and the proportion of child-to-parent utterances. There was a marginal effect on verbal responses, and a trend towards such an effect…

  14. Une Progression dans la Strategie Pedagogique pour assurer la Construction du Langage Oral a l'Ecole Maternelle [A Progression in Teaching Strategies to Ensure Oral Language Building in Nursery School].

    Science.gov (United States)

    Durand, C.

    1997-01-01

    Summarizes progressions between 2 and 6 years of age in children's power of concentration, ability to express ideas, build logical relationships, structure spoken words, and play with the semantic, phonetic, syntactical, and morphological aspects of oral language. Notes that the progression depends on the educator's interaction with the child.…

  15. Dynamical Integration of Language and Behavior in a Recurrent Neural Network for Human--Robot Interaction

    Directory of Open Access Journals (Sweden)

    Tatsuro Yamada

    2016-07-01

    To work cooperatively with humans by using language, robots must not only acquire a mapping between language and their behavior but also autonomously utilize the mapping in appropriate contexts of interactive tasks online. To this end, we propose a novel learning method linking language to robot behavior by means of a recurrent neural network. In this method, the network learns from correct examples of the imposed task that are given not as explicitly separated sets of language and behavior but as sequential data constructed from the actual temporal flow of the task. By doing this, the internal dynamics of the network models both language-behavior relationships and the temporal patterns of interaction. Here, "internal dynamics" refers to the time development of the system defined on the fixed-dimensional space of the internal states of the context layer. Thus, in the execution phase, by constantly representing where in the interaction context it is as its current state, the network autonomously switches between recognition and generation phases without any explicit signs and utilizes the acquired mapping in appropriate contexts. To evaluate our method, we conducted an experiment in which a robot generates appropriate behavior responding to a human's linguistic instruction. After learning, the network actually formed the attractor structure representing both language-behavior relationships and the task's temporal pattern in its internal dynamics. In the dynamics, language-behavior mapping was achieved by the branching structure. Repetition of human's instruction and robot's behavioral response was represented as the cyclic structure, and besides, waiting to a subsequent instruction was represented as the fixed-point attractor. Thanks to this structure, the robot was able to interact online with a human concerning the given task by autonomously switching phases.

  16. Dynamical Integration of Language and Behavior in a Recurrent Neural Network for Human-Robot Interaction.

    Science.gov (United States)

    Yamada, Tatsuro; Murata, Shingo; Arie, Hiroaki; Ogata, Tetsuya

    2016-01-01

    To work cooperatively with humans by using language, robots must not only acquire a mapping between language and their behavior but also autonomously utilize the mapping in appropriate contexts of interactive tasks online. To this end, we propose a novel learning method linking language to robot behavior by means of a recurrent neural network. In this method, the network learns from correct examples of the imposed task that are given not as explicitly separated sets of language and behavior but as sequential data constructed from the actual temporal flow of the task. By doing this, the internal dynamics of the network models both language-behavior relationships and the temporal patterns of interaction. Here, "internal dynamics" refers to the time development of the system defined on the fixed-dimensional space of the internal states of the context layer. Thus, in the execution phase, by constantly representing where in the interaction context it is as its current state, the network autonomously switches between recognition and generation phases without any explicit signs and utilizes the acquired mapping in appropriate contexts. To evaluate our method, we conducted an experiment in which a robot generates appropriate behavior responding to a human's linguistic instruction. After learning, the network actually formed the attractor structure representing both language-behavior relationships and the task's temporal pattern in its internal dynamics. In the dynamics, language-behavior mapping was achieved by the branching structure. Repetition of human's instruction and robot's behavioral response was represented as the cyclic structure, and besides, waiting to a subsequent instruction was represented as the fixed-point attractor. Thanks to this structure, the robot was able to interact online with a human concerning the given task by autonomously switching phases.
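
The core mechanism described in this abstract, a context layer whose current state encodes where in the interaction the system is, can be illustrated with a minimal Elman-style recurrent update. The weights and inputs below are invented for illustration; this is a sketch of the general idea, not the authors' trained network:

```python
import math

def elman_step(x, h, w_in, w_rec):
    """One update of a context layer: h_t = tanh(W_in x_t + W_rec h_{t-1}).

    The recurrent term W_rec h_{t-1} is what carries the interaction context
    forward in time, letting the network behave differently on the same input
    depending on what came before, without any explicit phase signal.
    """
    return [
        math.tanh(
            sum(wi * xi for wi, xi in zip(row_in, x))
            + sum(wr * hi for wr, hi in zip(row_rec, h))
        )
        for row_in, row_rec in zip(w_in, w_rec)
    ]

# Two-unit context layer with illustrative fixed weights.
w_in = [[0.8, -0.3], [0.1, 0.9]]
w_rec = [[0.5, 0.2], [-0.4, 0.7]]

h = [0.0, 0.0]
# A toy sequence: instruction, pause, response.
for x in ([1.0, 0.0], [0.0, 0.0], [0.0, 1.0]):
    h = elman_step(x, h, w_in, w_rec)
print(h)  # final context state; each component lies in (-1, 1)
```

In the paper's setting, the trained dynamics of exactly this kind of state space form the branching, cyclic, and fixed-point attractor structures the abstract describes.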

  17. Interactive Media to Support Language Acquisition for Deaf Students

    Science.gov (United States)

    Parton, Becky Sue; Hancock, Robert; Crain-Dorough, Mindy; Oescher, Jeff

    2009-01-01

    Tangible computing combines digital feedback with physical interactions - an important link for young children. Through the use of Radio Frequency Identification (RFID) technology, a real-world object (i.e. a chair) or a symbolic toy (i.e. a stuffed bear) can be tagged so that students can activate multimedia learning modules automatically. The…

  18. INDIVIDUAL ACCOUNTABILITY IN COOPERATIVE LEARNING: MORE OPPORTUNITIES TO PRODUCE SPOKEN ENGLISH

    Directory of Open Access Journals (Sweden)

    Puji Astuti

    2017-05-01

    The contribution of cooperative learning (CL) in promoting second and foreign language learning has been widely acknowledged. Little scholarly attention, however, has been given to revealing how this teaching method works and promotes learners’ improved communicative competence. This qualitative case study explores the important role that individual accountability in CL plays in giving English as a Foreign Language (EFL) learners in Indonesia the opportunity to use the target language of English. While individual accountability is a principle of and one of the activities in CL, it is currently understudied, thus little is known about how it enhances EFL learning. This study aims to address this gap by conducting a constructivist grounded theory analysis on participant observation, in-depth interview, and document analysis data drawn from two secondary school EFL teachers, 77 students in the observed classrooms, and four focal students. The analysis shows that through individual accountability in CL, the EFL learners had opportunities to use the target language, which may have contributed to the attainment of communicative competence—the goal of the EFL instruction. More specifically, compared to the use of conventional group work in the observed classrooms, through the activities of individual accountability in CL, i.e., performances and peer interaction, the EFL learners had more opportunities to use spoken English. The present study recommends that teachers, especially those new to CL, follow the preset procedure of selected CL instructional strategies or structures in order to recognize the activities within individual accountability in CL and understand how these activities benefit students.

  19. RAPPORT-BUILDING THROUGH CALL IN TEACHING CHINESE AS A FOREIGN LANGUAGE: AN EXPLORATORY STUDY

    Directory of Open Access Journals (Sweden)

    Wenying Jiang

    2005-05-01

    Technological advances have brought about the ever-increasing utilisation of computer-assisted language learning (CALL) media in the learning of a second language (L2). Computer-mediated communication, for example, provides a practical means for extending the learning of spoken language, a challenging process in tonal languages such as Chinese, beyond the realms of the classroom. In order to effectively improve spoken language competency, however, CALL applications must also reproduce the social interaction that lies at the heart of language learning and language use. This study draws on data obtained from the utilisation of CALL in the learning of L2 Chinese to explore whether this medium can be used to extend opportunities for rapport-building in language teaching beyond the face-to-face interaction of the classroom. Rapport's importance lies in its potential to enhance learning, motivate learners, and reduce learner anxiety. To date, CALL's potential in relation to this facet of social interaction remains a neglected area of research. The results of this exploratory study suggest that CALL may help foster learner-teacher rapport and that scaffolding, such as strategically composing rapport-fostering questions in sound-files, is conducive to this outcome. The study provides an instruction model for this application of CALL.

  20. Use of spoken and written Japanese did not protect Japanese-American men from cognitive decline in late life.

    Science.gov (United States)

    Crane, Paul K; Gruhl, Jonathan C; Erosheva, Elena A; Gibbons, Laura E; McCurry, Susan M; Rhoads, Kristoffer; Nguyen, Viet; Arani, Keerthi; Masaki, Kamal; White, Lon

    2010-11-01

    Spoken bilingualism may be associated with cognitive reserve. Mastering a complicated written language may be associated with additional reserve. We sought to determine if midlife use of spoken and written Japanese was associated with lower rates of late life cognitive decline. Participants were second-generation Japanese-American men from the Hawaiian island of Oahu, born 1900-1919, free of dementia in 1991, and categorized based on midlife self-reported use of spoken and written Japanese (total n included in primary analysis = 2,520). Cognitive functioning was measured with the Cognitive Abilities Screening Instrument scored using item response theory. We used mixed effects models, controlling for age, income, education, smoking status, apolipoprotein E e4 alleles, and number of study visits. Rates of cognitive decline were not related to use of spoken or written Japanese. This finding was consistent across numerous sensitivity analyses. We did not find evidence to support the hypothesis that multilingualism is associated with cognitive reserve.
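
The abstract mentions scoring the Cognitive Abilities Screening Instrument with item response theory. The idea behind IRT scoring, estimating a latent ability from a pattern of item responses and item difficulties, can be sketched with a minimal one-parameter (Rasch) model. The item difficulties and the Newton-Raphson routine here are illustrative assumptions, not the study's actual scoring code:

```python
import math

def rasch_ability(responses, difficulties, iterations=25):
    """Newton-Raphson maximum-likelihood estimate of ability theta under the
    Rasch (1PL) model, where P(correct | theta, b) = 1 / (1 + exp(-(theta - b))).

    Assumes a mix of correct (1) and incorrect (0) responses; with all-correct
    or all-incorrect patterns the maximum-likelihood estimate is infinite.
    """
    theta = 0.0
    for _ in range(iterations):
        probs = [1.0 / (1.0 + math.exp(-(theta - b))) for b in difficulties]
        gradient = sum(x - p for x, p in zip(responses, probs))   # dL/dtheta
        hessian = -sum(p * (1.0 - p) for p in probs)              # d2L/dtheta2
        theta -= gradient / hessian
    return theta

difficulties = [-1.5, -0.5, 0.0, 0.5, 1.5]   # illustrative item difficulties
print(rasch_ability([1, 1, 1, 0, 0], difficulties))
```

Answering more (or harder) items correctly yields a higher theta, which is the quantity a longitudinal model of cognitive decline would then track across visits.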

  1. Interferência da língua falada na escrita de crianças: processos de apagamento da oclusiva dental /d/ e da vibrante final /r/ [Interference of the spoken language on children's writing: cancellation processes of the dental occlusive /d/ and final vibrant /r/]

    Directory of Open Access Journals (Sweden)

    Socorro Cláudia Tavares de Sousa

    2009-01-01

    The present study investigates the influence of the spoken language on children's writing in relation to the phenomena of cancellation of the dental /d/ and of the final vibrant /r/. We elaborated and applied a research instrument to children from primary schools in Fortaleza and used the SPSS software to analyze the data. The results showed that male sex and words of three or more syllables are factors that partially influence the realization of the dependent variable /no/, and that verbs and level of education are conditioning elements for the cancellation of the final vibrant /r/.

  2. Digital gaming and second language development: Japanese learners' interactions in an MMORPG

    Directory of Open Access Journals (Sweden)

    Mark Peterson

    2011-04-01

    Massively multiplayer online role-playing games (MMORPGs) are identified as valuable arenas for language learning, as they provide access to contexts and types of interaction that are held to be beneficial in second language acquisition research. This paper will describe the development and key features of these games, and explore claims made regarding their value as environments for language learning. The discussion will then examine current research. This is followed by an analysis of the findings from an experimental qualitative study that investigates the interaction and attitudes of Japanese English as a foreign language learners who participated in MMORPG-based game play. The analysis draws attention to the challenging nature of the communication environment and the need for learner training. The findings indicate that system management issues, proficiency levels, the operation of affective factors, and prior gaming experiences appeared to influence participation. The data show that for the intermediate learners who were novice users, the interplay of these factors appeared to restrict opportunities to engage in beneficial forms of interaction. In a positive finding, it was found that the intermediate and advanced level participants effectively utilized both adaptive and transfer discourse management strategies. Analysis reveals they took the lead in managing their discourse, and actively engaged in collaborative social interaction involving dialog in the target language. Participant feedback suggests that the real-time computer-based nature of the interaction provided benefits. These include access to an engaging social context, enjoyment, exposure to new vocabulary, reduced anxiety, and valuable opportunities to practice using a foreign language. This paper concludes by identifying areas of interest for future research.

  3. Speech perception and reading: two parallel modes of understanding language and implications for acquiring literacy naturally.

    Science.gov (United States)

    Massaro, Dominic W

    2012-01-01

    I review 2 seminal research reports published in this journal during its second decade more than a century ago. Given psychology's subdisciplines, they would not normally be reviewed together because one involves reading and the other speech perception. The small amount of interaction between these domains might have limited research and theoretical progress. In fact, the 2 early research reports revealed common processes involved in these 2 forms of language processing. Their illustration of the role of Wundt's apperceptive process in reading and speech perception anticipated descriptions of contemporary theories of pattern recognition, such as the fuzzy logical model of perception. Based on the commonalities between reading and listening, one can question why they have been viewed so differently. It is commonly believed that learning to read requires formal instruction and schooling, whereas spoken language is acquired from birth onward through natural interactions with people who talk. Most researchers and educators believe that spoken language is acquired naturally from birth onward and even prenatally. Learning to read, on the other hand, is not possible until the child has acquired spoken language, reaches school age, and receives formal instruction. If an appropriate form of written text is made available early in a child's life, however, the current hypothesis is that reading will also be learned inductively and emerge naturally, with no significant negative consequences. If this proposal is true, it should soon be possible to create an interactive system, Technology Assisted Reading Acquisition, to allow children to acquire literacy naturally.

  4. UNDERSTANDING TENOR IN SPOKEN TEXTS IN YEAR XII ENGLISH TEXTBOOK TO IMPROVE THE APPROPRIACY OF THE TEXTS

    Directory of Open Access Journals (Sweden)

    Noeris Meristiani

    2011-07-01

    The goal of English Language Teaching is communicative competence. To reach this goal, students should be supplied with good model texts. These texts should consider the appropriacy of language use. By analyzing the context of situation, focused here on tenor, the meanings constructed to build the relationships among the interactants in spoken texts can be unfolded. This study aims at investigating the interpersonal relations (tenor) of the interactants in the conversation texts as well as the appropriacy of their realization in the given contexts. The study was conducted under discourse analysis by applying a descriptive qualitative method. There were eight conversation texts which function as examples in five chapters of a textbook. The data were analyzed by using lexicogrammatical analysis, described, and interpreted contextually. Then, the realization of the tenor of the texts was further analyzed in terms of appropriacy to suggest improvement. The results of the study show that the tenor indicates relationships between friend-friend, student-student, questioners-respondents, mother-son, and teacher-student; the power is equal and unequal; the social distances show frequent contact, relatively frequent contact, relatively low contact, high and low affective involvement, using informal, relatively informal, relatively formal, and formal language. There are also some indications of inappropriacy of tenor realization in all texts. It should be improved in the use of degree of formality, the realization of societal roles, status, and affective involvement. Keywords: context of situation, tenor, appropriacy.

  5. Using a Humanoid Robot to Develop a Dialogue-Based Interactive Learning Environment for Elementary Foreign Language Classrooms

    Science.gov (United States)

    Chang, Chih-Wei; Chen, Gwo-Dong

    2010-01-01

    Elementary school is the critical stage during which the development of listening comprehension and oral abilities in language acquisition occur, especially with a foreign language. However, the current foreign language instructors often adopt one-way teaching, and the learning environment lacks any interactive instructional media with which to…

  6. Language-Building Activities and Interaction Variations with Mixed-Ability ESL University Learners in a Content-Based Course

    Science.gov (United States)

    Serna Dimas, Héctor Manuel; Ruíz Castellanos, Erika

    2014-01-01

    The preparation of both language-building activities and a variety of teacher/student interaction patterns increase both oral language participation and content learning in a course of manual therapy with mixed-language ability students. In this article, the researchers describe their collaboration in a content-based course in English with English…

  7. The Practical Side of Working with Parent-Child Interaction Therapy with Preschool Children with Language Impairments

    Science.gov (United States)

    Klatte, Inge S.; Roulstone, Sue

    2016-01-01

    A common early intervention approach for preschool children with language problems is parent-child interaction therapy (PCIT). PCIT has positive effects for children with expressive language problems. It appears that speech and language therapists (SLTs) conduct this therapy in many different ways. This might be because of the variety of…

  8. Interactive natural language acquisition in a multi-modal recurrent neural architecture

    Science.gov (United States)

    Heinrich, Stefan; Wermter, Stefan

    2018-01-01

    For the complex human brain that enables us to communicate in natural language, we have gathered a good understanding of the principles underlying language acquisition and processing, knowledge about sociocultural conditions, and insights into activity patterns in the brain. However, we do not yet fully understand the behavioural and mechanistic characteristics of natural language or how mechanisms in the brain allow us to acquire and process language. In bridging the insights from behavioural psychology and neuroscience, the goal of this paper is to contribute a computational understanding of appropriate characteristics that favour language acquisition. Accordingly, we provide concepts and refinements in cognitive modelling regarding principles and mechanisms in the brain and propose a neurocognitively plausible model for embodied language acquisition from real-world interaction of a humanoid robot with its environment. In particular, the architecture consists of a continuous time recurrent neural network, where parts have different leakage characteristics and thus operate on multiple timescales for every modality and the association of the higher level nodes of all modalities into cell assemblies. The model is capable of learning language production grounded in both temporal dynamic somatosensation and vision, and features hierarchical concept abstraction, concept decomposition, multi-modal integration, and self-organisation of latent representations.
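
The multiple-timescale mechanism described here, leaky-integrator units whose time constants determine how quickly each part of the network's state changes, can be sketched with an Euler-discretised continuous-time update. The time constants and the constant drive below are illustrative assumptions, not the paper's parameters:

```python
def leaky_step(u, drive, tau):
    """One Euler step of a continuous-time RNN unit:
    u_t = (1 - 1/tau) * u_{t-1} + (1/tau) * drive.

    A large tau means slow leakage: the unit integrates its input over long
    timescales, while a small tau makes the unit track fast changes.
    """
    return [(1.0 - 1.0 / t) * ui + (1.0 / t) * d
            for ui, d, t in zip(u, drive, tau)]

tau = [2.0, 70.0]          # one fast unit, one slow unit
u = [0.0, 0.0]
for _ in range(10):        # apply a constant drive of 1.0 to both units
    u = leaky_step(u, [1.0, 1.0], tau)
print(u)  # fast unit has nearly converged to 1.0, slow unit lags far behind
```

Stacking layers with different tau values is what lets one modality hierarchy represent both rapid sensorimotor detail and slowly varying abstract context.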

  9. Analyzing the Influence of Language Proficiency on Interactive Book Search Behavior

    DEFF Research Database (Denmark)

    Bogers, Toine; Gäde, Maria; Hall, Mark M.

    2016-01-01

    English content still dominates in many online domains and information systems, despite native English speakers being a minority of its users. However, we know little about how language proficiency influences search behavior in these systems. In this paper, we describe preliminary results from an interactive IR experiment with book search behavior and examine how language skills affect this behavior. A total of 97 users from 21 different countries participated in this experiment, resulting in a rich data set including usage data as well as questionnaire feedback. Although participants reported feeling language constraints, a preliminary analysis of native and non-native English speakers indicates little to no meaningful differences in their search behavior.

  10. Using language for social interaction: Communication mechanisms promote recovery from chronic non-fluent aphasia.

    Science.gov (United States)

    Stahl, Benjamin; Mohr, Bettina; Dreyer, Felix R; Lucchese, Guglielmo; Pulvermüller, Friedemann

    2016-12-01

    Clinical research highlights the importance of massed practice in the rehabilitation of chronic post-stroke aphasia. However, while necessary, massed practice may not be sufficient for ensuring progress in speech-language therapy. Motivated by recent advances in neuroscience, it has been claimed that using language as a tool for communication and social interaction leads to synergistic effects in left perisylvian eloquent areas. Here, we conducted a crossover randomized controlled trial to determine the influence of communicative language function on the outcome of intensive aphasia therapy. Eighteen individuals with left-hemisphere lesions and chronic non-fluent aphasia each received two types of training in counterbalanced order: (i) Intensive Language-Action Therapy (ILAT, an extended form of Constraint-Induced Aphasia Therapy) embedding verbal utterances in the context of communication and social interaction, and (ii) Naming Therapy focusing on speech production per se. Both types of training were delivered with the same high intensity (3.5 h per session) and duration (six consecutive working days), with therapy materials and number of utterances matched between treatment groups. A standardized aphasia test battery revealed significantly improved language performance with ILAT, independent of when this method was administered. In contrast, Naming Therapy tended to benefit language performance only when given at the onset of the treatment, but not when applied after previous intensive training. The current results challenge the notion that massed practice alone promotes recovery from chronic post-stroke aphasia. Instead, our results demonstrate that using language for communication and social interaction increases the efficacy of intensive aphasia therapy. Copyright © 2016 The Authors. Published by Elsevier Ltd.. All rights reserved.

  11. Emotional and interactional prosody across animal communication systems: A comparative approach to the emergence of language

    Directory of Open Access Journals (Sweden)

    Piera Filippi

    2016-09-01

    Across a wide range of animal taxa, prosodic modulation of the voice can express emotional information and is used to coordinate vocal interactions between multiple individuals. Within a comparative approach to animal communication systems, I hypothesize that the ability for emotional and interactional prosody (EIP) paved the way for the evolution of linguistic prosody (and perhaps also of music), continuing to play a vital role in the acquisition of language. In support of this hypothesis, I review three research fields: (i) empirical studies on the adaptive value of EIP in nonhuman primates, mammals, songbirds, anurans and insects; (ii) the beneficial effects of EIP in scaffolding language learning and social development in human infants; (iii) the cognitive relationship between linguistic prosody and the ability for music, which has often been identified as the evolutionary precursor of language.

  12. A language-based approach to modelling and analysis of Twitter interactions

    DEFF Research Database (Denmark)

    Maggi, Alessandro; Petrocchi, Marinella; Spognardi, Angelo

    2017-01-01

    More than a personal microblogging site, Twitter has been transformed by common use to an information publishing venue, which public characters, media channels and common people daily rely on for, e.g., news reporting and consumption, marketing, and social messaging. The use of Twitter in a cooperative and interactive setting calls for the precise awareness of the dynamics regulating message spreading. In this paper, we describe Twitlang, a language for modelling the interactions among Twitter accounts. The associated operational semantics allows users to precisely determine the effects of their actions on Twitter, such as post, reply-to or delete tweets. The language is implemented in the form of a Maude interpreter, Twitlanger, which takes a language term as an input and explores the computations arising from the term. By combining the strength of Twitlanger and the Maude model checker…
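
The role of an operational semantics here, defining precisely which transitions each action induces on the global state, can be illustrated with a toy transition function for the three actions the abstract names. The state representation and rules below are simplifications invented for this sketch, not Twitlang's actual syntax or semantics:

```python
def step(state, action):
    """Apply one action (post / reply / delete) to a toy Twitter state.

    state: {"next_id": int, "tweets": {id: {"author", "text", "reply_to"}}}
    Returns a new state; illegal actions raise, mirroring how an operational
    semantics only defines transitions for well-formed configurations.
    """
    tweets = dict(state["tweets"])
    kind = action[0]
    if kind == "post":
        _, author, text = action
        tid = state["next_id"]
        tweets[tid] = {"author": author, "text": text, "reply_to": None}
        return {"next_id": tid + 1, "tweets": tweets}
    if kind == "reply":
        _, author, text, target = action
        if target not in tweets:
            raise ValueError("reply to a non-existent tweet")
        tid = state["next_id"]
        tweets[tid] = {"author": author, "text": text, "reply_to": target}
        return {"next_id": tid + 1, "tweets": tweets}
    if kind == "delete":
        _, author, target = action
        if tweets.get(target, {}).get("author") != author:
            raise ValueError("only the author may delete a tweet")
        del tweets[target]
        return {"next_id": state["next_id"], "tweets": tweets}
    raise ValueError(f"unknown action: {kind}")

s0 = {"next_id": 0, "tweets": {}}
s1 = step(s0, ("post", "alice", "hello"))
s2 = step(s1, ("reply", "bob", "hi alice", 0))
print(len(s2["tweets"]))  # 2
```

A tool like the Maude interpreter mentioned in the abstract explores all computations such rules generate from a term, rather than executing a single trace as this sketch does.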

  13. What you say is not what you get: arguing for artificial languages instead of natural languages in human robot speech interaction

    NARCIS (Netherlands)

    Mubin, O.; Bartneck, C.; Feijs, L.M.G.

    2009-01-01

    The project described hereunder focuses on the design and implementation of an "Artificial Robotic Interaction Language", where the research goal is to find a balance between the effort necessary from the user to learn a new language and the resulting benefit of optimized automatic speech recognition.

  14. ESL students learning biology: The role of language and social interactions

    Science.gov (United States)

    Jaipal, Kamini

    This study explored three aspects related to ESL students in a mainstream grade 11 biology classroom: (1) the nature of students' participation in classroom activities, (2) the factors that enhanced or constrained ESL students' engagement in social interactions, and (3) the role of language in the learning of science. Ten ESL students were observed over an eight-month period in this biology classroom. Data were collected using qualitative research methods such as participant observation, audio-recordings of lessons, field notes, semi-structured interviews, short lesson recall interviews and students' written work. The study was framed within sociocultural perspectives, particularly the social constructivist perspectives of Vygotsky (1962, 1978) and Wertsch (1991). Data were analysed with respect to the three research aspects. Firstly, the findings showed that ESL students preferred and exhibited a variety of participation practices that ranged from personal-individual to socio-interactive in nature. Both personal-individual and socio-interactive practices appeared to support science and language learning. Secondly, the findings indicated that ESL students' engagement in classroom social interactions was most likely influenced by the complex interactions between a number of competing factors at the individual, interpersonal and community/cultural levels (Rogoff, Radziszewska, & Masiello, 1995). In this study, six factors that appeared to enhance or constrain ESL students' engagement in classroom social interactions were identified. These factors were socio-cultural factors, prior classroom practice, teaching practices, affective factors, English language proficiency, and participation in the research project. Thirdly, the findings indicated that language played a significant mediational role in ESL students' learning of science. The data revealed that the learning of science terms and concepts can be explained by a functional model of language that includes: (1…

  15. Implications of Hegel's Theories of Language on Second Language Teaching

    Science.gov (United States)

    Wu, Manfred

    2016-01-01

    This article explores the implications of Hegel's theories of language on second language (L2) teaching. Three among the various concepts in Hegel's theories of language are selected. They are the crucial role of intersubjectivity; the primacy of the spoken over the written form; and the importance of the training of form or grammar. Applying…

  16. Inuit Sign Language: a contribution to sign language typology

    NARCIS (Netherlands)

    Schuit, J.; Baker, A.; Pfau, R.

    2011-01-01

    Sign language typology is a fairly new research field and typological classifications have yet to be established. For spoken languages, these classifications are generally based on typological parameters; it would thus be desirable to establish these for sign languages. In this paper, different

  17. Enhancing Children's Language Learning and Cognition Experience through Interactive Kinetic Typography

    Science.gov (United States)

    Lau, Newman M. L.; Chu, Veni H. T.

    2015-01-01

    This research aimed at investigating the method of using kinetic typography and interactive approach to conduct a design experiment for children to learn vocabularies. Typography is the unique art and technique of arranging type in order to make language visible. By adding animated movement to characters, kinetic typography expresses language…

  18. Music and Sign Language to Promote Infant and Toddler Communication and Enhance Parent-Child Interaction

    Science.gov (United States)

    Colwell, Cynthia; Memmott, Jenny; Meeker-Miller, Anne

    2014-01-01

    The purpose of this study was to determine the efficacy of using music and/or sign language to promote early communication in infants and toddlers (6-20 months) and to enhance parent-child interactions. Three groups used for this study were pairs of participants (care-giver(s) and child) assigned to each group: 1) Music Alone 2) Sign Language…

  19. Language Choice and Identity Construction in Peer Interactions: Insights from a Multilingual University in Hong Kong

    Science.gov (United States)

    Gu, Mingyue

    2011-01-01

    Informed by linguistic ecological theory and the notion of identity, this study investigates language uses and identity construction in interactions among students with different linguistic and cultural backgrounds in a multilingual university. Individual and focus-group interviews were conducted with two groups of students: Hong Kong (HK) and…

  20. Language of the Legal Process: An Analysis of Interactions in the "Syariah" Court

    Science.gov (United States)

    Hashim, Azirah; Hassan, Norizah

    2011-01-01

    This study examines interactions from trials in the Syariah court in Malaysia. It focuses on the types of questioning, the choice of language and the linguistic resources employed in this particular context. In the discourse of law, questioning has been a prominent concern particularly in cross-examination and can be considered one of the key…

  1. Do maternal interaction and early language predict phonological awareness in 3- to 4-year-olds?

    NARCIS (Netherlands)

    Silvén, M.; Niemi, P.; Voeten, M.J.M.

    2002-01-01

    The present study reports longitudinal data on how phonological awareness is affected by mother-child interaction and the child's language development. Sixty-six Finnish children were videotaped at 12 and 24 months of age with their mother, during joint play episodes, to assess maternal sensitivity

  2. Project-Based Method as an Effective Means of Interdisciplinary Interaction While Teaching a Foreign Language

    Science.gov (United States)

    Bondar, Irina Alekseevna; Kulbakova, Renata Ivanovna; Svintorzhitskaja, Irina Andreevna; Pilat, Larisa Pavlovna; Zavrumov, Zaur Aslanovich

    2016-01-01

The article explains how to use a project-based method as an effective means of interdisciplinary interaction when teaching a foreign language, using the example of the Institute of Service, Tourism and Design (branch) of the North Caucasus Federal University (Pyatigorsk, Stavropol Territory, Russia). The article sets out the main objectives of the…

  3. Innovative Second Language Speaking Practice with Interactive Videos in a Rich Internet Application Environment

    Science.gov (United States)

    Pereira, Juan A.; Sanz-Santamaría, Silvia; Montero, Raúl; Gutiérrez, Julián

    2012-01-01

    Attaining a satisfactory level of oral communication in a second language is a laborious process. In this action research paper we describe a new method applied through the use of interactive videos and the Babelium Project Rich Internet Application (RIA), which allows students to practice speaking skills through a variety of exercises. We present…

  4. Developing Interactional Competence by Using TV Series in "English as an Additional Language" Classrooms

    Science.gov (United States)

    Sert, Olcay

    2009-01-01

This paper uses a combined methodology to analyse the conversations in supplementary audio-visual materials to be implemented in language teaching classrooms in order to enhance the Interactional Competence (IC) of the learners. Based on a corpus of 90,000 words (Coupling Corpus), the author tries to reveal the potential of using TV series in …

  5. Researching Online Foreign Language Interaction and Exchange: Theories, Methods and Challenges. Telecollaboration in Education. Volume 3

    Science.gov (United States)

    Dooly, Melinda; O'Dowd, Robert

    2012-01-01

    This book provides an accessible introduction to some of the methods and theoretical approaches for investigating foreign language (FL) interaction and exchange in online environments. Research approaches which can be applied to Computer-Mediated Communication (CMC) are outlined, followed by discussion of the way in which tools and techniques for…

  6. Mutually Beneficial Foreign Language Learning: Creating Meaningful Interactions through Video-Synchronous Computer-Mediated Communication

    Science.gov (United States)

    Kato, Fumie; Spring, Ryan; Mori, Chikako

    2016-01-01

    Providing learners of a foreign language with meaningful opportunities for interactions, specifically with native speakers, is especially challenging for instructors. One way to overcome this obstacle is through video-synchronous computer-mediated communication tools such as Skype software. This study reports quantitative and qualitative data from…

  7. Emergent Communities of Practice: Secondary Schools' Interaction with Primary School Foreign Language Teaching and Learning

    Science.gov (United States)

    Evans, Michael; Fisher, Linda

    2012-01-01

    The aim of this paper is to give an account of the response of secondary schools to the primary school foreign language teaching initiative recently introduced by the UK government. The paper also explores defining features of the process of cross-phase interaction and the role that knowledge and collaborative practice plays in generating change…

  8. Virtual Interaction through Video-Web Communication: A Step towards Enriching and Internationalizing Language Learning Programs

    Science.gov (United States)

    Jauregi, Kristi; Banados, Emerita

    2008-01-01

    This paper describes an intercontinental project with the use of interactive tools, both synchronous and asynchronous, which was set up to internationalize academic learning of Spanish language and culture. The objective of this case study was to investigate whether video-web communication tools can contribute to enriching the quality of foreign…

  9. Approaches for Language Identification in Mismatched Environments

    Science.gov (United States)

    2016-09-08

Keywords: domain adaptation, unsupervised learning, deep neural networks, bottleneck features. Spoken language identification (LID) is the process of identifying the language in a spoken speech utterance. In recent years, great improvements in LID system performance have been seen… be the case in practice. Lastly, we conduct an out-of-set experiment where VoA data from 9 other languages (Amharic, Creole, Croatian, English…

  10. Spanish as a Second Language when L1 Is Quechua: Endangered Languages and the SLA Researcher

    Science.gov (United States)

    Kalt, Susan E.

    2012-01-01

Spanish is one of the most widely spoken languages in the world. Quechua is the largest indigenous language family to constitute the first language (L1) of second language (L2) Spanish speakers. Despite the sheer number of speakers and typologically interesting contrasts, Quechua-Spanish second language acquisition is a nearly untapped research area,…

  11. Syllable Frequency and Spoken Word Recognition: An Inhibitory Effect.

    Science.gov (United States)

    González-Alvarez, Julio; Palomar-García, María-Angeles

    2016-08-01

    Research has shown that syllables play a relevant role in lexical access in Spanish, a shallow language with a transparent syllabic structure. Syllable frequency has been shown to have an inhibitory effect on visual word recognition in Spanish. However, no study has examined the syllable frequency effect on spoken word recognition. The present study tested the effect of the frequency of the first syllable on recognition of spoken Spanish words. A sample of 45 young adults (33 women, 12 men; M = 20.4, SD = 2.8; college students) performed an auditory lexical decision on 128 Spanish disyllabic words and 128 disyllabic nonwords. Words were selected so that lexical and first syllable frequency were manipulated in a within-subject 2 × 2 design, and six additional independent variables were controlled: token positional frequency of the second syllable, number of phonemes, position of lexical stress, number of phonological neighbors, number of phonological neighbors that have higher frequencies than the word, and acoustical durations measured in milliseconds. Decision latencies and error rates were submitted to linear mixed models analysis. Results showed a typical facilitatory effect of the lexical frequency and, importantly, an inhibitory effect of the first syllable frequency on reaction times and error rates. © The Author(s) 2016.
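The 2 × 2 within-subject manipulation described in this abstract (lexical frequency × first-syllable frequency) can be made concrete with a toy computation. The sketch below uses invented latency numbers and plain cell means, not the linear mixed models the authors actually fitted; it only illustrates the direction of the two reported effects.

```python
# Illustrative 2x2 within-subject cell means for decision latencies (ms).
# All numbers are synthetic; the study itself used linear mixed models.
from statistics import mean

# (lexical_freq, first_syllable_freq) -> simulated reaction times in ms
latencies = {
    ("high", "high"): [720, 735, 710],  # frequent word, frequent 1st syllable
    ("high", "low"):  [690, 700, 695],  # frequent word, rare 1st syllable
    ("low",  "high"): [820, 835, 810],
    ("low",  "low"):  [780, 790, 785],
}

cell_means = {cond: mean(rts) for cond, rts in latencies.items()}

# Facilitatory lexical-frequency effect: high-frequency words are faster.
lex_effect = mean([cell_means[("low", s)] for s in ("high", "low")]) \
           - mean([cell_means[("high", s)] for s in ("high", "low")])

# Inhibitory syllable-frequency effect: frequent first syllables are slower.
syl_effect = mean([cell_means[(l, "high")] for l in ("high", "low")]) \
           - mean([cell_means[(l, "low")] for l in ("high", "low")])

print(lex_effect > 0, syl_effect > 0)  # both effects point the reported way
```

With these toy numbers both differences come out positive, mirroring the paper's pattern of a lexical-frequency facilitation alongside a first-syllable-frequency inhibition.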

  12. Using State Space Grids to analyze the dynamics of teacher-student interactions in foreign language classrooms

    NARCIS (Netherlands)

    Smit, Nienke; de Bot, Cornelis; van de Grift, Wim

    2016-01-01

    Many scholars have stressed the importance of the role of interaction in the language learning process (Kramsch, 1986; Van Lier, 1996; Ellis, 2000; Walsh, 2011). However, studies on classroom interaction between foreign language (FL) teachers and a group of FL learners are rare, because they are

  13. Concurrent word generation and motor performance: further evidence for language-motor interaction.

    Directory of Open Access Journals (Sweden)

    Amy D Rodriguez

Full Text Available Embodied/modality-specific theories of semantic memory propose that sensorimotor representations play an important role in perception and action. A large body of evidence supports the notion that concepts involving human motor action (i.e., semantic-motor representations) are processed in both language and motor regions of the brain. However, most studies have focused on perceptual tasks, leaving unanswered questions about language-motor interaction during production tasks. Thus, we investigated the effects of shared semantic-motor representations on concurrent language and motor production tasks in healthy young adults, manipulating the semantic task (motor-related vs. nonmotor-related words) and the motor task (i.e., standing still and finger-tapping). In Experiment 1 (n = 20), we demonstrated that motor-related word generation was sufficient to affect postural control. In Experiment 2 (n = 40), we demonstrated that motor-related word generation was sufficient to facilitate word generation and finger tapping. We conclude that engaging semantic-motor representations can have a reciprocal influence on motor and language production. Our study provides additional support for functional language-motor interaction, as well as embodied/modality-specific theories.

  14. Quality of caregiver-child play interactions with toddlers born preterm and full term: Antecedents and language outcome.

    Science.gov (United States)

    Loi, Elizabeth C; Vaca, Kelsey E C; Ashland, Melanie D; Marchman, Virginia A; Fernald, Anne; Feldman, Heidi M

    2017-12-01

Preterm birth may leave long-term effects on the interactions between caregivers and children. Language skills are sensitive to the quality of caregiver-child interactions. Compare the quality of caregiver-child play interactions in toddlers born preterm (PT) and full term (FT) at age 22 months (corrected for degree of prematurity) and evaluate the degree of association between caregiver-child interactions, antecedent demographic and language factors, and subsequent language skill. A longitudinal descriptive cohort study. 39 PT and 39 FT toddlers individually matched on sex and socioeconomic status (SES). The outcome measures were dimensions of caregiver-child interactions, rated from a videotaped play session at age 22 months in relation to receptive language assessments at ages 18 and 36 months. Caregiver intrusiveness was greater in the PT than FT group. A composite score of child interactional behaviors was associated with a composite score of caregiver interactional behaviors. The caregiver composite measure was associated with later receptive vocabulary at 36 months. PT-FT group membership did not moderate the association between caregiver interactional behavior and later receptive vocabulary. The quality of caregiver interactional behavior had similar associations with concurrent child interactional behavior and subsequent language outcome in the PT and FT groups. Greater caregiver sensitivity/responsiveness, verbal elaboration, and less intrusiveness support receptive language development in typically developing toddlers and toddlers at risk for language difficulty. Copyright © 2017 Elsevier B.V. All rights reserved.

  15. Remote Data Exploration with the Interactive Data Language (IDL)

    Science.gov (United States)

    Galloy, Michael

    2013-01-01

    A difficulty for many NASA researchers is that often the data to analyze is located remotely from the scientist and the data is too large to transfer for local analysis. Researchers have developed the Data Access Protocol (DAP) for accessing remote data. Presently one can use DAP from within IDL, but the IDL-DAP interface is both limited and cumbersome. A more powerful and user-friendly interface to DAP for IDL has been developed. Users are able to browse remote data sets graphically, select partial data to retrieve, import that data and make customized plots, and have an interactive IDL command line session simultaneous with the remote visualization. All of these IDL-DAP tools are usable easily and seamlessly for any IDL user. IDL and DAP are both widely used in science, but were not easily used together. The IDL DAP bindings were incomplete and had numerous bugs that prevented their serious use. For example, the existing bindings did not read DAP Grid data, which is the organization of nearly all NASA datasets currently served via DAP. This project uniquely provides a fully featured, user-friendly interface to DAP from IDL, both from the command line and a GUI application. The DAP Explorer GUI application makes browsing a dataset more user-friendly, while also providing the capability to run user-defined functions on specified data. Methods for running remote functions on the DAP server were investigated, and a technique for accomplishing this task was decided upon.
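The partial-data retrieval that DAP enables, as mentioned above, is driven by constraint expressions appended to the dataset URL. The sketch below builds such a URL in Python; the server address, dataset, and variable names are hypothetical, and this illustrates the general DAP hyperslab syntax rather than the project's IDL bindings.

```python
# Build an OPeNDAP/DAP constraint-expression URL for partial data retrieval.
# The server, dataset, and variable names here are hypothetical examples.
from urllib.parse import quote

def dap_subset_url(base, variable, *slices):
    """Return a DAP URL requesting variable[start:stride:stop] hyperslabs."""
    constraint = variable + "".join(
        f"[{start}:{stride}:{stop}]" for (start, stride, stop) in slices
    )
    # Keep the bracket/colon characters of the constraint syntax unescaped.
    return f"{base}.dods?{quote(constraint, safe='[]:')}"

url = dap_subset_url(
    "http://example.org/opendap/sst.nc",   # hypothetical dataset
    "sea_surface_temperature",
    (0, 1, 10),     # first 11 time steps
    (100, 2, 200),  # every other latitude row in a window
)
print(url)
# -> http://example.org/opendap/sst.nc.dods?sea_surface_temperature[0:1:10][100:2:200]
```

A client that speaks DAP (such as the IDL tools described in the record) sends exactly this kind of request, so only the selected hyperslab, not the whole remote dataset, crosses the network.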

  16. Conceptual Framework: Development of Interactive Reading Malay Language Learning System (I-ReaMaLLS

    Directory of Open Access Journals (Sweden)

    Ismail Nurulisma

    2018-01-01

Full Text Available Reading is very important for accessing knowledge, and reading skills start at the preschool level whatever the language. At present, many preschool children are still unable to recognize letters or even words, which leads to difficulties in reading. There is therefore a need for intervention in reading to overcome such problems, and technologies have been adopted to enhance learning skills, especially learning to read among preschool children. Phonology is one of the factors to consider in ensuring a smooth transition into reading: a phonological approach enables the beginning learner to learn to read more easily, for instance when learning to read the Malay language. Learning to read Malay can be supported by multimedia technology to enhance preschool children's learning. Thus, an interactive system is proposed via the development of an interactive reading Malay language learning system, called I-ReaMaLLS. As part of the development of I-ReaMaLLS, this paper focuses on the development of a conceptual framework for an interactive reading Malay language learning system (I-ReaMaLLS). I-ReaMaLLS is a voice-based system that facilitates preschool learners in learning to read the Malay language. The conceptual framework for developing I-ReaMaLLS is based on an initial study conducted via literature review and observation of preschool children aged 5–6 years. As a result of the initial study, research objectives have been affirmed that contribute to the design of the conceptual framework for the development of I-ReaMaLLS.

  17. Structural borrowing: The case of Kenyan Sign Language (KSL) and ...

    African Journals Online (AJOL)

Kenyan Sign Language (KSL) is a visual gestural language used by members of the deaf community in Kenya. Kiswahili, on the other hand, is a Bantu language that is used as the national language of Kenya. The two are worlds apart, one being a spoken language and the other a signed language, and thus their "… basic ...

  18. The communicative teaching task-interactive for teaching Spanish as a foreign language

    Directory of Open Access Journals (Sweden)

    Liliana Valdés Aragón

    2005-06-01

Full Text Available The teaching tasks presented in this article respond to a conception of interactive language learning with strong cognitive and humanist bases. Each main task contains a group of supporting tasks that offer students opportunities to interact with the language they are learning through problem solving that demands their attention to content more than to form. The tasks are directed at the formation of values, the protection of the environment, communicative competence, declarative, procedural and attitudinal knowledge, and the development of learning strategies that favor interaction, the exchange of meaning, reflection, cooperation, socialization, and pleasant learning, with a strong cultural component reflecting the life and history of all the peoples of the world.

  19. FORMATION OF STUDENTS’ FOREIGN LANGUAGE COMPETENCE IN THE INFORMATIONAL FIELD OF CROSS CULTURAL INTERACTION

    Directory of Open Access Journals (Sweden)

    Vitaly Vyacheslavovich Tomin

    2015-09-01

Full Text Available Knowledge of foreign languages is becoming an integral feature of a competitive personality: the ability to engage in cross-cultural communication and productive cross-cultural interaction, characterized by an adequate degree of tolerance and multi-ethnic competence, and the capacity for cross-cultural adaptation, critical thinking and creativity. However, the concept of foreign language competence still has no clear, unambiguous definition, which indicates the complexity and diversity of the phenomenon: an integrative, practice-oriented outcome of the wish and ability for intercultural communication. A variety of requirements, conditions, principles, objectives, means and forms of forming foreign language competence are discussed, among which special attention is paid to non-traditional forms of practical training and to the information field in cross-cultural interaction. The feasibility of their application, which allows a whole series of educational and teaching tasks to be solved more efficiently, is explained. The term «information field» in cross-cultural interaction is clarified: a cross-section of the «sections» of knowledge, skills, and experience inherent in every individual, arising within a given educational framework and forming a communication channel. Indicators of the formation of foreign language competence and ways to improve its effectiveness are presented.

  20. Using Language Games as a Way to Investigate Interactional Engagement in Human-Robot Interaction

    DEFF Research Database (Denmark)

    Jensen, L. C.

    2016-01-01

    how students' engagement with a social robot can be systematically investigated and evaluated. For this purpose, I present a small user study in which a robot plays a word formation game with a human, in which engagement is determined by means of an analysis of the 'language games' played...

  1. Early relations between language development and the quality of mother-child interaction in very-low-birth-weight children.

    Science.gov (United States)

    Stolt, S; Korja, R; Matomäki, J; Lapinleimu, H; Haataja, L; Lehtonen, L

    2014-05-01

It is not clearly understood how the quality of early mother-child interaction influences language development in very-low-birth-weight (VLBW) children. We aim to analyze associations between early language and the quality of mother-child interaction, and the predictive value of the features of early mother-child interaction for language development at 24 months of corrected age in VLBW children. A longitudinal prospective follow-up study design was used. The participants were 28 VLBW children and 34 full-term controls. Language development was measured using different methods at 6, 12 and at 24 months of age. The quality of mother-child interaction was assessed using the PC-ERA method at 6 and at 12 months of age. Associations between the features of early interaction and language development were different in the groups of VLBW and full-term children. There were no significant correlations between the features of mother-child interaction and language skills when measured at the same age in the VLBW group. Significant longitudinal correlations were detected in the VLBW group, especially if the quality of early interactions was measured at six months and language skills at 2 years of age. However, when the predictive value of the features of early interactions for later poor language performance was analyzed separately, the features of early interaction predicted language skills in the VLBW group only weakly. Biological factors may influence language development more in the VLBW children than in the full-term children. The results also underline the role of maternal and dyadic factors in early interactions. Copyright © 2014 Elsevier Ltd. All rights reserved.

  2. English language learners with learning disabilities interacting in a science class within an inclusion setting

    Science.gov (United States)

    Ayala, Vivian Luz

In today's schools there are by far more students identified with learning disabilities (LD) than with any other disability. The U.S. Department of Education reported for 1997-98 that 38.13% of students in our nation's schools have LD (Smith, Polloway, Patton, & Dowdy, 2001; U.S. Department of Education, 1999). Of those, 1,198,200 are considered ELLs with LD (Baca & Cervantes, 1998). These figures, which represent an increase, evidence the need to provide these students with educational experiences geared to address both their academic and language needs (Ortiz, 1997; Ortiz & Garcia, 1995). English language learners with LD must be provided with experiences in the least restrictive environment (LRE) and must be able to share the same kind of social and academic experiences as students from the general population (Etscheidt & Bartlett, 1999; Lloyd, Kameenui, & Chard, 1997). The purpose of this research was to conduct a detailed qualitative study of classroom interactions around the science curriculum in order to foster the understanding of content and facilitate the acquisition of English as a second language (Cummins, 2000; Echevarria, Vogt, & Short, 2000). This study was grounded in the theories of socioconstructivism, second language acquisition, comprehensible input, and classroom interactions. The participants of the study were fourth- and fifth-grade ELLs with LD in an elementary school bilingual inclusive science setting. Data was collected through observations, semi-structured interviews (students and teacher), video and audio taping, field notes, document analysis, and the Classroom Observation Schedule (COS). The transcriptions of the video and audio tapes were coded to highlight emergent patterns in the types of interactions and language used by the participants. The findings of the study are intended to provide information for teachers of ELLs with LD about the implications of using classroom interactions.

  3. Terminology for the body in social interaction, as appearing in papers published in the journal 'Research on Language and Social Interaction', 1987-2013

    DEFF Research Database (Denmark)

    Nevile, Maurice Richard

    2016-01-01

This is a list of terms referring generally to the body in descriptions and analyses of social interaction, as used by authors in papers published in ROLSI. The list includes over 200 items, grouped according to common phrasing and within alphabetical order. The list was compiled in preparation for the review paper: Nevile, M. (2015) The embodied turn in research on language and social interaction. Research on Language and Social Interaction, 48(2): 121-151.

  4. Revising the worksheet with L3: a language and environment foruser-script interaction

    Energy Technology Data Exchange (ETDEWEB)

    Hohn, Michael H.

    2008-01-22

This paper describes a novel approach to the parameter and data handling issues commonly found in experimental scientific computing and scripting in general. The approach is based on the familiar combination of scripting language and user interface, but using a language expressly designed for user interaction and convenience. The L3 language combines programming facilities of procedural and functional languages with the persistence and need-based evaluation of data flow languages. It is implemented in Python, has access to all Python libraries, and retains almost complete source code compatibility to allow simple movement of code between the languages. The worksheet interface uses metadata produced by L3 to provide selection of values through the script itself and allows users to dynamically evolve scripts without re-running the prior versions. Scripts can be edited via text editors or manipulated as structures on a drawing canvas. Computed values are valid scripts and can be used further in other scripts via simple copy-and-paste operations. The implementation is freely available under an open-source license.
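The need-based evaluation that L3 borrows from data-flow languages can be imitated in a few lines of Python. The `Cell` class below is an illustrative invention, not L3's actual API: a cell computes its value only when demanded and caches the result until an input is invalidated.

```python
# Minimal need-based (lazy) evaluation, in the spirit of data-flow languages.
# Illustrative sketch only; the names here are not L3's actual API.
class Cell:
    def __init__(self, compute, *inputs):
        self.compute = compute   # function of the input cells' values
        self.inputs = inputs     # upstream Cell objects
        self._value = None
        self._valid = False

    def invalidate(self):
        """Mark the cached value stale so the next demand recomputes it."""
        self._valid = False

    def value(self):
        if not self._valid:      # recompute only on demand
            self._value = self.compute(*(c.value() for c in self.inputs))
            self._valid = True
        return self._value

# A constant source cell and a cell derived from it.
source = Cell(lambda: 21)
doubled = Cell(lambda x: 2 * x, source)

print(doubled.value())  # -> 42 (computed now, then cached)
source.compute = lambda: 100
source.invalidate(); doubled.invalidate()
print(doubled.value())  # -> 200 (recomputed because an input changed)
```

This demand-driven caching is what lets a worksheet-style interface evolve a script interactively without re-running everything that came before.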

  5. Mobile Information Access with Spoken Query Answering

    DEFF Research Database (Denmark)

    Brøndsted, Tom; Larsen, Henrik Legind; Larsen, Lars Bo

    2006-01-01

    window focused over the part which most likely contains an answer to the query. The two systems are integrated into a full spoken query answering system. The prototype can answer queries and questions within the chosen football (soccer) test domain, but the system has the flexibility for being ported...

  6. SPOKEN AYACUCHO QUECHUA, UNITS 11-20.

    Science.gov (United States)

    PARKER, GARY J.; SOLA, DONALD F.

    THE ESSENTIALS OF AYACUCHO GRAMMAR WERE PRESENTED IN THE FIRST VOLUME OF THIS SERIES, SPOKEN AYACUCHO QUECHUA, UNITS 1-10. THE 10 UNITS IN THIS VOLUME (11-20) ARE INTENDED FOR USE IN AN INTERMEDIATE OR ADVANCED COURSE, AND PRESENT THE STUDENT WITH LENGTHIER AND MORE COMPLEX DIALOGS, CONVERSATIONS, "LISTENING-INS," AND DICTATIONS AS WELL…

  7. SPOKEN CUZCO QUECHUA, UNITS 7-12.

    Science.gov (United States)

    SOLA, DONALD F.; AND OTHERS

    THIS SECOND VOLUME OF AN INTRODUCTORY COURSE IN SPOKEN CUZCO QUECHUA ALSO COMPRISES ENOUGH MATERIAL FOR ONE INTENSIVE SUMMER SESSION COURSE OR ONE SEMESTER OF SEMI-INTENSIVE INSTRUCTION (120 CLASS HOURS). THE METHOD OF PRESENTATION IS ESSENTIALLY THE SAME AS IN THE FIRST VOLUME WITH FURTHER CONTRASTIVE, LINGUISTIC ANALYSIS OF ENGLISH-QUECHUA…

  8. SPOKEN COCHABAMBA QUECHUA, UNITS 13-24.

    Science.gov (United States)

    LASTRA, YOLANDA; SOLA, DONALD F.

    UNITS 13-24 OF THE SPOKEN COCHABAMBA QUECHUA COURSE FOLLOW THE GENERAL FORMAT OF THE FIRST VOLUME (UNITS 1-12). THIS SECOND VOLUME IS INTENDED FOR USE IN AN INTERMEDIATE OR ADVANCED COURSE AND INCLUDES MORE COMPLEX DIALOGS, CONVERSATIONS, "LISTENING-INS," AND DICTATIONS, AS WELL AS GRAMMAR AND EXERCISE SECTIONS COVERING ADDITIONAL…

  9. SPOKEN AYACUCHO QUECHUA. UNITS 1-10.

    Science.gov (United States)

    PARKER, GARY J.; SOLA, DONALD F.

    THIS BEGINNING COURSE IN AYACUCHO QUECHUA, SPOKEN BY ABOUT A MILLION PEOPLE IN SOUTH-CENTRAL PERU, WAS PREPARED TO INTRODUCE THE PHONOLOGY AND GRAMMAR OF THIS DIALECT TO SPEAKERS OF ENGLISH. THE FIRST OF TWO VOLUMES, IT SERVES AS A TEXT FOR A 6-WEEK INTENSIVE COURSE OF 20 CLASS HOURS A WEEK. THE AUTHORS COMPARE AND CONTRAST SIGNIFICANT FEATURES OF…

  10. A Grammar of Spoken Brazilian Portuguese.

    Science.gov (United States)

    Thomas, Earl W.

    This is a first-year text of Portuguese grammar based on the Portuguese of moderately educated Brazilians from the area around Rio de Janeiro. Spoken idiomatic usage is emphasized. An important innovation is found in the presentation of verb tenses; they are presented in the order in which the native speaker learns them. The text is intended to…

  11. Towards Affordable Disclosure of Spoken Word Archives

    NARCIS (Netherlands)

    Ordelman, Roeland J.F.; Heeren, W.F.L.; Huijbregts, M.A.H.; Hiemstra, Djoerd; de Jong, Franciska M.G.; Larson, M; Fernie, K; Oomen, J; Cigarran, J.

    2008-01-01

    This paper presents and discusses ongoing work aiming at affordable disclosure of real-world spoken word archives in general, and in particular of a collection of recorded interviews with Dutch survivors of World War II concentration camp Buchenwald. Given such collections, the least we want to be

  12. Towards Affordable Disclosure of Spoken Heritage Archives

    NARCIS (Netherlands)

    Larson, M; Ordelman, Roeland J.F.; Heeren, W.F.L.; Fernie, K; de Jong, Franciska M.G.; Huijbregts, M.A.H.; Oomen, J; Hiemstra, Djoerd

    2009-01-01

    This paper presents and discusses ongoing work aiming at affordable disclosure of real-world spoken heritage archives in general, and in particular of a collection of recorded interviews with Dutch survivors of World War II concentration camp Buchenwald. Given such collections, we at least want to

  13. Mapping Students' Spoken Conceptions of Equality

    Science.gov (United States)

    Anakin, Megan

    2013-01-01

This study expands contemporary theorising about students' conceptions of equality. A nationally representative sample of New Zealand students were asked to provide a spoken numerical response and an explanation as they solved an arithmetic additive missing number problem. Students' responses were conceptualised as acts of communication and…

  14. Parental mode of communication is essential for speech and language outcomes in cochlear implanted children

    DEFF Research Database (Denmark)

    Percy-Smith, Lone; Cayé-Thomasen, Per; Breinegaard, Nina

    2010-01-01

The present study demonstrates a very strong effect of the parental communication mode on the auditory capabilities and speech/language outcome for cochlear implanted children. The children exposed to spoken language had higher odds of scoring high in all tests applied, and the findings suggest a very clear benefit of spoken language communication with a cochlear implanted child.

  15. Business Spoken English Learning Strategies for Chinese Enterprise Staff

    Institute of Scientific and Technical Information of China (English)

    Han Li

    2013-01-01

This study addresses the issue of promoting effective Business Spoken English among enterprise staff in China. It aims to assess spoken English learning methods and to identify the difficulties of oral English expression in the business area. It also provides strategies for enhancing enterprise staff's level of Business Spoken English.

  16. Language and Literacy: The Case of India.

    Science.gov (United States)

    Sridhar, Kamal K.

    Language and literacy issues in India are reviewed in terms of background, steps taken to combat illiteracy, and some problems associated with literacy. The following facts are noted: India has 106 languages spoken by more than 685 million people, there are several minor script systems, a major language has different dialects, a language may use…

  17. Social Interaction in Infants' Learning of Second-Language Phonetics: An Exploration of Brain-Behavior Relations.

    Science.gov (United States)

    Conboy, Barbara T; Brooks, Rechele; Meltzoff, Andrew N; Kuhl, Patricia K

    2015-01-01

    Infants learn phonetic information from a second language with live-person presentations, but not television or audio-only recordings. To understand the role of social interaction in learning a second language, we examined infants' joint attention with live, Spanish-speaking tutors and used a neural measure of phonetic learning. Infants' eye-gaze behaviors during Spanish sessions at 9.5-10.5 months of age predicted second-language phonetic learning, assessed by an event-related potential measure of Spanish phoneme discrimination at 11 months. These data suggest a powerful role for social interaction at the earliest stages of learning a new language.

  18. Interactive Technologies of Foreign Language Teaching in Future Marine Specialists’ Training: from Experience of the Danube River Basin Universities

    Directory of Open Access Journals (Sweden)

    Olga Demchenko

    2015-08-01

Full Text Available The article deals with the investigation of interactive technologies of foreign language teaching in future marine specialists’ training in the Danube river basin universities. The author gives definitions of the most popular interactive technologies aimed at forming communicative competence as a significant component of future mariners’ key competencies. A typology and analysis of some interactive technologies of foreign language teaching in future marine specialists’ training are provided.

  19. Training of Future Civil Engineers in the Area of Foreign Language: Interaction of Educational Paradigms

    Directory of Open Access Journals (Sweden)

    Nordman Irina

    2017-01-01

Full Text Available The article deals with problems of engineers’ training in higher school. Problems in the organization of classroom and students’ independent work, in the area of evaluation and control, as well as in teaching resources and training methods are pointed out. The role of foreign language in the training of future specialists in the field of construction is highlighted. The necessity of using the settings of traditional and innovative educational paradigms when training students in the specialization “Industrial and Civil Construction” in the discipline “Foreign Language” is proved. The interaction of traditional and innovative teaching resources, training methods, as well as evaluation and control means is shown. The conclusions on the effectiveness of the interaction of traditional and innovative educational concepts when teaching a foreign language in technical universities are drawn.

  20. Evaluating spoken dialogue systems according to de-facto standards: A case study

    NARCIS (Netherlands)

    Möller, S.; Smeele, P.; Boland, H.; Krebber, J.

    2007-01-01

    In the present paper, we investigate the validity and reliability of de-facto evaluation standards, defined for measuring or predicting the quality of the interaction with spoken dialogue systems. Two experiments have been carried out with a dialogue system for controlling domestic devices. During

  1. Phonological memory in sign language relies on the visuomotor neural system outside the left hemisphere language network.

    Science.gov (United States)

    Kanazawa, Yuji; Nakamura, Kimihiro; Ishii, Toru; Aso, Toshihiko; Yamazaki, Hiroshi; Omori, Koichi

    2017-01-01

    Sign language is an essential medium for everyday social interaction for deaf people and plays a critical role in verbal learning. In particular, language development in those people should heavily rely on the verbal short-term memory (STM) via sign language. Most previous studies compared neural activations during signed language processing in deaf signers and those during spoken language processing in hearing speakers. For sign language users, it thus remains unclear how visuospatial inputs are converted into the verbal STM operating in the left-hemisphere language network. Using functional magnetic resonance imaging, the present study investigated neural activation while bilinguals of spoken and signed language were engaged in a sequence memory span task. On each trial, participants viewed a nonsense syllable sequence presented either as written letters or as fingerspelling (4-7 syllables in length) and then held the syllable sequence for 12 s. Behavioral analysis revealed that participants relied on phonological memory while holding verbal information regardless of the type of input modality. At the neural level, this maintenance stage broadly activated the left-hemisphere language network, including the inferior frontal gyrus, supplementary motor area, superior temporal gyrus and inferior parietal lobule, for both letter and fingerspelling conditions. Interestingly, while most participants reported that they relied on phonological memory during maintenance, direct comparisons between letters and fingers revealed strikingly different patterns of neural activation during the same period. Namely, the effortful maintenance of fingerspelling inputs relative to letter inputs activated the left superior parietal lobule and dorsal premotor area, i.e., brain regions known to play a role in visuomotor analysis of hand/arm movements. 
These findings suggest that the dorsal visuomotor neural system subserves verbal learning via sign language by relaying gestural inputs to

  2. Human processor modelling language (HPML): Estimate working memory load through interaction

    OpenAIRE

    Geisler, J.; Scheben, C.

    2007-01-01

    To operate machines over their user interface may cause high load on human's working memory. This load can decrease performance in the working task significantly if this task is a cognitive challenging one, e. g. diagnosis. With the »Human Processor Modelling Language« (HPML) the interaction activity can be modelled with a directed graph. From such models a condensed indicator value for working memory load can be estimated. Thus different user interface solutions can get compared with respect...

  3. Narrative skills in deaf children who use spoken English: Dissociations between macro and microstructural devices.

    Science.gov (United States)

Jones, A C; Toscano, E; Botting, N; Marshall, C R; Atkinson, J R; Denmark, T; Herman, R; Morgan, G

    2016-12-01

Previous research has highlighted that deaf children acquiring spoken English have difficulties in narrative development relative to their hearing peers, both in terms of macro-structure and in the use of micro-structural devices. The majority of previous research focused on narrative tasks designed for hearing children that depend on good receptive language skills. The current study compared narratives of 6- to 11-year-old deaf children who use spoken English (N=59) with those of hearing peers matched for age and non-verbal intelligence. To examine the role of general language abilities, single-word vocabulary was also assessed. Narratives were elicited by the retelling of a story presented non-verbally in video format. Results showed that deaf and hearing children had equivalent macro-structure skills, but the deaf group showed poorer performance on micro-structural components. Furthermore, the deaf group gave less detailed responses to inferencing probe questions, indicating poorer understanding of the story's underlying message. For deaf children, micro-level devices correlated most strongly with the vocabulary measure. These findings suggest that deaf children, despite spoken language delays, are able to convey the main elements of content and structure in narrative but have greater difficulty in using grammatical devices more dependent on finer linguistic and pragmatic skills. Crown Copyright © 2016. Published by Elsevier Ltd. All rights reserved.

  4. Spoken sentence production in college students with dyslexia: working memory and vocabulary effects.

    Science.gov (United States)

    Wiseheart, Rebecca; Altmann, Lori J P

    2018-03-01

Individuals with dyslexia demonstrate syntactic difficulties on tasks of language comprehension, yet little is known about spoken language production in this population. To investigate whether spoken sentence production in college students with dyslexia is less proficient than in typical readers, and to determine whether group differences can be attributed to cognitive differences between groups. Fifty-one college students with and without dyslexia were asked to produce sentences from stimuli comprising a verb and two nouns. Verb types varied in argument structure and morphological form, and nouns varied in animacy. Outcome measures were precision (measured by fluency, grammaticality and completeness) and efficiency (measured by response times). Vocabulary and working memory tests were also administered and used as predictors of sentence production performance. Relative to non-dyslexic peers, students with dyslexia responded significantly more slowly and produced sentences that were significantly less precise in terms of fluency, grammaticality and completeness. The primary predictors of precision and efficiency were working memory, which differed between groups, and vocabulary, which did not. College students with dyslexia were significantly less facile and flexible on this spoken sentence-production task than typical readers, which is consistent with previous studies of school-age children with dyslexia. Group differences in performance were traced primarily to limited working memory and were somewhat mitigated by strong vocabulary. © 2017 Royal College of Speech and Language Therapists.

  5. INTERACTION LEVEL OF SPEAKING ACTIVITIES IN A COURSEBOOK SERIES OF TEACHING TURKISH AS A FOREIGN LANGUAGE

    OpenAIRE

    YAVUZ KIRIK, Muazzez

    2015-01-01

Informed by the principles of communicative foreign language teaching, this study focuses on the interaction level of speaking activities in the coursebook series ‘İstanbul- Yabancılar İçin Türkçe Ders Kitabı’. To this end, the study first analyzed the ratio of two-way to one-way speech among the speaking activities, and then the characteristics of the two-way activities were explored with a focus on their compatibility with the nature of real interaction as described in the relevant litera...

  6. Lecturing in one’s first language or in English as a lingua franca

    DEFF Research Database (Denmark)

    Preisler, Bent

    2014-01-01

    The demand for internationalization puts pressure on Danish universities to use English as the language of instruction instead of or in addition to the local language(s). The purpose of this study – though proceeding from the belief that true internationalization seeks to exploit all linguistic...... and multilingual classroom. This case study concerns Danish university teachers' spoken discourse and interaction with students in a Danish-language versus English-language classroom. The data are video recordings of classroom interaction at the University of Roskilde, Denmark. The focus is on the relationship...... between linguistic-pragmatic performance and academic authenticity for university teachers teaching courses in both English and Danish, based on recent sociolinguistic concepts such as “persona,” “stylization,” and “authenticity.” The analysis suggests that it is crucial for teachers' ability...

  7. Spoken commands control robot that handles radioactive materials

    International Nuclear Information System (INIS)

    Phelan, P.F.; Keddy, C.; Beugelsdojk, T.J.

    1989-01-01

Several robotic systems have been developed by Los Alamos National Laboratory to handle radioactive material. Because of safety considerations, the robotic system must be under direct human supervision and interactive control continuously. In this paper, we describe the implementation of a voice-recognition system that permits this control, yet allows the robot to perform complex preprogrammed manipulations without the operator's intervention. To provide better interactive control, we connected a speech synthesis unit to the robot's control computer, which provides audible feedback to the operator. Thus, upon completion of a task or if an emergency arises, an appropriate spoken message can be reported by the control computer. The training, programming, and operation of this commercially available system are discussed, as are the practical problems encountered during operations.

  8. Sentence Repetition in Deaf Children with Specific Language Impairment in British Sign Language

    Science.gov (United States)

    Marshall, Chloë; Mason, Kathryn; Rowley, Katherine; Herman, Rosalind; Atkinson, Joanna; Woll, Bencie; Morgan, Gary

    2015-01-01

    Children with specific language impairment (SLI) perform poorly on sentence repetition tasks across different spoken languages, but until now, this methodology has not been investigated in children who have SLI in a signed language. Users of a natural sign language encode different sentence meanings through their choice of signs and by altering…

  9. Componential Skills in Second Language Development of Bilingual Children with Specific Language Impairment

    Science.gov (United States)

    Verhoeven, Ludo; Steenge, Judit; van Leeuwe, Jan; van Balkom, Hans

    2017-01-01

    In this study, we investigated which componential skills can be distinguished in the second language (L2) development of 140 bilingual children with specific language impairment in the Netherlands, aged 6-11 years, divided into 3 age groups. L2 development was assessed by means of spoken language tasks representing different language skills…

  10. Family Language Policy and School Language Choice: Pathways to Bilingualism and Multilingualism in a Canadian Context

    Science.gov (United States)

    Slavkov, Nikolay

    2017-01-01

    This article reports on a survey with 170 school-age children growing up with two or more languages in the Canadian province of Ontario where English is the majority language, French is a minority language, and numerous other minority languages may be spoken by immigrant or Indigenous residents. Within this context the study focuses on minority…

  11. The Role of Language in Interactions with Others on Campus for Rural Appalachian College Students

    Science.gov (United States)

    Dunstan, Stephany Brett; Jaeger, Audrey J.

    2016-01-01

    Dialects of English spoken in rural, Southern Appalachia are heavily stigmatized in mainstream American culture, and speakers of Appalachian dialects are often subject to prejudice and stereotypes which can be detrimental in educational settings. We explored the experiences of rural, Southern Appalachian college students and the role speaking a…

  12. Micro Language Planning and Cultural Renaissance in Botswana

    Science.gov (United States)

    Alimi, Modupe M.

    2016-01-01

    Many African countries exhibit complex patterns of language use because of linguistic pluralism. The situation is often compounded by the presence of at least one foreign language that is either the official or second language. The language situation in Botswana depicts this complex pattern. Out of the 26 languages spoken in the country, including…

  13. Frequency and Pattern of Learner-Instructor Interaction in an Online English Language Learning Environment in Vietnam

    Science.gov (United States)

    Pham, Thach; Thalathoti, Vijay; Dakich, Eva

    2014-01-01

    This study examines the frequency and pattern of interpersonal interactions between the learners and instructors of an online English language learning course offered at a Vietnamese university. The paper begins with a review of literature on interaction type, pattern and model of interaction followed by a brief description of the online…

  14. DEVELOPMENT OF MOBILE LEARNING BASED- INTERACTIVE MULTIMEDIA IN PROGRAMMING LANGUAGE CLASS AT STAIN BATUSANGKAR

    Directory of Open Access Journals (Sweden)

    Lita Sari Muchlis

    2018-04-01

Full Text Available This study aims at developing mobile learning-based interactive media for the Programming Language I subject. This research uses the ADDIE model, in which the proposed instructional media are tested on students of the Informatics Management study program at STAIN Batusangkar, particularly in the Programming Language I course. Data collection was done by distributing questionnaires. First, a needs analysis was conducted by observing related phenomena and previous research. Next, after the design stage, the product was validated by three experts. As a result, the product scored 81.05 for content, categorised as very valid, and 85.6 for design, categorised as valid. In terms of practicality, the product was applied with the students, and the results show that it was practical to use in the Programming Language I course. To find out its effectiveness, the product was tested twice, before and after treatment. The mean score of the post-test was significantly higher than that of the pre-test (t-test, p = 0.001 < 0.05). Based on the data analysis, both the design validation by experts and the students' test results, the interactive online learning media is recommended for development for STAIN Batusangkar students.

  15. Schools and Languages in India.

    Science.gov (United States)

    Harrison, Brian

    1968-01-01

    A brief review of Indian education focuses on special problems caused by overcrowded schools, insufficient funding, and the status of education itself in the Indian social structure. Language instruction in India, a complex issue due largely to the numerous official languages currently spoken, is commented on with special reference to the problem…

  16. Language as a Status Symbol of Power in Social Interactions at a Multicultural School in the City of Medan

    Directory of Open Access Journals (Sweden)

    Ahmed Fernanda Desky

    2017-08-01

Full Text Available One’s habits in language use are influenced by the structures of daily social life, thereby creating different interaction patterns both individually and as a group. The sociology of language critically analyzes the use of language as a symbol of power which dominates the arena in a multicultural school. This research utilizes mixed methods, considered capable of uncovering and answering the issues and problems under examination. The location of the research was Sultan Iskandar Muda High School, the only pilot school in the city of Medan based on a multicultural education curriculum. The informants in this study were the principal, teachers, and students, while the respondents were a sample of 86 high school students. Research results show that one’s power in language use is determined by one’s interest in using language. School power and individual power have different portions when positioning one’s self during interactions. Although power is coercive in nature, the community must submit to the rules of the school. The power of the school in determining language emphasizes values of nationalism, in contrast to individual or group power, which adjusts the language to the situation at hand, so that relations of language use have their own portion of interaction in the multicultural school.

  17. Learn English or die: The effects of digital games on interaction and willingness to communicate in a foreign language

    Directory of Open Access Journals (Sweden)

    Hayo Reinders

    2011-04-01

    Full Text Available In recent years there has been a lot of interest in the potential role of digital games in language education. Playing digital games is said to be motivating to students and to benefit the development of social skills, such as collaboration, and metacognitive skills such as planning and organisation. An important potential benefit is also that digital games encourage the use of the target language in a non-threatening environment. Willingness to communicate has been shown to affect second language acquisition in a number of ways and it is therefore important to investigate if there is a connection between playing games and learners’ interaction in the target language. In this article we report on the results of a pilot study that investigated the effects of playing an online multiplayer game on the quantity and quality of second language interaction in the game and on participants’ willingness to communicate in the target language. We will show that digital games can indeed affect second language interaction patterns and contribute to second language acquisition, but that this depends, like in all other teaching and learning environments, on careful pedagogic planning of the activity.

  18. Situated dialog in speech-based human-computer interaction

    CERN Document Server

    Raux, Antoine; Lane, Ian; Misu, Teruhisa

    2016-01-01

    This book provides a survey of the state-of-the-art in the practical implementation of Spoken Dialog Systems for applications in everyday settings. It includes contributions on key topics in situated dialog interaction from a number of leading researchers and offers a broad spectrum of perspectives on research and development in the area. In particular, it presents applications in robotics, knowledge access and communication and covers the following topics: dialog for interacting with robots; language understanding and generation; dialog architectures and modeling; core technologies; and the analysis of human discourse and interaction. The contributions are adapted and expanded contributions from the 2014 International Workshop on Spoken Dialog Systems (IWSDS 2014), where researchers and developers from industry and academia alike met to discuss and compare their implementation experiences, analyses and empirical findings.

  19. How mother tongue and the second language interact with acquisition of a foreign language for year six students

    DEFF Research Database (Denmark)

    Slåttvik, Anja; Nielsen, Henrik Balle

This is a presentation of a current study of how the teaching of fiction is carried out in the subject English as a foreign language in year six in two Danish schools. There is a particular focus on 6 multilingual students and their third language acquisition perspective. The aim is to establish knowledge...... on multilingual students’ understanding of material and content in the EFL classroom and, on a long-term basis, to focus foreign language teachers’ attention on circumstances that challenge students learning a foreign language in a multilingual environment....

  20. Computer Assisted Testing of Spoken English: A Study of the SFLEP College English Oral Test System in China

    Directory of Open Access Journals (Sweden)

    John Lowe

    2009-06-01

Full Text Available This paper reports on the on-going evaluation of a computer-assisted system (CEOTS) for assessing spoken English skills among Chinese university students. This system is being developed to deal with the negative backwash effects of the present system of assessment of speaking skills, which is only available to a tiny minority. We present data from a survey of students at the developing institution (USTC), with follow-up interviews and further interviews with English language teachers, to gauge reactions to the test and its impact on language learning. We identify the key issue as being one of validity, with a tension existing between the construct and consequential validities of the existing system and of CEOTS. We argue that a computer-based system seems to offer the only solution to the negative backwash problem, but the development of the technology required to meet current construct validity demands makes this a very long-term prospect. We suggest that a compromise between the competing forms of validity must therefore be accepted, probably well before a computer-based system can deliver the level of interaction with the examinees that would emulate the present face-to-face mode.

  1. When words fail us: insights into language processing from developmental and acquired disorders.

    Science.gov (United States)

    Bishop, Dorothy V M; Nation, Kate; Patterson, Karalyn

    2014-01-01

    Acquired disorders of language represent loss of previously acquired skills, usually with relatively specific impairments. In children with developmental disorders of language, we may also see selective impairment in some skills; but in this case, the acquisition of language or literacy is affected from the outset. Because systems for processing spoken and written language change as they develop, we should beware of drawing too close a parallel between developmental and acquired disorders. Nevertheless, comparisons between the two may yield new insights. A key feature of connectionist models simulating acquired disorders is the interaction of components of language processing with each other and with other cognitive domains. This kind of model might help make sense of patterns of comorbidity in developmental disorders. Meanwhile, the study of developmental disorders emphasizes learning and change in underlying representations, allowing us to study how heterogeneity in cognitive profile may relate not just to neurobiology but also to experience. Children with persistent language difficulties pose challenges both to our efforts at intervention and to theories of learning of written and spoken language. Future attention to learning in individuals with developmental and acquired disorders could be of both theoretical and applied value.

  2. Pronoun forms and courtesy in spoken language in Tunja, Colombia

    Directory of Open Access Journals (Sweden)

    Gloria Avendaño de Barón

    2014-05-01

Full Text Available This article presents the results of a research project whose aims were the following: to determine the frequency of use of the polite pronoun forms sumercé, usted and tú, according to differences in gender, age and level of education, among speakers in Tunja; to describe the sociodiscursive variations; and to explain the relationship between usage and courtesy. The methodology of the Project for the Sociolinguistic Study of Spanish in Spain and in Latin America (PRESEEA) was used, and a sample of 54 speakers was taken. The results indicate that the most frequently used pronoun in Tunja to express friendliness and affection is sumercé, followed by usted and tú; women and men of different generations and levels of education alternate the use of these three forms in the context of narrative, descriptive, argumentative and explanatory speech.

  3. Porting a spoken language identification system to a new environment.

    CSIR Research Space (South Africa)

    Peche, M

    2008-11-01

    Full Text Available A speech processing system is often required to perform in a different environment than the one for which it was initially developed. In such a case, data from the new environment may be more limited in quantity and of poorer quality than...

  4. Spoken language identification system adaptation in under-resourced environments

    CSIR Research Space (South Africa)

    Kleynhans, N

    2013-12-01

Full Text Available The development of Automatic Speech Recognition (ASR) systems in the developing world is severely inhibited. Given that few task-specific corpora exist and speech technology systems perform poorly when deployed in a new environment, we investigate the use of acoustic model adaptation...

  5. Computational Interpersonal Communication: Communication Studies and Spoken Dialogue Systems

    Directory of Open Access Journals (Sweden)

    David J. Gunkel

    2016-09-01

Full Text Available With the advent of spoken dialogue systems (SDS), communication can no longer be considered a human-to-human transaction. It now involves machines. These mechanisms are not just a medium through which human messages pass, but now occupy the position of the other in social interactions. But the development of robust and efficient conversational agents is not just an engineering challenge. It also depends on research in human conversational behavior. It is the thesis of this paper that communication studies is best situated to respond to this need. The paper argues: (1) that research in communication can supply the information necessary to respond to and resolve many of the open problems in SDS engineering, and (2) that the development of SDS applications can provide the discipline of communication with unique opportunities to test extant theory and verify experimental results. We call this new area of interdisciplinary collaboration “computational interpersonal communication” (CIC).

  6. MINORITY LANGUAGES IN ESTONIAN SEGREGATIVE LANGUAGE ENVIRONMENTS

    Directory of Open Access Journals (Sweden)

    Elvira Küün

    2011-01-01

Full Text Available The goal of this project in Estonia was to determine what languages are spoken at home by students from the 2nd to the 5th year of basic school in Tallinn, the capital of Estonia. At the same time, this problem was also studied in other segregated regions of Estonia: Kohtla-Järve and Maardu. According to the database of the population census from the year 2000 (Estonian Statistics Executive Office’s census 2000), there are representatives of 142 ethnic groups living in Estonia, speaking a total of 109 native languages. At the same time, the database doesn’t state which languages are spoken at home. The material presented in this article belongs to the research topic “Home Language of Basic School Students in Tallinn” from the years 2007–2008, specifically financed and ordered by the Estonian Ministry of Education and Research (grant No. ETF 7065) in the framework of an international study called “Multilingual Project”. It was determined which language dominates in everyday use, what the factors for choosing the language of communication are, and what the preferred languages and language skills are. This study reflects the actual trends of the language situation in these cities.

  7. Patterns of Negotiation of Meaning in English as Second Language Learners’ Interactions

    Directory of Open Access Journals (Sweden)

    Ebrahim Samani

    2015-02-01

Full Text Available Problem Statement: The Internet, as a tool that presents many challenges, has drawn the attention of researchers in the field of education and especially foreign language teaching. However, there has been a lack of information about the true nature of these environments. In recent years, determining the patterns of negotiation of meaning as a way to delve into these environments has grown in popularity. Purpose of the Study: The current study was an effort to determine the types and frequencies of negotiation of meaning in the interaction of Malaysian students as English as a second language learners and, furthermore, to compare the findings of this study with corresponding previous studies. To this end, two research questions were posed: (a) what types of negotiation of meaning emerge in text-based synchronous CMC environments? and (b) are there any differences between the findings of this study and previous studies in terms of negotiation of meaning functions in this environment? Method: Participants of this study were fourteen English as second language learners at Universiti Putra Malaysia (UPM). They were involved in a series of discussions of selected short stories. Analysis of students’ chat logs was carried out through computer-mediated discourse analysis (CMDA). Findings and Results: This study yielded 10 types of functions in negotiation of meaning: clarification request, confirmation, confirmation check, correction or self-correction, elaboration, elaboration request, reply clarification or definition, reply confirmation, reply elaboration, and vocabulary check. Furthermore, the findings indicated that students negotiated with an average of 2.10 negotiations per 100 words. The most frequently used functions were confirmation, elaboration, and elaboration request, and the least frequently used were vocabulary check, reply confirmation, and reply clarification

  8. [Assessment of pragmatics from verbal spoken data].

    Science.gov (United States)

    Gallardo-Paúls, B

    2009-02-27

Pragmatic assessment is usually complex, long and sophisticated, especially for professionals who lack specific linguistic education and interact with impaired speakers. To design a quick assessment method that provides a general evaluation of the pragmatic effectiveness of neurologically affected speakers. This first filter will allow us to decide whether a detailed analysis of the altered categories should follow. Our starting point was the PerLA (perception, language and aphasia) profile of pragmatic assessment, designed for the comprehensive analysis of conversational data in clinical linguistics; this was then converted into a quick questionnaire. A quick protocol of pragmatic assessment is proposed, and the results found in a group of children with attention deficit hyperactivity disorder are discussed.

  9. Language Philosophy in the context of knowledge organization in the interactive virtual platform

    Directory of Open Access Journals (Sweden)

    Luciana De Souza Gracioso

    2012-12-01

    Full Text Available Over the past years we have pursued epistemological paths that enabled us to reflect on the meaning of language as information, especially in interactive virtual environments. The main objective of this investigation was not the identification or development of methodological tools, but rather the configuration of a theoretical discourse framework about the pragmatic epistemological possibilities of study and research in Information Science within the context of information actions in virtual technology. Thus, we present our thoughts and conjectures about the prerogatives and the obstacles encountered along that theoretical path, concluding with some communicative implications inherent in deriving the meaning of information from its use, which in turn configure informational activities on the Internet with regard to existing interactive platforms, better known as Web 2.0 or the Pragmatic Web.

  10. Recording voiceover the spoken word in media

    CERN Document Server

    Blakemore, Tom

    2015-01-01

    The only book on the market to specifically address its audience, Recording Voiceover is the comprehensive guide for engineers looking to understand the aspects of capturing the spoken word. Discussing all phases of the recording session, Recording Voiceover addresses everything from microphone recommendations for voice recording to pre-production considerations, including setting up the studio, working with and directing the voice talent, and strategies for reducing or eliminating distracting noise elements found in human speech. Recording Voiceover features in-depth, specific recommendations f

  11. On the Usability of Spoken Dialogue Systems

    DEFF Research Database (Denmark)

    Larsen, Lars Bo

    This work is centred on the methods and problems associated with defining and measuring the usability of Spoken Dialogue Systems (SDS). The starting point is the fact that speech-based interfaces have several times during the last 20 years fallen short of the high expectations and predictions held...... by industry, researchers and analysts. Several studies in the SDS literature indicate that this can be ascribed to a lack of attention from the speech technology community towards the usability of such systems. The experimental results presented in this work are based on a field trial with the OVID home...

  12. The Functional Organisation of the Fronto-Temporal Language System: Evidence from Syntactic and Semantic Ambiguity

    Science.gov (United States)

    Rodd, Jennifer M.; Longe, Olivia A.; Randall, Billi; Tyler, Lorraine K.

    2010-01-01

    Spoken language comprehension is known to involve a large left-dominant network of fronto-temporal brain regions, but there is still little consensus about how the syntactic and semantic aspects of language are processed within this network. In an fMRI study, volunteers heard spoken sentences that contained either syntactic or semantic ambiguities…

  13. Notes from the Field: Lolak--Another Moribund Language of Indonesia, with Supporting Audio

    Science.gov (United States)

    Lobel, Jason William; Paputungan, Ade Tatak

    2017-01-01

    This paper consists of a short multimedia introduction to Lolak, a near-extinct Greater Central Philippine language traditionally spoken in three small communities on the island of Sulawesi in Indonesia. In addition to being one of the most underdocumented languages in the area, it is also spoken by one of the smallest native speaker populations…

  14. Language and Interactional Discourse: Deconstructing the Talk-Generating Machinery in Natural Conversation

    Directory of Open Access Journals (Sweden)

    Amaechi Uneke Enyi

    2015-08-01

    Full Text Available The study, entitled "Language and Interactional Discourse: Deconstructing the Talk-Generating Machinery in Natural Conversation," is an analysis of spontaneous and informal conversation. The study, carried out in the theoretical and methodological tradition of ethnomethodology, was aimed at explicating how ordinary talk is organized and produced, how people coordinate their talk-in-interaction, how meanings are determined, and the role of talk in wider social processes. The study followed the basic assumption of conversation analysis, namely that talk is not just a product of two 'speaker-hearers' who attempt to exchange information or convey messages to each other. Rather, participants in conversation are seen to be mutually orienting to, and collaborating in, orderly and meaningful communication. The analytic objective is therefore to make clear the procedures on which speakers rely to produce utterances and by which they make sense of other speakers' talk. The data used for this study were a recorded informal conversation between two (and later three) middle-class civil servants who are friends. The recording was done in such a way that the participants were not aware that they were being recorded. The recording was later transcribed in a way that we believe is faithful to the spontaneity and informality of the talk. Our findings showed that conversation has its own features and is an ordered and structured day-to-day social event. Specifically, utterances are designed and informed by organized procedures, methods and resources which are tied to the contexts in which they are produced, and which participants are privy to by virtue of their membership of a culture or a natural language community. Keywords: Language, Discourse and Conversation

  15. Value of Web-based learning activities for nursing students who speak English as a second language.

    Science.gov (United States)

    Koch, Jane; Salamonson, Yenna; Du, Hui Yun; Andrew, Sharon; Frost, Steven A; Dunncliff, Kirstin; Davidson, Patricia M

    2011-07-01

    There is an increasing need to address the educational needs of students with English as a second language. The authors assessed the value of a Web-based activity to meet the needs of students with English as a second language in a bioscience subject. Using telephone contact, we interviewed 21 Chinese students, 24 non-Chinese students with English as a second language, and 7 native English-speaking students to identify the perception of the value of the intervention. Four themes emerged from the qualitative data: (1) Language is a barrier to achievement and affects self-confidence; (2) Enhancement intervention promoted autonomous learning; (3) Focusing on the spoken word increases interaction capacity and self-confidence; (4) Assessment and examination drive receptivity and sense of importance. Targeted strategies to promote language acculturation and acquisition are valued by students. Linking language acquisition skills to assessment tasks is likely to leverage improvements in competence. Copyright 2011, SLACK Incorporated.

  16. Interactions of Cultures and Top People of Wikipedia from Ranking of 24 Language Editions

    Science.gov (United States)

    Eom, Young-Ho; Aragón, Pablo; Laniado, David; Kaltenbrunner, Andreas; Vigna, Sebastiano; Shepelyansky, Dima L.

    2015-01-01

    Wikipedia is a huge global repository of human knowledge that can be leveraged to investigate intertwinements between cultures. With this aim, we apply methods of Markov chains and the Google matrix to the analysis of the hyperlink networks of 24 Wikipedia language editions, and rank all their articles by the PageRank, 2DRank and CheiRank algorithms. Using automatic extraction of people names, we obtain the top 100 historical figures for each edition and for each algorithm. We investigate their spatial, temporal, and gender distributions as a function of their cultural origins. Our study demonstrates not only the existence of skewness towards local figures, mainly recognized only in their own cultures, but also the existence of global historical figures appearing in a large number of editions. By determining the birth time and place of these persons, we perform an analysis of the evolution of such figures through 35 centuries of human history for each language, thus recovering interactions and the entanglement of cultures over time. We also obtain the distributions of historical figures over world countries, highlighting geographical aspects of cross-cultural links. Considering historical figures who appear in multiple editions as interactions between cultures, we construct a network of cultures and identify the most influential cultures according to this network. PMID:25738291
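    The PageRank stage of the pipeline described in this record can be illustrated with a minimal power-iteration sketch. This is a generic textbook implementation, not the authors' code, and the toy hyperlink graph is invented for illustration:

    ```python
    # Minimal PageRank by power iteration on a toy hyperlink graph.
    # The graph and damping factor are illustrative, not the study's data.
    def pagerank(links, damping=0.85, iters=100):
        nodes = sorted(links)
        n = len(nodes)
        rank = {u: 1.0 / n for u in nodes}
        for _ in range(iters):
            new = {u: (1.0 - damping) / n for u in nodes}
            for u in nodes:
                out = links[u]
                if out:  # distribute rank along outgoing hyperlinks
                    share = damping * rank[u] / len(out)
                    for v in out:
                        new[v] += share
                else:    # dangling node: spread its rank uniformly
                    for v in nodes:
                        new[v] += damping * rank[u] / n
            rank = new
        return rank

    # Articles linking to "A" make it the top-ranked node.
    toy = {"A": ["B"], "B": ["A"], "C": ["A"], "D": ["A", "B"]}
    ranks = pagerank(toy)
    top = max(ranks, key=ranks.get)
    ```

    The same iteration run on the transposed link graph yields CheiRank, which the study uses to highlight communicative (outgoing-link-rich) articles alongside the authoritative ones found by PageRank.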

  17. Interactions of cultures and top people of Wikipedia from ranking of 24 language editions.

    Directory of Open Access Journals (Sweden)

    Young-Ho Eom

    Full Text Available Wikipedia is a huge global repository of human knowledge that can be leveraged to investigate intertwinements between cultures. With this aim, we apply methods of Markov chains and the Google matrix to the analysis of the hyperlink networks of 24 Wikipedia language editions, and rank all their articles by the PageRank, 2DRank and CheiRank algorithms. Using automatic extraction of people names, we obtain the top 100 historical figures for each edition and for each algorithm. We investigate their spatial, temporal, and gender distributions as a function of their cultural origins. Our study demonstrates not only the existence of skewness towards local figures, mainly recognized only in their own cultures, but also the existence of global historical figures appearing in a large number of editions. By determining the birth time and place of these persons, we perform an analysis of the evolution of such figures through 35 centuries of human history for each language, thus recovering interactions and the entanglement of cultures over time. We also obtain the distributions of historical figures over world countries, highlighting geographical aspects of cross-cultural links. Considering historical figures who appear in multiple editions as interactions between cultures, we construct a network of cultures and identify the most influential cultures according to this network.

  18. First languages and the technologies for education

    Directory of Open Access Journals (Sweden)

    Julio VERA VILA

    2013-12-01

    Full Text Available This article is a reflection on how each human being's learning process and the cultural development of our species are connected to the possibility of translating reality – what we think, what we feel, how we interact – into a system of signs which, having shared meanings, enriches our intrapersonal and interpersonal communication. Spoken language was the first technology, but being genetically well prepared for it, we learn it through immersion; the rest, from written language to hypermedia, have to be well taught and even better learned. We conclude by highlighting the necessity of taking advantage of the benefits provided by the new technologies available nowadays in order to overcome the digital divide, without forgetting others, such as literacy acquisition, which are the base of the new technologies. We therefore need a theory and practice of education which embraces this complexity and avoids simplistic reductionism.

  19. Teaching English as a "Second Language" in Kenya and the United States: Convergences and Divergences

    Science.gov (United States)

    Roy-Campbell, Zaline M.

    2015-01-01

    English is spoken in five countries as the native language and in numerous other countries as an official language and the language of instruction. In countries where English is the native language, it is taught to speakers of other languages as an additional language to enable them to participate in all domains of life of that country. In many…

  20. Bridging the Gap: The Development of Appropriate Educational Strategies for Minority Language Communities in the Philippines

    Science.gov (United States)

    Dekker, Diane; Young, Catherine

    2005-01-01

    There are more than 6000 languages spoken by the 6 billion people in the world today--however, those languages are not evenly divided among the world's population--over 90% of people globally speak only about 300 majority languages--the remaining 5700 languages being termed "minority languages". These languages represent the…

  1. Phonological Sketch of the Sida Language of Luang Namtha, Laos

    Directory of Open Access Journals (Sweden)

    Nathan Badenoch

    2017-07-01

    Full Text Available This paper describes the phonology of the Sida language, a Tibeto-Burman language spoken by approximately 3,900 people in Laos and Vietnam. The data presented here are from the variety spoken in Luang Namtha province of northwestern Laos, and the paper focuses on a synchronic description of the fundamentals of the Sida phonological system. Several issues of diachronic interest are also discussed in the context of the diversity of the Southern Loloish group of languages, many of which are spoken in Laos and have not yet been described in detail.

  2. Semantic and phonological schema influence spoken word learning and overnight consolidation.

    Science.gov (United States)

    Havas, Viktória; Taylor, Jsh; Vaquero, Lucía; de Diego-Balaguer, Ruth; Rodríguez-Fornells, Antoni; Davis, Matthew H

    2018-06-01

    We studied the initial acquisition and overnight consolidation of new spoken words that resemble words in the native language (L1) or in an unfamiliar, non-native language (L2). Spanish-speaking participants learned the spoken forms of novel words in their native language (Spanish) or in a different language (Hungarian), which were paired with pictures of familiar or unfamiliar objects, or no picture. We thereby assessed, in a factorial way, the impact of existing knowledge (schema) on word learning by manipulating both semantic (familiar vs unfamiliar objects) and phonological (L1- vs L2-like novel words) familiarity. Participants were trained and tested with a 12-hr intervening period that included overnight sleep or daytime awake. Our results showed (1) benefits of sleep to recognition memory that were greater for words with L2-like phonology and (2) that learned associations with familiar but not unfamiliar pictures enhanced recognition memory for novel words. Implications for complementary systems accounts of word learning are discussed.

  3. A grammar of Abui : A Papuan language of Alor

    NARCIS (Netherlands)

    Kratochvil, František

    2007-01-01

    This work contains the first comprehensive description of Abui, a language of the Trans-New Guinea family spoken by approximately 16,000 speakers in the central part of Alor Island in Eastern Indonesia. The description focuses on the northern dialect of Abui as spoken in the village

  4. Interactive language learning by robots: the transition from babbling to word forms.

    Science.gov (United States)

    Lyon, Caroline; Nehaniv, Chrystopher L; Saunders, Joe

    2012-01-01

    The advent of humanoid robots has enabled a new approach to investigating the acquisition of language, and we report on the development of robots able to acquire rudimentary linguistic skills. Our work focuses on early stages analogous to some characteristics of a human child of about 6 to 14 months, the transition from babbling to first word forms. We investigate one mechanism among many that may contribute to this process, a key factor being the sensitivity of learners to the statistical distribution of linguistic elements. As well as being necessary for learning word meanings, the acquisition of anchor word forms facilitates the segmentation of an acoustic stream through other mechanisms. In our experiments some salient one-syllable word forms are learnt by a humanoid robot in real-time interactions with naive participants. Words emerge from random syllabic babble through a learning process based on a dialogue between the robot and the human participant, whose speech is perceived by the robot as a stream of phonemes. Numerous ways of representing the speech as syllabic segments are possible. Furthermore, the pronunciation of many words in spontaneous speech is variable. However, in line with research elsewhere, we observe that salient content words are more likely than function words to have consistent canonical representations; thus their relative frequency increases, as does their influence on the learner. Variable pronunciation may contribute to early word form acquisition. The importance of contingent interaction in real-time between teacher and learner is reflected by a reinforcement process, with variable success. The examination of individual cases may be more informative than group results. Nevertheless, word forms are usually produced by the robot after a few minutes of dialogue, employing a simple, real-time, frequency dependent mechanism. This work shows the potential of human-robot interaction systems in studies of the dynamics of early language

  5. Beyond Languages, beyond Modalities: Transforming the Study of Semiotic Repertoires

    Science.gov (United States)

    Kusters, Annelies; Spotti, Massimiliano; Swanwick, Ruth; Tapio, Elina

    2017-01-01

    This paper presents a critical examination of key concepts in the study of (signed and spoken) language and multimodality. It shows how shifts in conceptual understandings of language use, moving from bilingualism to multilingualism and (trans)languaging, have resulted in the revitalisation of the concept of language repertoires. We discuss key…

  6. Teaching and Learning Sign Language as a “Foreign” Language ...

    African Journals Online (AJOL)

    In recent years, there has been a growing debate in the United States, Europe, and Australia about the nature of the Deaf community as a cultural community,1 and the recognition of signed languages as “real” or “legitimate” languages comparable in all meaningful ways to spoken languages. An important element of this ...

  7. The Impact of Biculturalism on Language and Literacy Development: Teaching Chinese English Language Learners

    Science.gov (United States)

    Palmer, Barbara C.; Chen, Chia-I; Chang, Sara; Leclere, Judith T.

    2006-01-01

    According to the 2000 United States Census, the number of Americans age five and older who speak a language other than English at home grew 47 percent over the preceding decade. This group accounts for slightly less than one in five Americans (17.9%). Among the minority languages spoken in the United States, Asian-language speakers, including Chinese and other…

  8. Spoken Grammar: Where Are We and Where Are We Going?

    Science.gov (United States)

    Carter, Ronald; McCarthy, Michael

    2017-01-01

    This article synthesises progress made in the description of spoken (especially conversational) grammar over the 20 years since the authors published a paper in this journal arguing for a re-thinking of grammatical description and pedagogy based on spoken corpus evidence. We begin with a glance back at the 16th century and the teaching of Latin…

  9. Attention to spoken word planning: Chronometric and neuroimaging evidence

    NARCIS (Netherlands)

    Roelofs, A.P.A.

    2008-01-01

    This article reviews chronometric and neuroimaging evidence on attention to spoken word planning, using the WEAVER++ model as theoretical framework. First, chronometric studies on the time to initiate vocal responding and gaze shifting suggest that spoken word planning may require some attention,

  10. Heidelberg Interaction Training for Language Promotion in Early Childhood Settings (HIT)

    Science.gov (United States)

    Buschmann, Anke; Sachse, Steffi

    2018-01-01

    Beside parents, teachers in early childhood education and care have the greatest potential to foster language acquisition in children. This is especially important for children with language delays, language disorders or bi-/multilingual children. However, they present teachers with a particular challenge in language support. Therefore, integrated…

  11. Teaching natural language to computers

    OpenAIRE

    Corneli, Joseph; Corneli, Miriam

    2016-01-01

    "Natural Language," whether spoken and attended to by humans, or processed and generated by computers, requires networked structures that reflect creative processes in semantic, syntactic, phonetic, linguistic, social, emotional, and cultural modules. Being able to produce novel and useful behavior following repeated practice gets to the root of both artificial intelligence and human language. This paper investigates the modalities involved in language-like applications that computers -- and ...

  12. I Feel You: The Design and Evaluation of a Domotic Affect-Sensitive Spoken Conversational Agent

    Directory of Open Access Journals (Sweden)

    Juan Manuel Montero

    2013-08-01

    Full Text Available We describe work on the infusion of emotion into a limited-task autonomous spoken conversational agent situated in the domestic environment, using a need-inspired, task-independent emotion model (NEMO). In order to demonstrate the generation of affect through the use of the model, we describe the work of integrating it with a natural-language, mixed-initiative, HiFi-control spoken conversational agent (SCA). NEMO and the host system communicate externally, removing the need for the Dialog Manager to be modified in order to be adaptive, as is done in most existing dialog systems. The first part of the paper concerns the integration between NEMO and the host agent. The second part summarizes the work on automatic affect prediction, namely of frustration and contentment, from dialog features, a non-conventional source, in an attempt to move towards a more user-centric approach. The final part reports the evaluation results obtained from a user study in which both versions of the agent (non-adaptive and emotionally adaptive) were compared. The results provide substantial evidence of the benefits of adding emotion in a spoken conversational agent, especially in mitigating users' frustrations and, ultimately, improving their satisfaction.

  13. The socially weighted encoding of spoken words: a dual-route approach to speech perception.

    Science.gov (United States)

    Sumner, Meghan; Kim, Seung Kyung; King, Ed; McGowan, Kevin B

    2013-01-01

    Spoken words are highly variable. A single word may never be uttered the same way twice. As listeners, we regularly encounter speakers of different ages, genders, and accents, increasing the amount of variation we face. How listeners understand spoken words as quickly and adeptly as they do despite this variation remains an issue central to linguistic theory. We propose that learned acoustic patterns are mapped simultaneously to linguistic representations and to social representations. In doing so, we illuminate a paradox that results in the literature from, we argue, the focus on representations and the peripheral treatment of word-level phonetic variation. We consider phonetic variation more fully and highlight a growing body of work that is problematic for current theory: words with different pronunciation variants are recognized equally well in immediate processing tasks, while an atypical, infrequent, but socially idealized form is remembered better in the long-term. We suggest that the perception of spoken words is socially weighted, resulting in sparse, but high-resolution clusters of socially idealized episodes that are robust in immediate processing and are more strongly encoded, predicting memory inequality. Our proposal includes a dual-route approach to speech perception in which listeners map acoustic patterns in speech to linguistic and social representations in tandem. This approach makes novel predictions about the extraction of information from the speech signal, and provides a framework with which we can ask new questions. We propose that language comprehension, broadly, results from the integration of both linguistic and social information.

  14. Seven-star needle stimulation improves language and social interaction of children with autistic spectrum disorders.

    Science.gov (United States)

    Chan, Agnes S; Cheung, Mei-Chun; Sze, Sophia L; Leung, Winnie W

    2009-01-01

    This is a randomized controlled trial that aimed to evaluate the effect of the Seven-star Needle Stimulation treatment on children with Autistic Spectrum Disorders (ASD). Thirty-two children with ASD were assigned randomly into the treatment and control groups. Children in the treatment group underwent 30 sessions of stimulation over 6 weeks, while children in the control group were on a waiting list and did not receive treatment during this period of time. Intervention consisted of a treatment regime comprising of 30 sessions of Seven-star Needle Stimulation, delivered over 6 weeks. Each session lasted 5 to 10 min, children in the treatment group were stimulated at the front and back sides of their body and the head by using Seven-star Needles. The change in the children's behavior was evaluated using parents' report and neurophysiological changes were measured by quantitative EEG (qEEG). Results showed that the treatment group demonstrated significant improvement in language and social interaction, but not in stereotyped behavior or motor function, compared to the control group. qEEG spectral amplitudes in the treatment, but not in the control group, were also reduced significantly. The results suggested that Seven-star Needle Stimulation might be an effective intervention to improve language and social functioning of children with ASD.

  15. Shared language: Towards more effective communication.

    Science.gov (United States)

    Thomas, Joyce; McDonagh, Deana

    2013-01-01

    The ability to communicate with others and express ourselves is a basic human need. As we develop our understanding of the world, based on our upbringing, education and so on, our perspective and the way we communicate can differ from those around us. Engaging and interacting with others is a critical part of healthy living. It is the responsibility of the individual to ensure that they are understood in the way they intended. Shared language refers to people developing understanding amongst themselves based on language (e.g. spoken, text) to help them communicate more effectively. The key to understanding language is to first notice and be mindful of your language. Developing a shared language is an ongoing process that requires intention and time, which results in better understanding. Shared language is critical to collaboration, and collaboration is critical to business and education. With whom and how many people do you connect? Your 'shared language' makes a difference in the world. So, how do we successfully do this? This paper shares several strategies. Your sphere of influence will carry forward what and how you are communicating. Developing and nurturing a shared language is an essential element to enhance communication and collaboration, whether it is simply between partners or across the larger community of business and customers. Constant awareness and education are required to maintain the shared language. We are living in an increasingly smaller global community. Business is built on relationships. If you invest in developing shared language, your relationships and your business will thrive.

  16. Sectional microprocessor based microcomputer and its application to express analysis using interactive language

    International Nuclear Information System (INIS)

    Lang, I.; Leveleki, L.; Salai, M.; Turani, D.

    1984-01-01

    The TPA-L/128H, a mini-computer based on sectional (bit-slice) microprocessors and belonging to the TPA-8 computer family, has been developed. A substantial increase in operating speed is attained by means of microprogram monitoring. The central processor is built from AM2900 sectional microprocessor elements. The TPA-L/128H is program-compatible with the TPA-8 and fully equipped with software: high-level languages as well as OS/L, COS/H, RTS/H, PAL/128, WPS, TEASYS-8 and IL 128, supporting statistical data processing, the automation of physical experiments, and interactive experimental data processing. Real-time problems and the monitoring of CAMAC devices are solved efficiently.

  17. THE CONTRIBUTION OF SOCIOCULTURAL THEORY TO THE PROBLEM OF INSTRUCTIONAL INTERACTIONS IN THE SECOND LANGUAGE CLASSROOM

    Directory of Open Access Journals (Sweden)

    Chernova, N.A.

    2018-03-01

    Full Text Available The article deals with the concept of a continuum of regulation, which is also important to understanding Vygotsky's view of cognitive development: that view clearly suggests that communicative collaboration with adults or more skilled peers contributes to the development of self-regulation, that is, the capacity for independent problem solving and self-directed activity. Attention is drawn to the fact that in a language classroom using sociocultural theory and its tenets as a framework, we would see a highly interactive classroom where the students' zone of proximal development is identified through strategies such as portfolios and dialogue journals. The necessity of compiling a textbook based on the above-mentioned principles is stressed.

  18. PPC - an interactive preprocessor/compiler for the DSNP simulation language

    International Nuclear Information System (INIS)

    Mahannah, J.A.; Schor, A.L.

    1986-01-01

    The PPC preprocessor/compiler was developed for the DSNP (Dynamic Simulator for Nuclear Power Plants) simulation language. The goal of PPC is to provide an easy-to-use, interactive programming environment that will aid both the beginner and the well-seasoned DSNP programmer. PPC simplifies the steps of the simulation development process for any user. All will benefit from the on-line help facilities, easy manipulation of modules, the elimination of syntax errors, and the generally systematic approach. PPC is a very structured and modular program that allows for easy expansion and modification. Written entirely in C, it is fast, compact, and portable. Used as a front end, it greatly enhances DSNP's desirability as a simulation tool for education and research.

  19. L1 and L2 Spoken Word Processing: Evidence from Divided Attention Paradigm.

    Science.gov (United States)

    Shafiee Nahrkhalaji, Saeedeh; Lotfi, Ahmad Reza; Koosha, Mansour

    2016-10-01

    The present study aims to reveal some facts concerning first language (L1) and second language (L2) spoken-word processing in unbalanced proficient bilinguals using behavioral measures. The intention here is to examine the effects of auditory repetition word priming and semantic priming in the first and second languages of these bilinguals. The other goal is to explore the effects of attention manipulation on implicit retrieval of perceptual and conceptual properties of spoken L1 and L2 words. To this end, the participants performed auditory word priming and semantic priming as memory tests in their L1 and L2. In half of the trials of each experiment, they carried out the memory test while simultaneously performing a secondary task in the visual modality. The results revealed that effects of auditory word priming and semantic priming were present when participants processed L1 and L2 words in the full-attention condition. Attention manipulation could reduce priming magnitude in both experiments in L2. Moreover, L2 word retrieval increases reaction times and reduces accuracy on the simultaneous secondary task in order to protect its own accuracy and speed.
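    The priming effects discussed in this record are conventionally quantified as the difference in mean reaction time between unprimed and primed trials, so a smaller difference under divided attention indicates reduced priming. A minimal sketch, with invented reaction times in milliseconds rather than the study's data:

    ```python
    # Priming magnitude = mean RT (unprimed) - mean RT (primed), in ms.
    # All RT values below are invented for illustration.
    def mean(xs):
        return sum(xs) / len(xs)

    def priming_magnitude(unprimed_rts, primed_rts):
        return mean(unprimed_rts) - mean(primed_rts)

    # Hypothetical L2 trials under full vs divided attention:
    full_attention = priming_magnitude([812, 790, 845], [745, 730, 760])
    divided_attention = priming_magnitude([880, 902, 873], [861, 884, 859])

    # A smaller magnitude under divided attention reflects reduced priming,
    # the pattern the study reports for L2 words.
    reduced = divided_attention < full_attention
    ```

    The same computation applies to accuracy-based measures; here only the RT contrast is sketched.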

  20. Generation of a command language for nuclear signal and image processing on the basis of a general interactive system

    International Nuclear Information System (INIS)

    Pretschner, D.P.; Pfeiffer, G.; Deutsches Elektronen-Synchrotron

    1981-01-01

    In the field of nuclear medicine, BASIC and FORTRAN are currently being favoured as higher-level programming languages for computer-aided signal processing, and most operating systems of so-called "freely programmable analyzers" in nuclear wards have compilers for this purpose. However, FORTRAN is not an interactive language and thus not suited for conversational computing as a man-machine interface. BASIC, on the other hand, although a useful starting language for beginners, is not sufficiently sophisticated for complex nuclear medicine problems involving detailed calculations. Integration of new methods of signal acquisition, processing and presentation into an existing system, or generation of new systems, is difficult in FORTRAN, BASIC or ASSEMBLER and can only be done by system specialists, not by nuclear physicians. This problem may be solved by suitable interactive systems that are easy to learn, flexible, transparent and user-friendly. An interactive system of this type, XDS, was developed in the course of a project on evaluation of radiological image sequences. An XDS-generated command processing system for signal and image processing in nuclear medicine is described. The system is characterized by interactive program development and execution, problem-relevant data types, a flexible procedure concept and an integrated system implementation language for modern image processing algorithms. The advantages of the interactive system are illustrated by an example of diagnosis by nuclear methods. (orig.)

  1. Maternal Communicative Behaviours and Interaction Quality as Predictors of Language Development: Findings from a Community-Based Study of Slow-to-Talk Toddlers

    Science.gov (United States)

    Conway, Laura J.; Levickis, Penny A.; Smith, Jodie; Mensah, Fiona; Wake, Melissa; Reilly, Sheena

    2018-01-01

    Background: Identifying risk and protective factors for language development informs interventions for children with developmental language disorder (DLD). Maternal responsive and intrusive communicative behaviours are associated with language development. Mother-child interaction quality may influence how children use these behaviours in language…

  2. The Peculiarities of the Adverbs Functioning of the Dialect Spoken in the v. Shevchenkove, Kiliya district, Odessa Region

    Directory of Open Access Journals (Sweden)

    Maryna Delyusto

    2013-08-01

    The article gives new evidence about the adverb as a part of the grammatical system of the Ukrainian steppe dialect spread in the area between the Danube and the Dniester rivers. The author proves that the grammatical system of the dialect spoken in the v. Shevchenkove, Kiliya district, Odessa region is determined by the historical development of the Ukrainian language rather than by the influence of neighboring dialects.

  3. language choice, code-switching and code- mixing in biase

    African Journals Online (AJOL)

    Ada

    Finance and Economic Planning, Cross River and Akwa ... See Table 1. Table 1: Indigenous Languages Spoken in Biase ... used in education, in business, in religion, in the media ... far back as the seventeenth (17th) century (King. 1844).

  4. The language of football

    DEFF Research Database (Denmark)

    Rossing, Niels Nygaard; Skrubbeltrang, Lotte Stausgaard

    2014-01-01

    The language of football: A cultural analysis of selected World Cup nations. This essay describes how actions on the football field relate to the nations' different cultural understanding of football and how these actions become spoken dialects within a language of football. Saussure reasoned language to have two components: a language system and language users (Danesi, 2003). Consequently, football can be characterized as a language containing a system with specific rules of the game and users with actual choices and actions within the game. All football players can be considered language… levels (Schein, 2004) in which each player and his actions can be considered an artefact, a concrete symbol in motion embedded in espoused values and basic assumptions. Therefore, the actions of each dialect are strongly connected to the underlying understanding of football. By document and video…

  5. The Road to Language Learning Is Not Entirely Iconic: Iconicity, Neighborhood Density, and Frequency Facilitate Acquisition of Sign Language.

    Science.gov (United States)

    Caselli, Naomi K; Pyers, Jennie E

    2017-07-01

    Iconic mappings between words and their meanings are far more prevalent than once estimated and seem to support children's acquisition of new words, spoken or signed. We asked whether iconicity's prevalence in sign language overshadows two other factors known to support the acquisition of spoken vocabulary: neighborhood density (the number of lexical items phonologically similar to the target) and lexical frequency. Using mixed-effects logistic regressions, we reanalyzed 58 parental reports of native-signing deaf children's productive acquisition of 332 signs in American Sign Language (ASL; Anderson & Reilly, 2002) and found that iconicity, neighborhood density, and lexical frequency independently facilitated vocabulary acquisition. Despite differences in iconicity and phonological structure between signed and spoken language, signing children, like children learning a spoken language, track statistical information about lexical items and their phonological properties and leverage this information to expand their vocabulary.
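    The mixed-effects logistic regressions mentioned above can be sketched in simplified form. The code below is a hypothetical illustration, not the authors' analysis: it simulates child-by-sign observations with three facilitative predictors (iconicity, neighborhood density, log frequency) and fits a plain fixed-effects-only logistic regression by gradient ascent, omitting the random effects for children and items that the study used.

```python
import numpy as np

# Hypothetical sketch: does a child produce a given sign (1) or not (0),
# as a function of three standardized lexical predictors? All effect
# sizes below are invented for illustration.
rng = np.random.default_rng(0)
n = 2000

X = rng.normal(size=(n, 3))          # columns: iconicity, density, frequency
true_w = np.array([0.9, 0.5, 1.1])   # all facilitative, as the study reports

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

y = rng.random(n) < sigmoid(X @ true_w)  # 1 = sign produced

# Fit by gradient ascent on the (concave) log-likelihood.
w = np.zeros(3)
for _ in range(3000):
    grad = X.T @ (y - sigmoid(X @ w)) / n
    w += 0.5 * grad

print(w)  # each estimate should come out positive (facilitation)
```

    In the real analysis, random intercepts for each child and each sign would absorb between-subject and between-item variability before interpreting these fixed effects.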

  6. Maternal communicative behaviours and interaction quality as predictors of language development: findings from a community-based study of slow-to-talk toddlers.

    Science.gov (United States)

    Conway, Laura J; Levickis, Penny A; Smith, Jodie; Mensah, Fiona; Wake, Melissa; Reilly, Sheena

    2018-03-01

    Identifying risk and protective factors for language development informs interventions for children with developmental language disorder (DLD). Maternal responsive and intrusive communicative behaviours are associated with language development. Mother-child interaction quality may influence how children use these behaviours in language learning. To identify (1) communicative behaviours and interaction quality associated with language outcomes; (2) whether the association between a maternal intrusive behaviour (directive) and child language scores changed alongside a maternal responsive behaviour (expansion); and (3) whether interaction quality modified these associations. Language skills were assessed at 24, 36 and 48 months in 197 community-recruited children who were slow to talk at 18 months. Mothers and 24-month-olds were video-recorded playing at home. Maternal praise, missed opportunities, and successful and unsuccessful directives (i.e., whether followed by the child) were coded during a 10-min segment. Interaction quality was rated using a seven-point fluency and connectedness (FC) scale, during a 5-min segment. Linear regressions examined associations between these behaviours/ratings and language scores. Interaction analysis and simple slopes explored effect modification by FC. There was no evidence that missed opportunities or praise were associated with language scores. Higher rates of successful directives in the unadjusted model and unsuccessful directives in the adjusted model were associated with lower 24-month-old receptive language scores (e.g., unsuccessful directives effect size (ES) = -0.41). The association between unsuccessful directives and receptive language was weaker when adjusting for co-occurring expansions (ES = -0.34). Both types of directives were associated with poorer receptive and expressive language scores in adjusted models at 36 and 48 months (e.g., unsuccessful directive and 48-month receptive language, ES = -0.66). FC was…
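    The simple-slopes analysis of effect modification described above can be illustrated with a minimal sketch. This is not the study's code: it simulates data under a hypothetical interaction between maternal directives and FC, fits an ordinary-least-squares model with an interaction term, and computes the slope of directives at low versus high FC.

```python
import numpy as np

# Hypothetical simulation: directives lower language scores (negative
# slope), but the association weakens at higher FC (positive interaction).
# All coefficients are invented for illustration.
rng = np.random.default_rng(1)
n = 500

directives = rng.normal(size=n)   # rate of unsuccessful directives
fc = rng.normal(size=n)           # fluency/connectedness rating
noise = rng.normal(scale=0.5, size=n)

language = -0.4 * directives + 0.3 * fc + 0.2 * directives * fc + noise

# Design matrix: intercept, main effects, interaction term.
X = np.column_stack([np.ones(n), directives, fc, directives * fc])
beta, *_ = np.linalg.lstsq(X, language, rcond=None)

# "Simple slope" of directives evaluated at FC one SD below/above the mean.
slope_low_fc = beta[1] + beta[3] * (-1.0)
slope_high_fc = beta[1] + beta[3] * (+1.0)
print(slope_low_fc, slope_high_fc)
```

    The two computed slopes show how a single regression coefficient for directives can understate or overstate the association once interaction quality is taken into account.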

  7. Classifying a Person's Degree of Accessibility From Natural Body Language During Social Human-Robot Interactions.

    Science.gov (United States)

    McColl, Derek; Jiang, Chuan; Nejat, Goldie

    2017-02-01

    For social robots to be successfully integrated and accepted within society, they need to be able to interpret human social cues that are displayed through natural modes of communication. In particular, a key challenge in the design of social robots is developing the robot's ability to recognize a person's affective states (emotions, moods, and attitudes) in order to respond appropriately during social human-robot interactions (HRIs). In this paper, we present and discuss social HRI experiments we have conducted to investigate the development of an accessibility-aware social robot able to autonomously determine a person's degree of accessibility (rapport, openness) toward the robot based on the person's natural static body language. In particular, we present two one-on-one HRI experiments to: 1) determine the performance of our automated system in being able to recognize and classify a person's accessibility levels and 2) investigate how people interact with an accessibility-aware robot which determines its own behaviors based on a person's speech and accessibility levels.

  8. "We communicated that way for a reason": language practices and language ideologies among hearing adults whose parents are deaf.

    Science.gov (United States)

    Pizer, Ginger; Walters, Keith; Meier, Richard P

    2013-01-01

    Families with deaf parents and hearing children are often bilingual and bimodal, with both a spoken language and a signed one in regular use among family members. When interviewed, 13 American hearing adults with deaf parents reported widely varying language practices, sign language abilities, and social affiliations with Deaf and Hearing communities. Despite this variation, the interviewees' moral judgments of their own and others' communicative behavior suggest that these adults share a language ideology concerning the obligation of all family members to expend effort to overcome potential communication barriers. To our knowledge, such a language ideology is not similarly pervasive among spoken-language bilingual families, raising the question of whether there is something unique about family bimodal bilingualism that imposes different rights and responsibilities on family members than spoken-language family bilingualism does. This ideology unites an otherwise diverse group of interviewees, where each one preemptively denied being a "typical CODA [child of deaf adults]."

  9. Imitation interacts with one's second-language phonology but it does not operate cross-linguistically

    NARCIS (Netherlands)

    Podlipský, V.J.; Šimáčková, Š.; Chládková, K.

    2013-01-01

    This study explored effects of simultaneous use of late bilinguals' languages on their second-language (L2) pronunciation. We tested (1) if bilinguals effectively inhibit the first language (L1) when simultaneously processing L1 and L2, (2) if bilinguals, like natives, imitate subphonemic variation, …

  10. A Comparison of the Linguistic and Interactional Features of Language Learning Websites and Textbooks

    Science.gov (United States)

    Kong, Kenneth

    2009-01-01

    Self-study is playing an increasingly important role in the learning and instruction of many subjects, including second and foreign languages. With the rapid development of the internet, language websites for self-study are flourishing. While the language of print-based teaching materials has received some attention, the linguistic and…

  11. Comparison of Word Intelligibility in Spoken and Sung Phrases

    Directory of Open Access Journals (Sweden)

    Lauren B. Collister

    2008-09-01

    Twenty listeners were exposed to spoken and sung passages in English produced by three trained vocalists. Passages included representative words extracted from a large database of vocal lyrics, including both popular and classical repertoires. Target words were set within spoken or sung carrier phrases. Sung carrier phrases were selected from classical vocal melodies. Roughly a quarter of all words sung by an unaccompanied soloist were misheard. Sung passages showed a seven-fold decrease in intelligibility compared with their spoken counterparts. The perceptual mistakes occurring with vowels replicate previous studies showing the centralization of vowels. Significant confusions are also evident for consonants, especially voiced stops and nasals.

  12. Second Language Listening Instruction: Comparing a Strategies-Based Approach with an Interactive, Strategies/Bottom-Up Skills Approach

    Science.gov (United States)

    Yeldham, Michael

    2016-01-01

    This quasi-experimental study compared a strategies approach to second language listening instruction with an interactive approach, one combining a roughly equal balance of strategies and bottom-up skills. The participants were lower-intermediate-level Taiwanese university EFL learners, who were taught for 22 hours over one and a half semesters.…

  13. The home literacy environment: exploring how media and parent-child interactions are associated with children’s language production

    NARCIS (Netherlands)

    Liebeskind, K.G.; Piotrowski, J.; Lapierre, M.A.; Linebarger, D.L.

    2014-01-01

    Children who start school with strong language skills initiate a trajectory of academic success, while children with weaker skills are likely to struggle. Research has demonstrated that media and parent-child interactions, both characteristics of the home literacy environment, influence children's…

  14. Sequence Text Structure Intervention during Interactive Book Reading of Expository Picture Books with Preschool Children with Language Impairment

    Science.gov (United States)

    Breit-Smith, Allison; Olszewski, Arnold; Swoboda, Christopher; Guo, Ying; Prendeville, Jo-Anne

    2017-01-01

    This study explores the outcomes of an interactive book reading intervention featuring expository picture books. This small-group intervention was delivered by four practitioners (two early childhood special education teachers and two speech-language pathologists) three times per week for 8 weeks to 6 preschool-age children (3 years 1 month to 4…

  15. Impacts of Teacher-Child Managed Whole-Group Language and Literacy Instruction on the Depth of Preschoolers' Social Interaction

    Science.gov (United States)

    Lin, Tzu-Jung; Justice, Laura M.; Emery, Alyssa A.; Mashburn, Andrew J.; Pentimonti, Jill M.

    2017-01-01

    Research Findings: This study examined the potential impacts of ongoing participation (twice weekly for 30 weeks) in teacher-child managed whole-group language and literacy instruction on prekindergarten children's social interaction with classmates. Teacher-child managed whole-group instruction that provides children with opportunities to engage…

  16. Anniversary Article--Interactional Feedback in Second Language Teaching and Learning: A Synthesis and Analysis of Current Research

    Science.gov (United States)

    Nassaji, Hossein

    2016-01-01

    The role of interactional feedback has long been of interest to both second language acquisition researchers and teachers and has continued to be the object of intensive empirical and theoretical inquiry. In this article, I provide a synthesis and analysis of recent research and developments in this area and their contributions to second language…

  17. Germanic heritage languages in North America: Acquisition, attrition and change

    OpenAIRE

    Johannessen, Janne Bondi; Salmons, Joseph C.; Westergaard, Marit; Anderssen, Merete; Arnbjörnsdóttir, Birna; Allen, Brent; Pierce, Marc; Boas, Hans C.; Roesch, Karen; Brown, Joshua R.; Putnam, Michael; Åfarli, Tor A.; Newman, Zelda Kahan; Annear, Lucas; Speth, Kristin

    2015-01-01

    This book presents new empirical findings about Germanic heritage varieties spoken in North America: Dutch, German, Pennsylvania Dutch, Icelandic, Norwegian, Swedish, West Frisian and Yiddish, and varieties of English spoken both by heritage speakers and in communities after language shift. The volume focuses on three critical issues underlying the notion of ‘heritage language’: acquisition, attrition and change. The book offers theoretically-informed discussions of heritage language processe...

  18. The road to language learning is iconic: evidence from British Sign Language.

    Science.gov (United States)

    Thompson, Robin L; Vinson, David P; Woll, Bencie; Vigliocco, Gabriella

    2012-12-01

    An arbitrary link between linguistic form and meaning is generally considered a universal feature of language. However, iconic (i.e., nonarbitrary) mappings between properties of meaning and features of linguistic form are also widely present across languages, especially signed languages. Although recent research has shown a role for sign iconicity in language processing, research on the role of iconicity in sign-language development has been mixed. In this article, we present clear evidence that iconicity plays a role in sign-language acquisition for both the comprehension and production of signs. Signed languages were taken as a starting point because they tend to encode a higher degree of iconic form-meaning mappings in their lexicons than spoken languages do, but our findings are more broadly applicable: Specifically, we hypothesize that iconicity is fundamental to all languages (signed and spoken) and that it serves to bridge the gap between linguistic form and human experience.

  19. Toward a tactile language for human-robot interaction: two studies of tacton learning and performance.

    Science.gov (United States)

    Barber, Daniel J; Reinerman-Jones, Lauren E; Matthews, Gerald

    2015-05-01

    Two experiments were performed to investigate the feasibility for robot-to-human communication of a tactile language using a lexicon of standardized tactons (tactile icons) within a sentence. Improvements in autonomous systems technology and a growing demand within military operations are spurring interest in communication via vibrotactile displays. Tactile communication may become an important element of human-robot interaction (HRI), but it requires the development of messaging capabilities approaching the communication power of the speech and visual signals used in the military. In Experiment 1 (N = 38), we trained participants to identify sets of directional, dynamic, and static tactons and tested performance and workload following training. In Experiment 2 (N = 76), we introduced an extended training procedure and tested participants' ability to correctly identify two-tacton phrases. We also investigated the impact of multitasking on performance and workload. Individual difference factors were assessed. Experiment 1 showed that participants found dynamic and static tactons difficult to learn, but the enhanced training procedure in Experiment 2 produced competency in performance for all tacton categories. Participants in the latter study also performed well on two-tacton phrases and when multitasking. However, some deficits in performance and elevation of workload were observed. Spatial ability predicted some aspects of performance in both studies. Participants may be trained to identify both single tactons and tacton phrases, demonstrating the feasibility of developing a tactile language for HRI. Tactile communication may be incorporated into multi-modal communication systems for HRI. It also has potential for human-human communication in challenging environments. © 2014, Human Factors and Ergonomics Society.

  20. Moving conceptualizations of language and literacy in SLA

    DEFF Research Database (Denmark)

    Laursen, Helle Pia

    …in various technological environments, we see an increase in scholarship that highlights the mixing and chaining of spoken, written and visual modalities and how written and visual often precede or overrule spoken language. There seems to be a mismatch between current day language practices…, in language education and in language practices. As a consequence of this, and in the light of the increasing mobility and linguistic diversity in Europe, in this colloquium we address the need for a (re)conceptualization of the relation between language and literacy. Drawing on data from different settings…

  1. Use of Spoken and Written Japanese Did Not Protect Japanese-American Men From Cognitive Decline in Late Life

    Science.gov (United States)

    Gruhl, Jonathan C.; Erosheva, Elena A.; Gibbons, Laura E.; McCurry, Susan M.; Rhoads, Kristoffer; Nguyen, Viet; Arani, Keerthi; Masaki, Kamal; White, Lon

    2010-01-01

    Objectives. Spoken bilingualism may be associated with cognitive reserve. Mastering a complicated written language may be associated with additional reserve. We sought to determine if midlife use of spoken and written Japanese was associated with lower rates of late life cognitive decline. Methods. Participants were second-generation Japanese-American men from the Hawaiian island of Oahu, born 1900–1919, free of dementia in 1991, and categorized based on midlife self-reported use of spoken and written Japanese (total n included in primary analysis = 2,520). Cognitive functioning was measured with the Cognitive Abilities Screening Instrument scored using item response theory. We used mixed effects models, controlling for age, income, education, smoking status, apolipoprotein E e4 alleles, and number of study visits. Results. Rates of cognitive decline were not related to use of spoken or written Japanese. This finding was consistent across numerous sensitivity analyses. Discussion. We did not find evidence to support the hypothesis that multilingualism is associated with cognitive reserve. PMID:20639282
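    The scoring approach named above (the Cognitive Abilities Screening Instrument "scored using item response theory") can be sketched with a generic two-parameter logistic (2PL) item model. This is not the study's scoring code, and every parameter value below is invented for illustration: under a 2PL model, the probability of a correct response depends on the respondent's ability theta, the item's discrimination a, and its difficulty b.

```python
import math

def p_correct(theta, a, b):
    """2PL item characteristic curve: P(correct | ability theta)."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# An easy, well-discriminating item vs. a hard one (hypothetical values).
easy = dict(a=1.5, b=-1.0)
hard = dict(a=1.5, b=1.5)

for theta in (-1.0, 0.0, 1.0):
    print(theta,
          round(p_correct(theta, **easy), 3),
          round(p_correct(theta, **hard), 3))
```

    Scoring a whole instrument this way weights items by how informative they are, which is why IRT-based scores track longitudinal change more gracefully than raw sum scores.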

  2. Speech, gesture and the origins of language

    NARCIS (Netherlands)

    Levelt, W.J.M.

    2004-01-01

    During the second half of the 19th century, the psychology of language was invented as a discipline for the sole purpose of explaining the evolution of spoken language. These efforts culminated in Wilhelm Wundt’s monumental Die Sprache of 1900, which outlined the psychological mechanisms involved in…

  3. Iconic Factors and Language Word Order

    Science.gov (United States)

    Moeser, Shannon Dawn

    1975-01-01

    College students were presented with an artificial language in which spoken nonsense words were correlated with visual references. Inferences regarding vocabulary acquisition were drawn, and it was suggested that the processing of the language was mediated through a semantic memory system. (CK)

  4. Interplay of Languaging and Gameplay: Player-Game Interactions as Ecologies for Languaging and Situated L2 Development

    Science.gov (United States)

    Ibrahim, Karim Hesham Shaker

    2016-01-01

    The field of game-mediated L2 learning has grown exponentially, and much has been discovered about the potentials of game-mediated interactions for L2 development, yet the fine-grained dynamics of player-game interactions and how they come to facilitate and afford L2 development are still largely underexplored. To address this gap in the…

  5. Interactivity in the Teaching and Learning of Foreign Languages: What It Means for Resourcing and Delivery of Online and Blended Programmes

    Science.gov (United States)

    Tudini, Vincenza

    2018-01-01

    University students who enrol in foreign language (FL) programmes are motivated by various needs, but in particular the need to achieve communicative fluency, which generally requires interaction with others. This study therefore explores the notion of 'interactivity,' as conceptualised in second language learning theories and how it might be…

  6. ASSESSING THE SO CALLED MARKED INFLECTIONAL FEATURES OF NIGERIAN ENGLISH: A SECOND LANGUAGE ACQUISITION THEORY ACCOUNT

    OpenAIRE

    Boluwaji Oshodi

    2014-01-01

    There are conflicting claims among scholars on whether the structural outputs of the types of English spoken in countries where English is used as a second language give such speech forms the status of varieties of English. This study examined those morphological features considered to be marked features of the variety spoken in Nigeria according to Kirkpatrick (2011) and the variety spoken in Malaysia by considering the claims of the Missing Surface Inflection Hypothesis (MSIH), a Second Language Acquisition theory…

  7. Sign Language Interpreting in Theatre: Using the Human Body to Create Pictures of the Human Soul

    Directory of Open Access Journals (Sweden)

    Michael Richardson

    2017-06-01

    This paper explores theatrical interpreting for Deaf spectators, a specialism that both blurs the separation between translation and interpreting, and replaces these potentials with a paradigm in which the translator's body is central to the production of the target text. Meaningful written translations of dramatic texts into sign language are not currently possible. For Deaf people to access Shakespeare or Moliere in their own language usually means attending a sign language interpreted performance, a typically disappointing experience that fails to provide accessibility or to fulfil the potential of a dynamically equivalent theatrical translation. I argue that when such interpreting events fail, significant contributory factors are the challenges involved in producing such a target text and the insufficient embodiment of that text. The second of these factors suggests that the existing conference and community models of interpreting are insufficient for describing theatrical interpreting. I propose that a model drawn from Theatre Studies, namely psychophysical acting, might be more effective for conceptualising theatrical interpreting. I also draw on theories from neurological research into the Mirror Neuron System to suggest that a highly visual and physical approach to performance (be that by actors or interpreters) is more effective in building a strong actor-spectator interaction than a performance in which meaning is conveyed by spoken words. Arguably this difference in language impact between signed and spoken is irrelevant to hearing audiences attending spoken language plays, but I suggest that for all theatre translators the implications are significant: it is not enough to create a literary translation as the target text; it is also essential to produce a text that suggests physicality. The aim should be the creation of a text which demands full expression through the body, the best picture of the human soul and the fundamental medium…

  8. Processing spoken lectures in resource-scarce environments

    CSIR Research Space (South Africa)

    Van Heerden, CJ

    2011-11-01

    …and then adapting or training new models using the segmented spoken lectures. The eventual systems perform quite well, aligning more than 90% of a selected set of target words successfully.

  9. Autosegmental Representation of Epenthesis in the Spoken French ...

    African Journals Online (AJOL)

    Nneka Umera-Okeke

    ... spoken French of IUFLs. Key words: IUFLs, Epenthesis, Ijebu dialect, Autosegmental phonology .... Ambiguities may result: salmi "strait" vs. salami. (An exception is that in .... tiers of segments. In the picture given us by classical generative.

  10. Mother-Child Interaction and Early Language Skills in Children Born to Mothers with Substance Abuse and Psychiatric Problems.

    Science.gov (United States)

    J Haabrekke, Kristin; Siqveland, Torill; Smith, Lars; Wentzel-Larsen, Tore; Walhovd, Kristine B; Moe, Vibeke

    2015-10-01

    This prospective, longitudinal study with data collected at four time points investigated how maternal psychiatric symptoms, substance abuse and maternal intrusiveness in interaction were related to early child language skills. Three groups of mothers were recruited during pregnancy: One from residential treatment institutions for substance abuse (n = 18), one from psychiatric outpatient treatment (n = 22) and one from well-baby clinics (n = 30). Maternal substance abuse and anti-social and borderline personality traits were assessed during pregnancy, postpartum depression at 3 months, maternal intrusiveness in interaction at 12 months, and child language skills at 2 years. Results showed that the mothers in the substance abuse group had the lowest level of education, they were younger and they were more likely to be single mothers than the mothers in the two other groups. There was a significant difference in expressive language between children born to mothers with substance abuse problems and those born to comparison mothers, however not when controlling for maternal age, education and single parenthood. No group differences in receptive language skills were detected. Results further showed that maternal intrusiveness observed in mother-child interaction at 12 months was significantly related to child expressive language at 2 years, also when controlling for socio-demographic risk factors. This suggests that in addition to addressing substance abuse and psychiatric problems, there is a need for applying treatment models promoting sensitive caregiving, in order to enhance child expressive language skills.

  11. Aphasia, an acquired language disorder

    African Journals Online (AJOL)

    2009-10-11

    Oct 11, 2009 ... In this article we will review the types of aphasia, an approach to its diagnosis, aphasia subtypes, rehabilitation and prognosis. ... language processing in both the written and spoken forms.6 ... The angular gyrus (Brodman area 39) is located at the .... of his or her quality of life, emotional state, sense of well-.

  12. Talker and background noise specificity in spoken word recognition memory

    OpenAIRE

    Cooper, Angela; Bradlow, Ann R.

    2017-01-01

    Prior research has demonstrated that listeners are sensitive to changes in the indexical (talker-specific) characteristics of speech input, suggesting that these signal-intrinsic features are integrally encoded in memory for spoken words. Given that listeners frequently must contend with concurrent environmental noise, to what extent do they also encode signal-extrinsic details? Native English listeners’ explicit memory for spoken English monosyllabic and disyllabic words was assessed as a fu...

  13. Spectrotemporal processing drives fast access to memory traces for spoken words.

    Science.gov (United States)

    Tavano, A; Grimm, S; Costa-Faidella, J; Slabu, L; Schröger, E; Escera, C

    2012-05-01

    The Mismatch Negativity (MMN) component of the event-related potentials is generated when a detectable spectrotemporal feature of the incoming sound does not match the sensory model set up by preceding repeated stimuli. MMN is enhanced at frontocentral scalp sites for deviant words when compared to acoustically similar deviant pseudowords, suggesting that automatic access to long-term memory traces for spoken words contributes to MMN generation. Does spectrotemporal feature matching also drive automatic lexical access? To test this, we recorded human auditory event-related potentials (ERPs) to disyllabic spoken words and pseudowords within a passive oddball paradigm. We first aimed at replicating the word-related MMN enhancement effect for Spanish, thereby adding to the available cross-linguistic evidence (e.g., Finnish, English). We then probed its resilience to spectrotemporal perturbation by inserting short (20 ms) and long (120 ms) silent gaps between first and second syllables of deviant and standard stimuli. A significantly enhanced, frontocentrally distributed MMN to deviant words was found for stimuli with no gap. The long gap yielded no deviant word MMN, showing that prior expectations of word form limits in a given language influence deviance detection processes. Crucially, the insertion of a short gap suppressed deviant word MMN enhancement at frontocentral sites. We propose that spectrotemporal point-wise matching constitutes a core mechanism for fast serial computations in audition and language, bridging sensory and long-term memory systems. Copyright © 2012 Elsevier Inc. All rights reserved.
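    The passive oddball paradigm used above can be sketched as a stimulus-sequence generator. The code below is a generic illustration, not the study's presentation script: frequent "standard" stimuli are interleaved with rare "deviant" stimuli, with the common constraint that two deviants never occur back to back. The deviant probability and labels are hypothetical.

```python
import random

def oddball_sequence(n_trials, p_deviant=0.1, seed=0):
    """Generate a standard/deviant trial sequence with isolated deviants."""
    rng = random.Random(seed)
    seq = []
    for _ in range(n_trials):
        # Force a standard right after a deviant so deviants stay isolated,
        # letting a fresh sensory model re-form before the next deviance.
        if seq and seq[-1] == "deviant":
            seq.append("standard")
        else:
            seq.append("deviant" if rng.random() < p_deviant else "standard")
    return seq

seq = oddball_sequence(1000)
print(seq.count("deviant") / len(seq))  # roughly the deviant rate
```

    ERPs to the rare deviants are then averaged and compared against ERPs to the standards; the MMN is the difference wave that emerges when the deviant violates the regularity established by the standards.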

  14. Real-time lexical comprehension in young children learning American Sign Language.

    Science.gov (United States)

    MacDonald, Kyle; LaMarr, Todd; Corina, David; Marchman, Virginia A; Fernald, Anne

    2018-04-16

    When children interpret spoken language in real time, linguistic information drives rapid shifts in visual attention to objects in the visual world. This language-vision interaction can provide insights into children's developing efficiency in language comprehension. But how does language influence visual attention when the linguistic signal and the visual world are both processed via the visual channel? Here, we measured eye movements during real-time comprehension of a visual-manual language, American Sign Language (ASL), by 29 native ASL-learning children (16-53 mos, 16 deaf, 13 hearing) and 16 fluent deaf adult signers. All signers showed evidence of rapid, incremental language comprehension, tending to initiate an eye movement before sign offset. Deaf and hearing ASL-learners showed similar gaze patterns, suggesting that the in-the-moment dynamics of eye movements during ASL processing are shaped by the constraints of processing a visual language in real time and not by differential access to auditory information in day-to-day life. Finally, variation in children's ASL processing was positively correlated with age and vocabulary size. Thus, despite competition for attention within a single modality, the timing and accuracy of visual fixations during ASL comprehension reflect information processing skills that are important for language acquisition regardless of language modality. © 2018 John Wiley & Sons Ltd.

  15. Discovering Language through Augmented Reality and the Interactive Digital White Board

    Directory of Open Access Journals (Sweden)

    Sandra Ruperta Pérez-Lisboa

    2017-08-01

    Full Text Available This study analyzed the development of phonological, semantic, and syntactic aspects of language through the use of augmented reality and an interactive whiteboard with boys and girls in the kindergarten of Liceo San Felipe, San Felipe, Chile. With these tools, learning experiences were carried out that enhanced the understanding of sentences and words in their successive components: linguistic segmentation, phonological awareness, and reflection on the meaning of words and sentences. The sessions took place in a didactic classroom of the Educacion Parvularia (Pre-School Education) program at the University of Playa Ancha, San Felipe Campus, for 60 minutes, once a week, over four months. It was a quasi-experimental study, and pre- and post-tests made it possible to verify the development of 18 children from a municipal school in San Felipe. The instruments used were the Linguistic Segmentation Test; the Comprehensive and Expressive Language Examination Test (ELCE), semantic aspect subtest; and the Evaluation O Test, words and phrases subtest. The results, based on the comparison of pre- and post-tests, showed changes in the children's command of the semantic, syntactic, and phonological aspects achieved with this methodology. However, more research is needed to validate this proposal for metalinguistic teaching.

  16. SoS Notebook: An Interactive Multi-Language Data Analysis Environment.

    Science.gov (United States)

    Peng, Bo; Wang, Gao; Ma, Jun; Leong, Man Chong; Wakefield, Chris; Melott, James; Chiu, Yulun; Du, Di; Weinstein, John N

    2018-05-22

    Complex bioinformatic data analysis workflows involving multiple scripts in different languages can be difficult to consolidate, share, and reproduce. An environment that streamlines the entire process of data collection, analysis, visualization, and reporting for such multi-language analyses has been lacking. We developed Script of Scripts (SoS) Notebook, a web-based notebook environment that allows the use of multiple scripting languages in a single notebook, with data flowing freely within and across languages. SoS Notebook enables researchers to perform sophisticated bioinformatic analyses using the most suitable tools for different parts of the workflow, without the limitations of a particular language or the complications of cross-language communication. SoS Notebook is hosted at http://vatlab.github.io/SoS/ and is distributed under a BSD license. bpeng@mdanderson.org.
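    The cross-language data flow that SoS Notebook describes can be illustrated conceptually. A common pattern for passing structured data between language kernels is serialization to a neutral interchange format such as JSON; the sketch below is not SoS Notebook's actual exchange mechanism, only a hedged illustration of that general idea, and all function names in it are hypothetical.

    ```python
    import json

    def export_for_other_kernel(variables: dict) -> str:
        """Serialize analysis results so another language kernel
        (e.g., R or Julia) could read them back from a neutral format."""
        return json.dumps(variables)

    def import_from_other_kernel(payload: str) -> dict:
        """Deserialize results produced by another language kernel."""
        return json.loads(payload)

    # Round trip: data produced in one "kernel" survives transfer intact.
    results = {"gene": "TP53", "counts": [12, 7, 31]}
    payload = export_for_other_kernel(results)
    assert import_from_other_kernel(payload) == results
    ```

    In an actual multi-kernel notebook, such an interchange step would be handled by the environment itself rather than by user code; the sketch only shows why language-neutral serialization makes data "flow freely" across languages.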

  17. Effects of maternal sensitivity and cognitive and linguistic stimulation on cochlear implant users' language development over four years.

    Science.gov (United States)

    Quittner, Alexandra L; Cruz, Ivette; Barker, David H; Tobey, Emily; Eisenberg, Laurie S; Niparko, John K

    2013-02-01

    To examine the effects of observed maternal sensitivity (MS), cognitive stimulation (CS), and linguistic stimulation on the 4-year growth of oral language in young, deaf children receiving a cochlear implant. Previous studies of cochlear implants have not considered the effects of parental behaviors on language outcomes. In this prospective, multisite study, we evaluated parent-child interactions during structured and unstructured play tasks and their effects on oral language development in 188 deaf children receiving a cochlear implant and 97 normal-hearing children as controls. Parent-child interactions were rated on a 7-point scale using the National Institute of Child Health and Human Development's Early Childcare Study codes, which have well-established psychometric properties. Language was assessed using the MacArthur Bates Communicative Development Inventories, the Reynell Developmental Language Scales, and the Comprehensive Assessment of Spoken Language. We used mixed longitudinal modeling to test our hypotheses. After accounting for early hearing experience and child and family demographics, MS and CS predicted significant increases in the growth of oral language. Linguistic stimulation was related to language growth only in the context of high MS. The magnitude of effects of MS and CS on the growth of language was similar to that found for age at cochlear implantation, suggesting that addressing parenting behaviors is a critical target for early language learning after implantation. Copyright © 2013 Mosby, Inc. All rights reserved.

  18. Australian Aboriginal Deaf People and Aboriginal Sign Language

    Science.gov (United States)

    Power, Des

    2013-01-01

    Many Australian Aboriginal people use a sign language ("hand talk") that mirrors their local spoken language and is used both in culturally appropriate settings when speech is taboo or counterindicated and for community communication. The characteristics of these languages are described, and early European settlers' reports of deaf…

  19. Regional Sign Language Varieties in Contact: Investigating Patterns of Accommodation

    Science.gov (United States)

    Stamp, Rose; Schembri, Adam; Evans, Bronwen G.; Cormier, Kearsy

    2016-01-01

    Short-term linguistic accommodation has been observed in a number of spoken language studies. The first of its kind in sign language research, this study aims to investigate the effects of regional varieties in contact and lexical accommodation in British Sign Language (BSL). Twenty-five participants were recruited from Belfast, Glasgow,…

  20. How Facebook Can Revitalise Local Languages: Lessons from Bali

    Science.gov (United States)

    Stern, Alissa Joy

    2017-01-01

    For a language to survive, it must be spoken and passed down to the next generation. But how can we engage teenagers--so crucial for language transmission--to use and value their local tongue when they are bombarded by pressures from outside and from within their society to only speak national and international languages? This paper analyses the…