Issues involved in teaching and assessing communicative competence are identified and applied to adolescent native English speakers with low levels of academic achievement. A distinction is drawn between transactional versus interactional speech, short versus long speaking turns, and spoken language influenced or not influenced by written…
In this paper we describe a preliminary, work-in-progress Spoken Language Understanding Software (SLUS) with tailored feedback options, which uses an interactive spoken language interface to teach Iraqi Arabic and culture to second language learners. The SLUS analyzes input speech from the second language learner and grades it for correct pronunciation in terms of supra-segmental and rudimentary segmental errors, such as missing consonants. We evaluated this software on training data with the help of two native speakers, and found that it achieved an accuracy of around 70% in the law-and-order domain. For future work, we plan to develop similar systems for multiple languages.
Jacques Melitz; Farid Toubal
We construct new series for common native language and common spoken language for 195 countries, which we use together with series for common official language and linguistic proximity in order to draw inferences about (1) the aggregate impact of all linguistic factors on bilateral trade, (2) whether the linguistic influences come from ethnicity and trust or from ease of communication, and (3) insofar as they come from ease of communication, to what extent translation and interpreters play a role...
Spoken language corpora for the nine official African languages of South Africa. Jens Allwood, AP Hendrikse. Abstract. In this paper we give an outline of a corpus planning project which aims to develop linguistic resources for the nine official African languages of South Africa in the form of corpora, more specifically spoken ...
Crowe, Kathryn; McLeod, Sharynne
The purpose of this research was to investigate factors that influence professionals' guidance of parents of children with hearing loss regarding spoken language multilingualism and spoken language choice. Sixteen professionals who provide services to children and young people with hearing loss completed an online survey, rating the importance of…
Bedny, Marina; Richardson, Hilary; Saxe, Rebecca
Plasticity in the visual cortex of blind individuals provides a rare window into the mechanisms of cortical specialization. In the absence of visual input, occipital ("visual") brain regions respond to sound and spoken language. Here, we examined the time course and developmental mechanism of this plasticity in blind children. Nineteen blind and 40 sighted children and adolescents (4-17 years old) listened to stories and two auditory control conditions (unfamiliar foreign speech, and music). We find that "visual" cortices of young blind (but not sighted) children respond to sound. Responses to nonlanguage sounds increased between the ages of 4 and 17. By contrast, occipital responses to spoken language were maximal by age 4 and were not related to Braille learning. These findings suggest that occipital plasticity for spoken language is independent of plasticity for Braille and for sound. We conclude that in the absence of visual input, spoken language colonizes the visual system during brain development. Our findings suggest that early in life, human cortex has a remarkably broad computational capacity. The same cortical tissue can take on visual perception and language functions. Studies of plasticity provide key insights into how experience shapes the human brain. The "visual" cortex of adults who are blind from birth responds to touch, sound, and spoken language. To date, all existing studies have been conducted with adults, so little is known about the developmental trajectory of plasticity. We used fMRI to study the emergence of "visual" cortex responses to sound and spoken language in blind children and adolescents. We find that "visual" cortex responses to sound increase between 4 and 17 years of age. By contrast, responses to spoken language are present by 4 years of age and are not related to Braille-learning. These findings suggest that, early in development, human cortex can take on a strikingly wide range of functions. 
Nicodemus, Brenda; Emmorey, Karen
Spoken language (unimodal) interpreters often prefer to interpret from their non-dominant language (L2) into their native language (L1). Anecdotally, signed language (bimodal) interpreters express the opposite bias, preferring to interpret from L1 (spoken language) into L2 (signed language). We conducted a large survey study ("N" =…
This article addresses key issues and considerations for teachers wanting to incorporate spoken grammar activities into their own teaching and also focuses on six common features of spoken grammar, with practical activities and suggestions for teaching them in the language classroom. The hope is that this discussion of spoken grammar and its place…
A key problem in spoken language identification (LID) is to design effective representations which are specific to language information. For example, in recent years, representations based on both phonotactic and acoustic features have proven their effectiveness for LID. Although advances in machine learning have led to significant improvements, LID performance is still lacking, especially for short-duration speech utterances. With the hypothesis that language information is weak and represented only latently in speech, and is largely dependent on the statistical properties of the speech content, existing representations may be insufficient. Furthermore, they may be susceptible to the variations caused by different speakers, specific content of the speech segments, and background noise. To address this, we propose using Deep Bottleneck Features (DBF) for spoken LID, motivated by the success of Deep Neural Networks (DNN) in speech recognition. We show that DBFs can form a low-dimensional compact representation of the original inputs with a powerful descriptive and discriminative capability. To evaluate the effectiveness of this, we design two acoustic models, termed DBF-TV and parallel DBF-TV (PDBF-TV), using a DBF-based i-vector representation for each speech utterance. Results on the NIST language recognition evaluation 2009 (LRE09) show significant improvements over state-of-the-art systems. By fusing the output of phonotactic and acoustic approaches, we achieve an EER of 1.08%, 1.89% and 7.01% for 30 s, 10 s and 3 s test utterances respectively. Furthermore, various DBF configurations have been extensively evaluated, and an optimal system proposed.
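The bottleneck idea described above can be sketched in a few lines of NumPy: a feed-forward network with one narrow hidden layer, from which frame-level features are read out after (hypothetical) training. This is an illustrative toy, not the authors' DBF-TV system; all dimensions, names, and the random weights are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

class BottleneckDNN:
    """Toy feed-forward net with a narrow bottleneck layer.

    In a real system the weights would be trained (e.g., on a phone
    classification task); here they are random, since the point is
    only where the Deep Bottleneck Features (DBFs) come from.
    """

    def __init__(self, dim_in=39, dim_hidden=256, dim_bottleneck=40):
        self.W1 = rng.standard_normal((dim_in, dim_hidden)) * 0.01
        self.W2 = rng.standard_normal((dim_hidden, dim_bottleneck)) * 0.01

    def bottleneck(self, frames):
        """Map acoustic frames (T x dim_in) to DBFs (T x dim_bottleneck)."""
        h = relu(frames @ self.W1)
        return relu(h @ self.W2)

# 100 frames of 39-dim MFCC-like input -> 100 x 40 DBF matrix,
# which would then feed an i-vector extractor per utterance.
frames = rng.standard_normal((100, 39))
net = BottleneckDNN()
dbf = net.bottleneck(frames)
print(dbf.shape)  # (100, 40)
```
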
Assessing spoken-language educational interpreting: Measuring up and measuring right. Lenelle Foster, Adriaan Cupido. Abstract. This article, primarily, presents a critical evaluation of the development and refinement of the assessment instrument used to assess formally the spoken-language educational interpreters at ...
... sound of that language. These language-specific properties can be exploited to identify a spoken language reliably. Automatic language identification has emerged as a prominent research area in Indian languages processing. People from different regions of India speak around 800 different languages.
Bates, Madeleine; Ellard, Dan; Peterson, Pat; Shaked, Varda
.... In an effort to demonstrate the relevance of SIS technology to real-world military applications, BBN has undertaken the task of providing a spoken language interface to DART, a system for military...
The objective of this effort was to develop a prototype, hand-held or body-mounted spoken language translator to assist military and law enforcement personnel in interacting with non-English-speaking people...
This article introduces the first Spoken Language Identification system developed to distinguish among all eleven of South Africa’s official languages. The PPR-LM (Parallel Phoneme Recognition followed by Language Modeling) architecture...
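As a rough illustration of the PPR-LM idea, the toy sketch below scores a decoded phone string against per-language phone-bigram models and picks the best-scoring language. The phone strings, the two languages, and the smoothing are invented for the example and do not reflect the actual eleven-language system.

```python
import math
from collections import defaultdict

def train_bigram_lm(phone_strings, alpha=1.0):
    """Train a phone-bigram LM with add-alpha smoothing; return a scorer."""
    counts = defaultdict(lambda: defaultdict(float))
    vocab = set()
    for s in phone_strings:
        phones = ["<s>"] + s.split() + ["</s>"]
        vocab.update(phones)
        for a, b in zip(phones, phones[1:]):
            counts[a][b] += 1

    def logprob(s):
        phones = ["<s>"] + s.split() + ["</s>"]
        lp = 0.0
        for a, b in zip(phones, phones[1:]):
            total = sum(counts[a].values())
            lp += math.log((counts[a][b] + alpha) / (total + alpha * len(vocab)))
        return lp

    return logprob

# Hypothetical training phone strings for two languages
lms = {
    "english": train_bigram_lm(["th e k ae t", "s ih t s"]),
    "french": train_bigram_lm(["l ax sh a", "b on zh u r"]),
}

def identify(phone_string):
    """PPR-LM decision: the language whose LM scores the decoded phones best."""
    return max(lms, key=lambda lang: lms[lang](phone_string))

print(identify("b on zh u r"))  # french
```

In the full architecture each language also has its own phone recognizer, and every recognizer's output is scored by every language model before the final decision.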
assessment instrument used to assess formally the spoken-language educational interpreters at Stellenbosch University (SU). Research ... Is the interpreter suited to the module? Is the interpreter easier to follow? Technical: microphone technique, lag, completeness, language use, vocabulary, role, personal objectives ...
This article examines the impact of the hegemony of English, as a common lingua franca, referred to as a global language, on the indigenous languages spoken in Nigeria. Since English, through the British political imperialism and because of the economic supremacy of English dominated countries, has assumed the ...
Thothathiri, Malathi; Snedeker, Jesse
Syntactic priming during language production is pervasive and well-studied. Hearing, reading, speaking or writing a sentence with a given structure increases the probability of subsequently producing the same structure, regardless of whether the prime and target share lexical content. In contrast, syntactic priming during comprehension has proven more elusive, fueling claims that comprehension is less dependent on general syntactic representations and more dependent on lexical knowledge. In three experiments we explored syntactic priming during spoken language comprehension. Participants acted out double-object (DO) or prepositional-object (PO) dative sentences while their eye movements were recorded. Prime sentences used different verbs and nouns than the target sentences. In target sentences, the onset of the direct-object noun was consistent with both an animate recipient and an inanimate theme, creating a temporary ambiguity in the argument structure of the verb (DO e.g., Show the horse the book; PO e.g., Show the horn to the dog). We measured the difference in looks to the potential recipient and the potential theme during the ambiguous interval. In all experiments, participants who heard DO primes showed a greater preference for the recipient over the theme than those who heard PO primes, demonstrating across-verb priming during online language comprehension. These results accord with priming found in production studies, indicating a role for abstract structural information during comprehension as well as production.
Vaughn, Charlotte R; Bradlow, Ann R
While indexical information is implicated in many levels of language processing, little is known about the internal structure of the system of indexical dimensions, particularly in bilinguals. A series of three experiments using the speeded classification paradigm investigated the relationship between various indexical and non-linguistic dimensions of speech in processing. Namely, we compared the relationship between a lesser-studied indexical dimension relevant to bilinguals, which language is being spoken (in these experiments, either Mandarin Chinese or English), with: talker identity (Experiment 1), talker gender (Experiment 2), and amplitude of speech (Experiment 3). Results demonstrate that language-being-spoken is integrated in processing with each of the other dimensions tested, and that these processing dependencies seem to be independent of listeners' bilingual status or experience with the languages tested. Moreover, the data reveal processing interference asymmetries, suggesting a processing hierarchy for indexical, non-linguistic speech features.
Huettig, Falk; Brouwer, Susanne
It is now well established that anticipation of upcoming input is a key characteristic of spoken language comprehension. It has also frequently been observed that literacy influences spoken language processing. Here, we investigated whether anticipatory spoken language processing is related to individuals' word reading abilities. Dutch adults with dyslexia and a control group participated in two eye-tracking experiments. Experiment 1 was conducted to assess whether adults with dyslexia show the typical language-mediated eye gaze patterns. Eye movements of both adults with and without dyslexia closely replicated earlier research: spoken language is used to direct attention to relevant objects in the environment in a closely time-locked manner. In Experiment 2, participants received instructions (e.g., 'Kijk naar de(COM) afgebeelde piano(COM)', look at the displayed piano) while viewing four objects. Articles (Dutch 'het' or 'de') were gender marked such that the article agreed in gender only with the target, and thus, participants could use gender information from the article to predict the target object. The adults with dyslexia anticipated the target objects but much later than the controls. Moreover, participants' word reading scores correlated positively with their anticipatory eye movements. We conclude by discussing the mechanisms by which reading abilities may influence predictive language processing. Copyright © 2015 John Wiley & Sons, Ltd.
Le Bigot, Ludovic; Terrier, Patrice; Jamet, Eric; Botherel, Valerie; Rouet, Jean-Francois
The aim of the study was to determine the influence of textual feedback on the content and outcome of spoken interaction with a natural language dialogue system. More specifically, the assumption that textual feedback could disrupt spoken interaction was tested in a human-computer dialogue situation. In total, 48 adult participants, familiar with the system, had to find restaurants based on simple or difficult scenarios using a real natural language service system in a speech-only (phone), speech plus textual dialogue history (multimodal) or text-only (web) modality. The linguistic contents of the dialogues differed as a function of modality, but were similar whether the textual feedback was included in the spoken condition or not. These results add to burgeoning research efforts on multimodal feedback, in suggesting that textual feedback may have little or no detrimental effect on information searching with a real system. STATEMENT OF RELEVANCE: The results suggest that adding textual feedback to interfaces for human-computer dialogue could enhance spoken interaction rather than create interference. The literature currently suggests that adding textual feedback to tasks that depend on the visual sense benefits human-computer interaction. The addition of textual output when the spoken modality is heavily taxed by the task was investigated.
... the carefully selected training data used to construct the system initially. The authors investigated the process of porting a Spoken Language Identification (S-LID) system to a new environment and describe methods to prepare it for more effective use...
Parisse, C; Le Normand, M T
The use of computer tools has led to major advances in the study of spoken language corpora. One area that has shown particular progress is the study of child language development. Although it is now easy to lexically tag every word in a spoken language corpus, one still has to choose between numerous ambiguous forms, especially with languages such as French or English, where more than 70% of words are ambiguous. Computational linguistics can now provide a fully automatic disambiguation of lexical tags. The tool presented here (POST) can tag and disambiguate a large text in a few seconds. This tool complements systems dealing with language transcription and suggests further theoretical developments in the assessment of the status of morphosyntax in spoken language corpora. The program currently works for French and English, but it can be easily adapted for use with other languages. The analysis and computation of a corpus produced by normal French children 2-4 years of age, as well as of a sample corpus produced by French SLI children, are given as examples.
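A minimal sketch of bigram-based tag disambiguation of the kind described here: a tiny Viterbi search over lexical tag probabilities and tag-transition probabilities. This is not the POST program itself; every word form, tag, and probability below is hypothetical.

```python
import math

# Hypothetical mini-lexicon: ambiguous French forms with possible tags
# and made-up lexical log-probabilities.
lexicon = {
    "la": {"DET": math.log(0.7), "PRO": math.log(0.3)},
    "porte": {"NOUN": math.log(0.6), "VERB": math.log(0.4)},
    "ferme": {"VERB": math.log(0.5), "ADJ": math.log(0.5)},
}

# Made-up tag-bigram log-probabilities; unseen bigrams get a floor score.
transitions = {
    ("<s>", "DET"): math.log(0.8), ("<s>", "PRO"): math.log(0.2),
    ("DET", "NOUN"): math.log(0.9), ("DET", "VERB"): math.log(0.1),
    ("PRO", "VERB"): math.log(0.9), ("PRO", "NOUN"): math.log(0.1),
    ("NOUN", "VERB"): math.log(0.7), ("NOUN", "ADJ"): math.log(0.3),
    ("VERB", "ADJ"): math.log(0.3),
}
FLOOR = math.log(1e-6)

def viterbi(words):
    """Pick the single best tag sequence under the bigram model."""
    best = {"<s>": (0.0, [])}        # tag -> (score, best path so far)
    for w in words:
        new = {}
        for tag, lexlp in lexicon[w].items():
            score, path = max(
                (prev_score + transitions.get((prev_tag, tag), FLOOR) + lexlp,
                 prev_path)
                for prev_tag, (prev_score, prev_path) in best.items()
            )
            new[tag] = (score, path + [tag])
        best = new
    return max(best.values())[1]

# "la porte ferme" ("the door closes"): every word is ambiguous,
# but the tag bigrams resolve it.
print(viterbi(["la", "porte", "ferme"]))  # ['DET', 'NOUN', 'VERB']
```
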
Locke, John L.
A major synthesis of the latest research on early language acquisition, this book explores what gives infants the remarkable capacity to progress from babbling to meaningful sentences, and what inclines a child to speak. The book examines the neurological, perceptual, social, and linguistic aspects of language acquisition in young children, from…
Planchou, Clément; Clément, Sylvain; Béland, Renée; Cason, Nia; Motte, Jacques; Samson, Séverine
Previous studies have reported that children score better in language tasks using sung rather than spoken stimuli. We examined word detection ease in sung and spoken sentences that were equated for phoneme duration and pitch variations in children aged 7 to 12 years with typical language development (TLD) as well as in children with specific language impairment (SLI), and hypothesized that the facilitation effect would vary with language abilities. In Experiment 1, 69 children with TLD (7-10 years old) detected words in sentences that were spoken, sung on pitches extracted from speech, and sung on original scores. In Experiment 2, we added a natural speech rate condition and tested 68 children with TLD (7-12 years old). In Experiment 3, 16 children with SLI and 16 age-matched children with TLD were tested in all four conditions. In both TLD groups, older children scored better than the younger ones. The matched TLD group scored higher than the SLI group, who scored at the level of the younger children with TLD. None of the experiments showed a facilitation effect of sung over spoken stimuli. Word detection abilities improved with age in both TLD and SLI groups. Our findings are compatible with the hypothesis of delayed language abilities in children with SLI, and are discussed in light of the role of durational prosodic cues in word detection.
The Prosodic Parallelism hypothesis claims that adjacent prosodic categories prefer identical branching of internal adjacent constituents. According to Wiese and Speyer (2015), this preference implies that feet contained in the same phonological phrase display either binary or unary branching, but not different types of branching. The seemingly free schwa-zero alternations at the end of some words in German make it possible to test this hypothesis. The hypothesis was successfully tested in a corpus study which used large-scale bodies of written German. As some open questions remain, and as it is unclear whether Prosodic Parallelism is valid for the spoken modality as well, the present study extends this inquiry to spoken German. As in the previous study, the results of a corpus analysis recruiting a variety of linguistic constructions are presented. The Prosodic Parallelism hypothesis can be demonstrated to be valid for spoken German as well as for written German. The paper thus contributes to the question of whether prosodic preferences are similar between the spoken and written modes of a language. Some consequences of the results for the production of language are discussed.
... rates when no Japanese acoustic models are constructed. An increasing amount of Japanese training data is used to train the language classifier of an English-only (E), an English-French (EF), and an English-French-Portuguese PPR system. ... Because of their role as world languages that are widely spoken in Africa, our initial LID system was designed to distinguish between English, French and Portuguese. We therefore trained phone recognizers and language...
Miller, Jon F.; Andriacchi, Karen; Nockerts, Ann
Purpose: This tutorial discusses the importance of language sample analysis and how Systematic Analysis of Language Transcripts (SALT) software can be used to simplify the process and effectively assess the spoken language production of adolescents. Method: Over the past 30 years, thousands of language samples have been collected from typical…
Davidson, Kathryn; Lillo-Martin, Diane; Chen Pichler, Deborah
Bilingualism is common throughout the world, and bilingual children regularly develop into fluently bilingual adults. In contrast, children with cochlear implants (CIs) are frequently encouraged to focus on a spoken language to the exclusion of sign language. Here, we investigate the spoken English language skills of 5 children with CIs who also have deaf signing parents, and so receive exposure to a full natural sign language (American Sign Language, ASL) from birth, in addition to spoken En...
Spoken language understanding (SLU) is an emerging field between speech and language processing, investigating human/machine and human/human communication by leveraging technologies from signal processing, pattern recognition, machine learning and artificial intelligence. SLU systems are designed to extract the meaning from speech utterances, and their applications are vast, from voice search in mobile devices to meeting summarization, attracting interest from both commercial and academic sectors. Both human/machine and human/human communications can benefit from the application of SLU, using...
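A toy sketch of what "extracting the meaning from speech utterances" can amount to at the output end of an SLU system: mapping a transcribed utterance to an intent frame with slots. The intents and patterns below are invented for illustration; real SLU systems learn statistical models rather than relying on hand-written regular expressions.

```python
import re

# Hypothetical intent patterns for a toy restaurant-search front end
PATTERNS = [
    ("find_restaurant",
     re.compile(r"find (?:me )?an? (?P<cuisine>\w+) restaurant(?: in (?P<city>\w+))?")),
    ("book_table",
     re.compile(r"book a table for (?P<people>\d+)(?: at (?P<time>\d+(?::\d+)?))?")),
]

def understand(utterance):
    """Map a transcribed utterance to an intent frame with filled slots."""
    text = utterance.lower().strip()
    for intent, pat in PATTERNS:
        m = pat.search(text)
        if m:
            slots = {k: v for k, v in m.groupdict().items() if v is not None}
            return {"intent": intent, "slots": slots}
    return {"intent": "unknown", "slots": {}}

print(understand("Find me a Thai restaurant in Austin"))
# {'intent': 'find_restaurant', 'slots': {'cuisine': 'thai', 'city': 'austin'}}
```

In a deployed pipeline the input to `understand` would be the (possibly errorful) output of a speech recognizer, which is why robustness to recognition errors is a central SLU concern.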
Moeller, Aleidine J.; Theiler, Janine
Communicative approaches to teaching language have emphasized the centrality of oral proficiency in the language acquisition process, but research investigating oral proficiency has been surprisingly limited, yielding an incomplete understanding of spoken language development. This study investigated the development of spoken language at the high…
Office of English Language Acquisition, US Department of Education, 2015
The Office of English Language Acquisition (OELA) has synthesized key data on English learners (ELs) into two-page PDF sheets, by topic, with graphics, plus key contacts. The topics for this report on Asian/Pacific Islander languages spoken by English Learners (ELs) include: (1) Top 10 Most Common Asian/Pacific Islander Languages Spoken Among ELs:…
Nippold, Marilyn A; Frantz-Kaspar, Megan W; Vigeland, Laura M
In this study, we examined syntactic complexity in the spoken language samples of young adults. Its purpose was to contribute to the expanding knowledge base in later language development and to begin building a normative database of language samples that potentially could be used to evaluate young adults with known or suspected language impairment. Forty adults (mean age = 22 years, 10 months) with typical language development participated in an interview that consisted of 3 speaking tasks: a general conversation about common, everyday topics; a narrative retelling task that involved fables; and a question-and-answer, critical-thinking task about the fables. Each speaker's interview was audio-recorded, transcribed, broken into communication units, coded for main and subordinate clauses, entered into Systematic Analysis of Language Transcripts (Miller, Iglesias, & Nockerts, 2004), and analyzed for mean length of communication unit and clausal density. Both the narrative and critical-thinking tasks elicited significantly greater syntactic complexity than the conversational task. It was also found that syntactic complexity was significantly greater during the narrative task than the critical-thinking task. Syntactic complexity was best revealed by a narrative task that involved fables. The study offers benchmarks for language development during early adulthood.
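The two syntax measures used in the study, mean length of communication unit and clausal density, reduce to simple averages once each C-unit has been coded. A minimal sketch, assuming a simplified stand-in for SALT coding in which each C-unit is a tuple of (word count, main clauses, subordinate clauses):

```python
def analyze_sample(cunits):
    """Compute mean length of C-unit (in words) and clausal density
    (total clauses per C-unit) from coded communication units."""
    n = len(cunits)
    mlcu = sum(words for words, _, _ in cunits) / n
    density = sum(main + sub for _, main, sub in cunits) / n
    return mlcu, density

# Hypothetical coded sample of 3 C-units
sample = [
    (9, 1, 1),   # 9 words, 1 main clause + 1 subordinate clause
    (12, 1, 2),
    (6, 1, 0),
]
mlcu, density = analyze_sample(sample)
print(round(mlcu, 2), round(density, 2))  # 9.0 2.0
```
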
Carson, J; Walker, L A; Sanders, B J; Jones, J E; Weddell, J A; Tomlin, A M
The purpose of this study was to assess dmft, the number of decayed, missing (due to caries), and/or filled primary teeth, of English-speaking and non-English-speaking patients of a hospital-based pediatric dental clinic under the age of 72 months, to determine if native language is a risk marker for tooth decay. Records from an outpatient dental clinic which met the inclusion criteria were reviewed. Patient demographics and dmft score were recorded, and the patients were separated into three groups by the native language spoken by their parents: English, Spanish and all other languages. A total of 419 charts were assessed: 253 English-speaking, 126 Spanish-speaking, and 40 other native languages. After accounting for patient characteristics, dmft was significantly higher for the other-language group than for the English-speaking group (p < 0.05). Those patients under 72 months of age whose parents' native language is not English or Spanish have the highest risk for increased dmft when compared to English- and Spanish-speaking patients. Providers should consider taking additional time to educate patients and their parents, in their native language, on the importance of routine dental care and oral hygiene.
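The dmft index used above is a per-child count over the primary dentition. A minimal sketch, with a hypothetical tooth chart and status codes:

```python
def dmft(teeth):
    """dmft index: count of primary teeth that are decayed ('d'),
    missing due to caries ('m'), or filled ('f').
    `teeth` maps a tooth id to its status code."""
    return sum(1 for status in teeth.values() if status in {"d", "m", "f"})

# Hypothetical chart: 20 primary teeth, "s" = sound
chart = {i: "s" for i in range(1, 21)}
chart.update({3: "d", 7: "d", 12: "f", 18: "m"})
print(dmft(chart))  # 4
```
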
Barberà, Gemma; Zwets, Martine
In both signed and spoken languages, pointing serves to direct an addressee's attention to a particular entity. This entity may be either present or absent in the physical context of the conversation. In this article we focus on pointing directed to nonspeaker/nonaddressee referents in Sign Language of the Netherlands (Nederlandse Gebarentaal,…
Nussbaum, Debra; Waddy-Smith, Bettie; Doyle, Jane
There is a core body of knowledge, experience, and skills integral to facilitating auditory, speech, and spoken language development when working with the general population of students who are deaf and hard of hearing. There are additional issues, strategies, and challenges inherent in speech habilitation/rehabilitation practices essential to the population of deaf and hard of hearing students who also use sign language. This article will highlight philosophical and practical considerations related to practices used to facilitate spoken language development and associated literacy skills for children and adolescents who sign. It will discuss considerations for planning and implementing practices that acknowledge and utilize a student's abilities in sign language, and address how to link these skills to developing and using spoken language. Included will be considerations for children from early childhood through high school with a broad range of auditory access, language, and communication characteristics.
Bradham, Tamala S.; Fonnesbeck, Christopher; Toll, Alice; Hecht, Barbara F.
Purpose: The purpose of the Listening and Spoken Language Data Repository (LSL-DR) was to address a critical need for a systemwide outcome data-monitoring program for the development of listening and spoken language skills in highly specialized educational programs for children with hearing loss highlighted in Goal 3b of the 2007 Joint Committee…
... correct language that has been acquired through listening. The Brewsters suggest an 'immersion experience' by living with speakers of the language. Ellis included several of their tools, such as loop tapes, as being useful in a consultation when learning a language. Others disagree with a purely...
Lee, Sungjin; Noh, Hyungjong; Lee, Jonghoon; Lee, Kyusong; Lee, Gary Geunbae
Although there have been enormous investments into English education all around the world, not many differences have been made to change the English instruction style. Considering the shortcomings for the current teaching-learning methodology, we have been investigating advanced computer-assisted language learning (CALL) systems. This paper aims at summarizing a set of POSTECH approaches including theories, technologies, systems, and field studies and providing relevant pointers. On top of the state-of-the-art technologies of spoken dialog system, a variety of adaptations have been applied to overcome some problems caused by numerous errors and variations naturally produced by non-native speakers. Furthermore, a number of methods have been developed for generating educational feedback that help learners develop to be proficient. Integrating these efforts resulted in intelligent educational robots — Mero and Engkey — and virtual 3D language learning games, Pomy. To verify the effects of our approaches on students' communicative abilities, we have conducted a field study at an elementary school in Korea. The results showed that our CALL approaches can be enjoyable and fruitful activities for students. Although the results of this study bring us a step closer to understanding computer-based education, more studies are needed to consolidate the findings.
Hoog, B.E. de; Langereis, M.C.; Weerdenburg, M. van; Keuning, J.; Knoors, H.; Verhoeven, L.
BACKGROUND: Large variability in individual spoken language outcomes remains a persistent finding in the group of children with cochlear implants (CIs), particularly in their grammatical development. AIMS: In the present study, we examined the extent of delay in lexical and morphosyntactic spoken...
Fitzpatrick, Elizabeth M; Hamel, Candyce; Stevens, Adrienne; Pratt, Misty; Moher, David; Doucet, Suzanne P; Neuss, Deirdre; Bernstein, Anita; Na, Eunjung
Permanent hearing loss affects 1 to 3 per 1000 children and interferes with typical communication development. Early detection through newborn hearing screening and hearing technology provide most children with the option of spoken language acquisition. However, no consensus exists on optimal interventions for spoken language development. To conduct a systematic review of the effectiveness of early sign and oral language intervention compared with oral language intervention only for children with permanent hearing loss. An a priori protocol was developed. Electronic databases (eg, Medline, Embase, CINAHL) from 1995 to June 2013 and gray literature sources were searched. Studies in English and French were included. Two reviewers screened potentially relevant articles. Outcomes of interest were measures of auditory, vocabulary, language, and speech production skills. All data collection and risk of bias assessments were completed and then verified by a second person. Grades of Recommendation, Assessment, Development, and Evaluation (GRADE) was used to judge the strength of evidence. Eleven cohort studies met inclusion criteria, of which 8 included only children with severe to profound hearing loss with cochlear implants. Language development was the most frequently reported outcome. Other reported outcomes included speech and speech perception. Several measures and metrics were reported across studies, and descriptions of interventions were sometimes unclear. Very limited, and hence insufficient, high-quality evidence exists to determine whether sign language in combination with oral language is more effective than oral language therapy alone. More research is needed to supplement the evidence base. Copyright © 2016 by the American Academy of Pediatrics.
Remington, Robert J.
Leaders within the Information Technology (IT) industry are expressing a general concern that the products used to deliver and manage today's communications network capabilities require far too much effort to learn and to use, even by highly skilled and increasingly scarce support personnel. The usability of network management systems must be significantly improved if they are to deliver the performance and quality of service needed to meet the ever-increasing demand for new Internet-based information and services. Fortunately, recent advances in spoken language (SL) interface technologies show promise for significantly improving the usability of most interactive IT applications, including network management systems. The emerging SL interfaces will allow users to communicate with IT applications through words and phrases -- our most familiar form of everyday communication. Recent advancements in SL technologies have resulted in new commercial products that are being operationally deployed at an increasing rate. The present paper describes a project aimed at the application of new SL interface technology for improving the usability of an advanced network management system. It describes several SL interface features that are being incorporated within an existing system with a modern graphical user interface (GUI), including 3-D visualization of network topology and network performance data. The rationale for using these SL interface features to augment existing user interfaces is presented, along with selected task scenarios to provide insight into how an SL interface will simplify the operator's task and enhance overall system usability.
Marshall, C. R.; Jones, A.; Fastelli, A.; Atkinson, J.; Botting, N.; Morgan, G.
Background: Deafness has an adverse impact on children's ability to acquire spoken languages. Signed languages offer a more accessible input for deaf children, but because the vast majority are born to hearing parents who do not sign, their early exposure to sign language is limited. Deaf children as a whole are therefore at high risk of language…
Loucas, Tom; Riches, Nick; Baird, Gillian; Pickles, Andrew; Simonoff, Emily; Chandler, Susie; Charman, Tony
Spoken word recognition, during gating, appears intact in specific language impairment (SLI). This study used gating to investigate the process in adolescents with autism spectrum disorders plus language impairment (ALI). Adolescents with ALI, SLI, and typical language development (TLD), matched on nonverbal IQ listened to gated words that varied…
To assess the effects of data-driven instruction (DDI) on spoken language outcomes of children with cochlear implants and hearing aids. Retrospective, matched-pairs comparison of post-treatment speech/language data of children who did and did not receive DDI. Private, spoken-language preschool for children with hearing loss. Eleven matched pairs of children with cochlear implants who attended the same spoken language preschool. Groups were matched for age of hearing device fitting, time in the program, degree of predevice fitting hearing loss, sex, and age at testing. Daily informal language samples were collected and analyzed over a 2-year period, per preschool protocol. Annual informal and formal spoken language assessments in articulation, vocabulary, and omnibus language were administered at the end of three time intervals: baseline, end of year one, and end of year two. The primary outcome measures were total raw score performance of spontaneous utterance sentence types and syntax element use as measured by the Teacher Assessment of Spoken Language (TASL). In addition, standardized assessments (the Clinical Evaluation of Language Fundamentals--Preschool Version 2 (CELF-P2), the Expressive One-Word Picture Vocabulary Test (EOWPVT), the Receptive One-Word Picture Vocabulary Test (ROWPVT), and the Goldman-Fristoe Test of Articulation 2 (GFTA2)) were also administered and compared with the control group. The DDI group demonstrated significantly higher raw scores on the TASL each year of the study. The DDI group also achieved statistically significant higher scores for total language on the CELF-P2 and expressive vocabulary on the EOWPVT, but not for articulation or receptive vocabulary. Post-hoc assessment revealed that 78% of the students in the DDI group achieved scores in the average range compared with 59% in the control group. The preliminary results of this study support further investigation regarding DDI to investigate whether this method can consistently
Leonard, Matthew K; Ferjan Ramirez, Naja; Torres, Christina; Hatrak, Marla; Mayberry, Rachel I; Halgren, Eric
We combined magnetoencephalography (MEG) and magnetic resonance imaging (MRI) to examine how sensory modality, language type, and language proficiency interact during two fundamental stages of word processing: (1) an early word encoding stage, and (2) a later supramodal lexico-semantic stage. Adult native English speakers who were learning American Sign Language (ASL) performed a semantic task for spoken and written English words, and ASL signs. During the early time window, written words evoked responses in left ventral occipitotemporal cortex, and spoken words in left superior temporal cortex. Signed words evoked activity in right intraparietal sulcus that was marginally greater than for written words. During the later time window, all three types of words showed significant activity in the classical left fronto-temporal language network, the first demonstration of such activity in individuals with so little second language (L2) instruction in sign. In addition, a dissociation between semantic congruity effects and overall MEG response magnitude for ASL responses suggested shallower and more effortful processing, presumably reflecting novice L2 learning. Consistent with previous research on non-dominant language processing in spoken languages, the L2 ASL learners also showed recruitment of right hemisphere and lateral occipital cortex. These results demonstrate that late lexico-semantic processing utilizes a common substrate, independent of modality, and that proficiency effects in sign language are comparable to those in spoken language.
Shaw, Emily P.
This dissertation is an examination of gesture in two game nights: one in spoken English between four hearing friends and another in American Sign Language between four Deaf friends. Analyses of gesture have shown there exists a complex integration of manual gestures with speech. Analyses of sign language have implicated the body as a medium…
de Hoog, Brigitte E; Langereis, Margreet C; van Weerdenburg, Marjolijn; Keuning, Jos; Knoors, Harry; Verhoeven, Ludo
Large variability in individual spoken language outcomes remains a persistent finding in the group of children with cochlear implants (CIs), particularly in their grammatical development. In the present study, we examined the extent of delay in lexical and morphosyntactic spoken language levels of children with CIs as compared to those of a normative sample of age-matched children with normal hearing. Furthermore, the predictive value of auditory and verbal memory factors in the spoken language performance of implanted children was analyzed. Thirty-nine profoundly deaf children with CIs were assessed using a test battery including measures of lexical, grammatical, auditory and verbal memory tests. Furthermore, child-related demographic characteristics were taken into account. The majority of the children with CIs did not reach age-equivalent lexical and morphosyntactic language skills. Multiple linear regression analyses revealed that lexical spoken language performance in children with CIs was best predicted by age at testing, phoneme perception, and auditory word closure. The morphosyntactic language outcomes of the CI group were best predicted by lexicon, auditory word closure, and auditory memory for words. Qualitatively good speech perception skills appear to be crucial for lexical and grammatical development in children with CIs. Furthermore, strongly developed vocabulary skills and verbal memory abilities predict morphosyntactic language skills. Copyright © 2016 Elsevier Ltd. All rights reserved.
Geytenbeek, Joke J; Mokkink, Lidwine B; Knol, Dirk L; Vermeulen, R Jeroen; Oostrom, Kim J
In clinical practice, a variety of diagnostic tests are available to assess a child's comprehension of spoken language. However, none of these tests have been designed specifically for use with children who have severe motor impairments and who experience severe difficulty when using speech to communicate. This article describes the process of investigating the reliability and validity of the Computer-Based Instrument for Low Motor Language Testing (C-BiLLT), which was specifically developed to assess spoken Dutch language comprehension in children with cerebral palsy and complex communication needs. The study included 806 children with typical development, and 87 nonspeaking children with cerebral palsy and complex communication needs, and was designed to provide information on the psychometric qualities of the C-BiLLT. The potential utility of the C-BiLLT as a measure of spoken Dutch language comprehension abilities for children with cerebral palsy and complex communication needs is discussed.
Peters, Sara A; Boiteau, Timothy W; Almor, Amit
The choice and processing of referential expressions depend on the referents' status within the discourse, such that pronouns are generally preferred over full repetitive references when the referent is salient. Here we report two visual-world experiments showing that: (1) in spoken language comprehension, this preference is reflected in delayed fixations to referents mentioned after repeated definite references compared with after pronouns; (2) repeated references are processed differently than new references; (3) long-term semantic memory representations affect the processing of pronouns and repeated names differently. Overall, these results support the role of semantic discourse representation in referential processing and reveal important details about how pronouns and full repeated references are processed in the context of these representations. The results suggest the need for modifications to current theoretical accounts of reference processing such as Discourse Prominence Theory and the Informational Load Hypothesis.
Doshi, Finale; Roy, Nicholas
Spoken language is one of the most intuitive forms of interaction between humans and agents. Unfortunately, agents that interact with people using natural language often experience communication errors and do not correctly understand the user's intentions. Recent systems have successfully used probabilistic models of speech, language and user behaviour to generate robust dialogue performance in the presence of noisy speech recognition and ambiguous language choices, but decisions made using these probabilistic models are still prone to errors owing to the complexity of acquiring and maintaining a complete model of human language and behaviour. In this paper, a decision-theoretic model for human-robot interaction using natural language is described. The algorithm is based on the Partially Observable Markov Decision Process (POMDP), which allows agents to choose actions that are robust not only to uncertainty from noisy or ambiguous speech recognition but also to unknown user models. Like most dialogue systems, a POMDP is defined by a large number of parameters that may be difficult to specify a priori from domain knowledge, and learning these parameters from the user may require an unacceptably long training period. An extension to the POMDP model is described that allows the agent to acquire a linguistic model of the user online, including new vocabulary and word choice preferences. The approach not only avoids a training period of constant questioning as the agent learns, but also allows the agent to actively query for additional information when its uncertainty suggests a high risk of mistakes. The approach is demonstrated both in simulation and on a natural language interaction system for a robotic wheelchair application.
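The core mechanism the abstract describes -- tracking a belief over hidden user intents from noisy speech observations, and asking a clarifying question when uncertainty is high -- can be illustrated with a toy Bayesian filter. Everything below (the two intents, the confusion probabilities, the 0.9 confidence threshold) is an invented sketch, not the authors' wheelchair system.

```python
# Toy belief update for a POMDP-style dialogue manager.
# Hidden state: the user's intent. Observations: noisy speech-recognition
# hypotheses. All intents and probabilities are illustrative assumptions.

INTENTS = ["go_kitchen", "go_bedroom"]

# Hypothetical observation model P(observation | intent).
OBS_MODEL = {
    "go_kitchen": {"kitchen": 0.7, "bedroom": 0.1, "unclear": 0.2},
    "go_bedroom": {"kitchen": 0.1, "bedroom": 0.7, "unclear": 0.2},
}

def update_belief(belief, observation):
    """Bayes rule: b'(s) is proportional to P(o|s) * b(s)."""
    unnorm = {s: OBS_MODEL[s][observation] * belief[s] for s in INTENTS}
    total = sum(unnorm.values())
    return {s: p / total for s, p in unnorm.items()}

def choose_action(belief, confidence=0.9):
    """Act only when confident; otherwise ask a clarifying question."""
    best = max(belief, key=belief.get)
    return best if belief[best] >= confidence else "ask_clarify"

belief = {s: 1.0 / len(INTENTS) for s in INTENTS}  # uniform prior
belief = update_belief(belief, "kitchen")
print(choose_action(belief))   # belief 0.875 < 0.9 -> "ask_clarify"
belief = update_belief(belief, "kitchen")
print(choose_action(belief))   # two consistent observations -> "go_kitchen"
```

The key POMDP idea visible here is that the clarifying question is itself an action whose value depends on the belief state, rather than a hard-coded fallback.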
Hampton, L. H.; Kaiser, A. P.
Background: Although spoken-language deficits are not core to an autism spectrum disorder (ASD) diagnosis, many children with ASD do present with delays in this area. Previous meta-analyses have assessed the effects of intervention on reducing autism symptomatology, but have not determined if intervention improves spoken language. This analysis…
The paper explores similarities and differences in the strategies of structuring information at sentence level in spoken and written language, respectively. In particular, it is concerned with the position of the rheme in the sentence in the two different modalities of language, and with the application and correlation of the end-focus and the end-weight principles. The assumption is that while there is a general tendency in both written and spoken language to place the focus in or close to the final position, owing to the limitations imposed by short-term memory capacity (and possibly by other factors) for the sake of easy processability, it may occasionally be more felicitous in spoken language to place the rhematic element in the initial position or at least close to the beginning of the sentence. The paper aims to identify differences in the function of selected grammatical structures in written and spoken language, respectively, and to point out circumstances under which initial focus is a convenient alternative to the usual end-focus principle.
Rubin, H; Kantor, M; Macnab, J
Experiments examined grammatical judgement and error-identification deficits in relation to expressive language skills and to morphemic errors in writing. Language-disabled subjects did not differ from language-matched controls on judgement, revision, or error identification. Age-matched controls represented more morphemes in elicited writing than either of the other groups, which were equivalent. However, in spontaneous writing, language-disabled subjects made more frequent morphemic errors than age-matched controls, but language-matched subjects did not differ from either group. Proficiency relative to academic experience and oral language status, and the remedial implications, are discussed.
The aim of this study was to describe the application of the Communicative Language Teaching (CLT) method to the teaching of spoken recount. The study examined qualitative data and describes phenomena occurring in the classroom. The data consisted of the students' behaviour and responses while learning spoken recount through the CLT method. The subjects were the 34 students of class X at SMA Negeri 1 Kuaro. Observations and interviews were conducted to collect data on teaching spoken recount through three activities (presentation, role-play, and carrying out procedures). The study found, among other things, that CLT improved the students' speaking ability in recount lessons. Based on the improvement charts, it was concluded that the students' grammar, vocabulary, pronunciation, fluency, and performance all improved, which means their spoken recount performance improved. Had the presentation been placed at the end of the sequence of activities, the students' spoken recount performance would have been better still. In conclusion, implementing the CLT method with its three practices contributed to improving the students' speaking ability in recount lessons and, moreover, led them to construct meaningful communication confidently. Keywords: Communicative Language Teaching (CLT), recount, speaking, student responses
Nicholas, Johanna G.; Geers, Ann E.
Purpose: The major purpose of this study was to provide information about expected spoken language skills of preschool-age children who are deaf and who use a cochlear implant. A goal was to provide "benchmarks" against which those skills could be compared, for a given age at implantation. We also examined whether parent-completed…
Pisoni, David B.
This 21st annual progress report summarizes research activities on speech perception and spoken language processing carried out in the Speech Research Laboratory, Department of Psychology, Indiana University in Bloomington. As with previous reports, the goal is to summarize accomplishments during 1996 and 1997 and make them readily available. Some…
Carrero Pérez, Nubia Patricia
Task based learning (TBL) or Task based learning and teaching (TBLT) is a communicative approach widely applied in settings where English has been taught as a foreign language (EFL). It has been documented as greatly useful to improve learners' communication skills. This research intended to find the effect of tasks on students' spoken interaction…
McDuffie, Andrea; Machalicek, Wendy; Bullard, Lauren; Nelson, Sarah; Mello, Melissa; Tempero-Feigles, Robyn; Castignetti, Nancy; Abbeduto, Leonard
Using a single case design, a parent-mediated spoken-language intervention was delivered to three mothers and their school-aged sons with fragile X syndrome, the leading inherited cause of intellectual disability. The intervention was embedded in the context of shared storytelling using wordless picture books and targeted three empirically derived…
Gràcia, Marta; Vega, Fàtima; Galván-Bovaira, Maria José
Broadly speaking, the teaching of spoken language in Spanish schools has not been approached in a systematic way. Changes in school practices are needed in order to allow all children to become competent speakers and to understand and construct oral texts that are appropriate in different contexts and for different audiences both inside and…
Paul, Rhea; Campbell, Daniel; Gilbert, Kimberly; Tsiouri, Ioanna
Preschoolers with severe autism and minimal speech were assigned either a discrete trial or a naturalistic language treatment, and parents of all participants also received parent responsiveness training. After 12 weeks, both groups showed comparable improvement in number of spoken words produced, on average. Approximately half the children in each group achieved benchmarks for the first stage of functional spoken language development, as defined by Tager-Flusberg et al. (J Speech Lang Hear Res, 52: 643-652, 2009). Analyses of moderators of treatment suggest that joint attention moderates response to both treatments, and children with better receptive language pre-treatment do better with the naturalistic method, while those with lower receptive language show better response to the discrete trial treatment. The implications of these findings are discussed.
Soleymani, Zahra; Keramati, Nasrin; Rohani, Farzaneh; Jalaei, Shohre
To determine verbal intelligence and spoken language of children with phenylketonuria and to study the effect of age at diagnosis and phenylalanine plasma level on these abilities. Cross-sectional. Children with phenylketonuria were recruited from pediatric hospitals in 2012. Normal control subjects were recruited from kindergartens in Tehran. 30 phenylketonuria and 42 control subjects aged 4-6.5 years. Skills were compared between 3 phenylketonuria groups categorized by age at diagnosis/treatment, and between the phenylketonuria and control groups. Scores on Wechsler Preschool and Primary Scale of Intelligence for verbal and total intelligence, and Test of Language Development-Primary, third edition for spoken language, listening, speaking, semantics, syntax, and organization. The performance of control subjects was significantly better than that of early-treated phenylketonuria subjects for all composite quotients from the Test of Language Development and for verbal intelligence.
Fitzpatrick, Elizabeth M; Stevens, Adrienne; Garritty, Chantelle; Moher, David
Permanent childhood hearing loss affects 1 to 3 per 1000 children and frequently disrupts typical spoken language acquisition. Early identification of hearing loss through universal newborn hearing screening and the use of new hearing technologies including cochlear implants make spoken language an option for most children. However, there is no consensus on what constitutes optimal interventions for children when spoken language is the desired outcome. Intervention and educational approaches ranging from oral language only to oral language combined with various forms of sign language have evolved. Parents are therefore faced with important decisions in the first months of their child's life. This article presents the protocol for a systematic review of the effects of using sign language in combination with oral language intervention on spoken language acquisition. Studies addressing early intervention will be selected in which therapy involving oral language intervention and any form of sign language or sign support is used. Comparison groups will include children in early oral language intervention programs without sign support. The primary outcomes of interest to be examined include all measures of auditory, vocabulary, language, speech production, and speech intelligibility skills. We will include randomized controlled trials, controlled clinical trials, and other quasi-experimental designs that include comparator groups as well as prospective and retrospective cohort studies. Case-control, cross-sectional, case series, and case studies will be excluded. Several electronic databases will be searched (for example, MEDLINE, EMBASE, CINAHL, PsycINFO) as well as grey literature and key websites. We anticipate that a narrative synthesis of the evidence will be required. We will carry out meta-analysis for outcomes if clinical similarity, quantity and quality permit quantitative pooling of data. We will conduct subgroup analyses if possible according to severity
Boons, Tinne; Brokx, Jan P L; Dhooge, Ingeborg; Frijns, Johan H M; Peeraer, Louis; Vermeulen, Anneke; Wouters, Jan; van Wieringen, Astrid
Although deaf children with cochlear implants (CIs) are able to develop good language skills, the large variability in outcomes remains a significant concern. The first aim of this study was to evaluate language skills in children with CIs to establish benchmarks. The second aim was to make an estimation of the optimal age at implantation to provide maximal opportunities for the child to achieve good language skills afterward. The third aim was to gain more insight into the causes of variability to set recommendations for optimizing the rehabilitation process of prelingually deaf children with CIs. Receptive and expressive language development of 288 children who received CIs by age five was analyzed in a retrospective multicenter study. Outcome measures were language quotients (LQs) on the Reynell Developmental Language Scales and Schlichting Expressive Language Test at 1, 2, and 3 years after implantation. Independent predictive variables were nine child-related, environmental, and auditory factors. A series of multiple regression analyses determined the amount of variance in expressive and receptive language outcomes attributable to each predictor when controlling for the other variables. Simple linear regressions with age at first fitting and independent samples t tests demonstrated that children implanted before the age of two performed significantly better on all tests than children who were implanted at an older age. The mean LQ was 0.78 with an SD of 0.18. A child with an LQ lower than 0.60 (= 0.78-0.18) within 3 years after implantation was labeled as a weak performer compared with other deaf children implanted before the age of two. Contralateral stimulation with a second CI or a hearing aid and the absence of additional disabilities were related to better language outcomes. The effect of environmental factors, comprising multilingualism, parental involvement, and communication mode increased over time. Three years after implantation, the total multiple
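The weak-performer criterion in the abstract is a simple one-standard-deviation cutoff on the language quotient (LQ): mean 0.78 minus SD 0.18 gives 0.60. A minimal sketch of that rule (the two example scores are invented):

```python
# Weak-performer cutoff as defined in the abstract: an LQ more than
# one standard deviation below the group mean (0.78 - 0.18 = 0.60).
MEAN_LQ = 0.78
SD_LQ = 0.18

def is_weak_performer(lq, mean=MEAN_LQ, sd=SD_LQ):
    """True if the child's LQ falls below mean - 1*SD (here 0.60)."""
    return lq < mean - sd

print(is_weak_performer(0.55))  # True: below the 0.60 cutoff
print(is_weak_performer(0.72))  # False: within 1 SD of the mean
```

Note that the cutoff is relative to the implanted cohort itself (children implanted before age two), not to hearing norms.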
Li, Xiao-qing; Ren, Gui-qin
An event-related brain potentials (ERP) experiment was carried out to investigate how and when accentuation influences temporally selective attention and subsequent semantic processing during on-line spoken language comprehension, and how the effect of accentuation on attention allocation and semantic processing changed with the degree of…
Kovelman, Ioulia; Norton, Elizabeth S; Christodoulou, Joanna A; Gaab, Nadine; Lieberman, Daniel A; Triantafyllou, Christina; Wolf, Maryanne; Whitfield-Gabrieli, Susan; Gabrieli, John D E
Phonological awareness, knowledge that speech is composed of syllables and phonemes, is critical for learning to read. Phonological awareness precedes and predicts successful transition from language to literacy, and weakness in phonological awareness is a leading cause of dyslexia, but the brain basis of phonological awareness for spoken language in children is unknown. We used functional magnetic resonance imaging to identify the neural correlates of phonological awareness using an auditory word-rhyming task in children who were typical readers or who had dyslexia (ages 7-13) and a younger group of kindergarteners (ages 5-6). Typically developing children, but not children with dyslexia, recruited left dorsolateral prefrontal cortex (DLPFC) when making explicit phonological judgments. Kindergarteners, who were matched to the older children with dyslexia on standardized tests of phonological awareness, also recruited left DLPFC. Left DLPFC may play a critical role in the development of phonological awareness for spoken language critical for reading and in the etiology of dyslexia.
Emmorey, Karen; McCullough, Stephen; Mehta, Sonya; Grabowski, Thomas J.
To investigate the impact of sensory-motor systems on the neural organization for language, we conducted an H2(15)O PET study of sign and spoken word production (picture-naming) and an fMRI study of sign and audio-visual spoken language comprehension (detection of a semantically anomalous sentence) with hearing bilinguals who are native users of American Sign Language (ASL) and English. Directly contrasting speech and sign production revealed greater activation in bilateral parietal cortex for signing, while speaking resulted in greater activation in bilateral superior temporal cortex (STC) and right frontal cortex, likely reflecting auditory feedback control. Surprisingly, the language production contrast revealed a relative increase in activation in bilateral occipital cortex for speaking. We speculate that greater activation in visual cortex for speaking may actually reflect cortical attenuation when signing, which functions to distinguish self-produced from externally generated visual input. Directly contrasting speech and sign comprehension revealed greater activation in bilateral STC for speech and greater activation in bilateral occipital-temporal cortex for sign. Sign comprehension, like sign production, engaged bilateral parietal cortex to a greater extent than spoken language. We hypothesize that posterior parietal activation in part reflects processing related to spatial classifier constructions in ASL and that anterior parietal activation may reflect covert imitation that functions as a predictive model during sign comprehension. The conjunction analysis for comprehension revealed that both speech and sign bilaterally engaged the inferior frontal gyrus (with more extensive activation on the left) and the superior temporal sulcus, suggesting an invariant bilateral perisylvian language system. We conclude that surface level differences between sign and spoken languages should not be dismissed and are critical for understanding the neurobiology of language.
Eisenberg, Laurie S; Fisher, Laurel M; Johnson, Karen C; Ganguly, Dianne Hammes; Grace, Thelma; Niparko, John K
We investigated associations between sentence recognition and spoken language for children with cochlear implants (CI) enrolled in the Childhood Development after Cochlear Implantation (CDaCI) study. In a prospective longitudinal study, sentence recognition percent-correct scores and language standard scores were correlated at 48, 60, and 72 months post-CI activation. Six tertiary CI centers in the United States. Children with CIs participating in the CDaCI study. Cochlear implantation. Sentence recognition was assessed using the Hearing In Noise Test for Children (HINT-C) in quiet and at +10, +5, and 0 dB signal-to-noise ratio (S/N). Spoken language was assessed using the Clinical Assessment of Spoken Language (CASL) core composite and the antonyms, paragraph comprehension (syntax comprehension), syntax construction (expression), and pragmatic judgment tests. Positive linear relationships were found between CASL scores and HINT-C sentence scores when the sentences were delivered in quiet and at +10 and +5 dB S/N, but not at 0 dB S/N. At 48 months post-CI, sentence scores at +10 and +5 dB S/N were most strongly associated with CASL antonyms. At 60 and 72 months, sentence recognition in noise was most strongly associated with paragraph comprehension and syntax construction. Children with CIs learn spoken language in a variety of acoustic environments. Despite the observed inconsistent performance in different listening situations and noise-challenged environments, many children with CIs are able to build lexicons and learn the rules of grammar that enable recognition of sentences.
Wilang, Jeffrey Dawala; Sinwongsuwat, Kemtong
This year is designated as Thailand's "English Speaking Year" with the aim of improving the communicative competence of Thais for the upcoming integration of the Association of Southeast Asian Nations (ASEAN) in 2015. The consistent low-level proficiency of the Thais in the English language has led to numerous curriculum revisions and…
le Fevre Jakobsen, Bjarne
with well-edited material, in 1965, to an anchor who hands over to journalists in live feeds from all over the world via satellite, Skype, or mobile telephone, in 2011. The narrative rhythm is faster and sometimes more spontaneous. In this article we will discuss aspects of the use of language and the tempo...
Jednoróg, Katarzyna; Bola, Łukasz; Mostowski, Piotr; Szwed, Marcin; Boguszewski, Paweł M; Marchewka, Artur; Rutkowski, Paweł
In several countries natural sign languages were considered inadequate for education. Instead, new sign-supported systems were created, based on the belief that spoken/written language is grammatically superior. One such system called SJM (system językowo-migowy) preserves the grammatical and lexical structure of spoken Polish and since the 1960s has been extensively employed in schools and on TV. Nevertheless, the Deaf community avoids using SJM for everyday communication, its preferred language being PJM (polski język migowy), a natural sign language, structurally and grammatically independent of spoken Polish and featuring classifier constructions (CCs). Here, for the first time, we compare, using fMRI, the neural bases of natural vs. devised communication systems. Deaf signers were presented with three types of signed sentences (SJM and PJM with/without CCs). Consistent with previous findings, PJM with CCs compared to either SJM or PJM without CCs recruited the parietal lobes. The reverse comparison revealed activation in the anterior temporal lobes, suggesting increased semantic combinatory processes in lexical sign comprehension. Finally, PJM compared with SJM engaged left posterior superior temporal gyrus and anterior temporal lobe, areas crucial for sentence-level speech comprehension. We suggest that activity in these two areas reflects greater processing efficiency for naturally evolved sign language. Copyright © 2015 Elsevier Ltd. All rights reserved.
Maldonado Torres, Sonia Enid
The purpose of this study was to explore the relationships between Latino students' learning styles and their language spoken at home. Results of the study indicated that students who spoke Spanish at home had higher means in the Active Experimentation modality of learning (M = 31.38, SD = 5.70) than students who spoke English (M = 28.08,…
Paladino, Jonathan D; Crooke, Philip S; Brackney, Christopher R; Kaynar, A Murat; Hotchkiss, John R
Medical care commonly involves the apprehension of complex patterns of patient derangements to which the practitioner responds with patterns of interventions, as opposed to single therapeutic maneuvers. This complexity renders the objective assessment of practice patterns using conventional statistical approaches difficult. Combinatorial approaches drawn from symbolic dynamics are used to encode the observed patterns of patient derangement and associated practitioner response patterns as sequences of symbols. Concatenating each patient derangement symbol with the contemporaneous practitioner response symbol creates "words" encoding the simultaneous patient derangement and provider response patterns and yields an observed vocabulary with quantifiable statistical characteristics. A fundamental observation in many natural languages is the existence of a power law relationship between the rank order of word usage and the absolute frequency with which particular words are uttered. We show that population-level patterns of patient-derangement/practitioner-intervention word usage in two entirely unrelated domains of medical care display power law relationships similar to those of natural languages, and that, in one of these domains, power law behavior at the population level reflects power law behavior at the level of individual practitioners. Our results suggest that patterns of medical care can be approached using quantitative linguistic techniques, a finding that has implications for the assessment of expertise, machine learning identification of optimal practices, and construction of bedside decision support tools.
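The rank-frequency power law the abstract refers to is the classic Zipf relationship. A minimal sketch of how such a relationship might be tested on a symbolic "vocabulary" follows; the word counts below are purely illustrative stand-ins, not the authors' clinical data:

```python
from collections import Counter
import numpy as np

def zipf_fit(words):
    """Fit log(freq) = a + s*log(rank); return the Zipf exponent (-s) and R^2."""
    freqs = np.array(sorted(Counter(words).values(), reverse=True), dtype=float)
    ranks = np.arange(1, len(freqs) + 1)
    log_r, log_f = np.log(ranks), np.log(freqs)
    s, a = np.polyfit(log_r, log_f, 1)            # slope, intercept of log-log fit
    residuals = log_f - (a + s * log_r)
    r2 = 1.0 - residuals @ residuals / np.sum((log_f - log_f.mean()) ** 2)
    return -s, r2                                  # Zipf exponent is minus the slope

# Illustrative "words": each concatenates a derangement symbol (D*) with a
# practitioner-response symbol (R*), mirroring the encoding the abstract describes
words = ["D1R1"] * 50 + ["D1R2"] * 25 + ["D2R1"] * 12 + ["D2R2"] * 6 + ["D3R1"] * 3
exponent, r2 = zipf_fit(words)
```

A high R² on the log-log fit indicates approximately power-law behavior; the exponent plays the role of the Zipf slope compared across domains.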
Kasyidi, Fatan; Puji Lestari, Dessi
One of the important aspects of human-to-human communication is understanding the emotion of each party. Interactions between humans and computers continue to develop, especially affective interaction, in which emotion recognition is an important component. This paper presents our extended work on emotion recognition in spoken Indonesian, identifying four main classes of emotion: happy, sad, angry, and contentment, using a combination of acoustic/prosodic features and lexical features. We construct an emotion speech corpus from Indonesian television talk shows, where the situations are as close as possible to natural ones. After constructing the emotion speech corpus, the acoustic/prosodic and lexical features are extracted to train the emotion model. We employ several machine learning algorithms, such as Support Vector Machine (SVM), Naive Bayes, and Random Forest, to find the best model. Experiments on the test data show that the best model, an SVM with an RBF kernel, achieves an F-measure of 0.447 using only acoustic/prosodic features and 0.488 using both acoustic/prosodic and lexical features to recognize the four emotion classes.
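The pipeline described (acoustic/prosodic features fed to an RBF-kernel SVM, scored by F-measure) can be sketched as below. The feature vectors and labels are synthetic stand-ins, not the authors' talk-show corpus, and the four feature columns are only assumed examples of prosodic measures:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)
classes = ["happy", "sad", "angry", "contentment"]

# Synthetic stand-ins for prosodic features (e.g. mean pitch, pitch range,
# energy, speech rate), one Gaussian cluster per emotion class
X = np.vstack([rng.normal(loc=i, scale=0.5, size=(40, 4)) for i in range(4)])
y = np.repeat(classes, 40)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                          stratify=y, random_state=0)
# Standardize features, then fit an SVM with an RBF kernel
model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
model.fit(X_tr, y_tr)
macro_f1 = f1_score(y_te, model.predict(X_te), average="macro")
```

Macro-averaged F1 weights all four emotion classes equally, which matters when class frequencies in a corpus are unbalanced.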
Williams, Joshua T.; Newman, Sharlene D.
A large body of literature has characterized unimodal monolingual and bilingual lexicons and how neighborhood density affects lexical access; however there have been relatively fewer studies that generalize these findings to bimodal (M2) second language (L2) learners of sign languages. The goal of the current study was to investigate parallel…
This article discusses the attitudes and motivations of two Saudi children learning Japanese as a foreign language (hence JFL), a language which is rarely spoken in the country. Studies regarding children's motivation for learning foreign languages that are not widely spread in their contexts in informal settings are scarce. The aim of the study…
Williams, Joshua T.; Darcy, Isabelle; Newman, Sharlene D.
Understanding how language modality (i.e., signed vs. spoken) affects second language outcomes in hearing adults is important both theoretically and pedagogically, as it can determine the specificity of second language (L2) theory and inform how best to teach a language that uses a new modality. The present study investigated which…
Language technologies, in particular machine translation applications, have the potential to help break down linguistic and cultural barriers, presenting an important contribution to the globalization and internationalization of the Portuguese language, by allowing content to be shared 'from' and 'to' this language. This article aims to present the research work developed at the Laboratory of Spoken Language Systems of INESC-ID in the field of machine translation, namely the automated speech translation, the translation of microblogs and the creation of a hybrid machine translation system. We will focus on the creation of the hybrid system, which aims at combining linguistic knowledge, in particular semantico-syntactic knowledge, with statistical knowledge, to increase the level of translation quality.
Sarant, Julia Z; Holt, Colleen M; Dowell, Richard C; Rickards, Field W; Blamey, Peter J
This article documented spoken language outcomes for preschool children with hearing loss and examined the relationships between language abilities and characteristics of children such as degree of hearing loss, cognitive abilities, age at entry to early intervention, and parent involvement in children's intervention programs. Participants were evaluated using a combination of the Child Development Inventory, the Peabody Picture Vocabulary Test, and the Preschool Clinical Evaluation of Language Fundamentals depending on their age at the time of assessment. Maternal education, cognitive ability, and family involvement were also measured. Over half of the children who participated in this study had poor language outcomes overall. No significant differences were found in language outcomes on any of the measures for children who were diagnosed early and those diagnosed later. Multiple regression analyses showed that family participation, degree of hearing loss, and cognitive ability significantly predicted language outcomes and together accounted for almost 60% of the variance in scores. This article highlights the importance of family participation in intervention programs to enable children to achieve optimal language outcomes. Further work may clarify the effects of early diagnosis on language outcomes for preschool children.
Xu, Jiang; Gannon, Patrick J; Emmorey, Karen; Smith, Jason F; Braun, Allen R
Symbolic gestures, such as pantomimes that signify actions (e.g., threading a needle) or emblems that facilitate social transactions (e.g., finger to lips indicating "be quiet"), play an important role in human communication. They are autonomous, can fully take the place of words, and function as complete utterances in their own right. The relationship between these gestures and spoken language remains unclear. We used functional MRI to investigate whether these two forms of communication are processed by the same system in the human brain. Responses to symbolic gestures, to their spoken glosses (expressing the gestures' meaning in English), and to visually and acoustically matched control stimuli were compared in a randomized block design. General Linear Models (GLM) contrasts identified shared and unique activations and functional connectivity analyses delineated regional interactions associated with each condition. Results support a model in which bilateral modality-specific areas in superior and inferior temporal cortices extract salient features from vocal-auditory and gestural-visual stimuli respectively. However, both classes of stimuli activate a common, left-lateralized network of inferior frontal and posterior temporal regions in which symbolic gestures and spoken words may be mapped onto common, corresponding conceptual representations. We suggest that these anterior and posterior perisylvian areas, identified since the mid-19th century as the core of the brain's language system, are not in fact committed to language processing, but may function as a modality-independent semiotic system that plays a broader role in human communication, linking meaning with symbols whether these are words, gestures, images, sounds, or objects.
Norton, Elizabeth S.; Christodoulou, Joanna A.; Gaab, Nadine; Lieberman, Daniel A.; Triantafyllou, Christina; Wolf, Maryanne; Whitfield-Gabrieli, Susan; Gabrieli, John D. E.
Phonological awareness, knowledge that speech is composed of syllables and phonemes, is critical for learning to read. Phonological awareness precedes and predicts successful transition from language to literacy, and weakness in phonological awareness is a leading cause of dyslexia, but the brain basis of phonological awareness for spoken language in children is unknown. We used functional magnetic resonance imaging to identify the neural correlates of phonological awareness using an auditory word-rhyming task in children who were typical readers or who had dyslexia (ages 7–13) and a younger group of kindergarteners (ages 5–6). Typically developing children, but not children with dyslexia, recruited left dorsolateral prefrontal cortex (DLPFC) when making explicit phonological judgments. Kindergarteners, who were matched to the older children with dyslexia on standardized tests of phonological awareness, also recruited left DLPFC. Left DLPFC may play a critical role in the development of phonological awareness for spoken language critical for reading and in the etiology of dyslexia. PMID:21693783
Hirschmüller, Sarah; Egloff, Boris
How do individuals emotionally cope with the imminent real-world salience of mortality? DeWall and Baumeister as well as Kashdan and colleagues previously provided support that an increased use of positive emotion words serves as a way to protect and defend against mortality salience of one's own contemplated death. Although these studies provide important insights into the psychological dynamics of mortality salience, it remains an open question how individuals cope with the immense threat of mortality prior to their imminent actual death. In the present research, we therefore analyzed positivity in the final words spoken immediately before execution by 407 death row inmates in Texas. By using computerized quantitative text analysis as an objective measure of emotional language use, our results showed that the final words contained a significantly higher proportion of positive than negative emotion words. This emotional positivity was significantly higher than (a) positive emotion word usage base rates in spoken and written materials and (b) positive emotional language use with regard to contemplated death and attempted or actual suicide. Additional analyses showed that emotional positivity in final statements was associated with a greater frequency of language use that was indicative of self-references, social orientation, and present-oriented time focus as well as with fewer instances of cognitive-processing, past-oriented, and death-related word use. Taken together, our findings offer new insights into how individuals cope with the imminent real-world salience of mortality.
Pyykkönen, Pirita; Hyönä, Jukka; van Gompel, Roger P G
This study used the visual world eye-tracking method to investigate activation of general world knowledge related to gender-stereotypical role names in online spoken language comprehension in Finnish. The results showed that listeners activated gender stereotypes elaboratively in story contexts where this information was not needed to build coherence. Furthermore, listeners made additional inferences based on gender stereotypes to revise an already established coherence relation. Both results are consistent with mental models theory (e.g., Garnham, 2001). They are harder to explain by the minimalist account (McKoon & Ratcliff, 1992) which suggests that people limit inferences to those needed to establish coherence in discourse.
Brennan-Jones, Christopher G; White, Jo; Rush, Robert W; Law, James
Congenital or early-acquired hearing impairment poses a major barrier to the development of spoken language and communication. Early detection and effective (re)habilitative interventions are essential for parents and families who wish their children to achieve age-appropriate spoken language. Auditory-verbal therapy (AVT) is a (re)habilitative approach aimed at children with hearing impairments. AVT comprises intensive early intervention therapy sessions with a focus on audition, technological management and involvement of the child's caregivers in therapy sessions; it is typically the only therapy approach used to specifically promote avoidance or exclusion of non-auditory facial communication. The primary goal of AVT is to achieve age-appropriate spoken language and for this to be used as the primary or sole method of communication. AVT programmes are expanding throughout the world; however, little evidence can be found on the effectiveness of the intervention. To assess the effectiveness of auditory-verbal therapy (AVT) in developing receptive and expressive spoken language in children who are hearing impaired. CENTRAL, MEDLINE, EMBASE, PsycINFO, CINAHL, speechBITE and eight other databases were searched in March 2013. We also searched two trials registers and three theses repositories, checked reference lists and contacted study authors to identify additional studies. The review considered prospective randomised controlled trials (RCTs) and quasi-randomised studies of children (birth to 18 years) with a significant (≥ 40 dBHL) permanent (congenital or early-acquired) hearing impairment, undergoing a programme of auditory-verbal therapy, administered by a certified auditory-verbal therapist for a period of at least six months. Comparison groups considered for inclusion were waiting list and treatment as usual controls. Two review authors independently assessed titles and abstracts identified from the searches and obtained full-text versions of all potentially
McDuffie, Andrea; Machalicek, Wendy; Bullard, Lauren; Nelson, Sarah; Mello, Melissa; Tempero-Feigles, Robyn; Castignetti, Nancy; Abbeduto, Leonard
Using a single case design, a parent-mediated spoken language intervention was delivered to three mothers and their school-aged sons with fragile X syndrome, the leading inherited cause of intellectual disability. The intervention was embedded in the context of shared story-telling using wordless picture books and targeted three empirically-derived language support strategies. All sessions were implemented via distance video-teleconferencing. Parent education sessions were followed by 12 weekly clinician coaching and feedback sessions. Data was collected weekly during independent homework and clinician observation sessions. Relative to baseline, mothers increased their use of targeted strategies and dyads increased the frequency and duration of story-related talking. Generalized effects of the intervention on lexical diversity and grammatical complexity were observed. Implications for practice are discussed. PMID:27119214
Christine E Potter
Five- and six-year-old children (n = 160) participated in three studies designed to explore language discrimination. After an initial exposure period (during which children heard either an unfamiliar language, a familiar language, or music), children performed an ABX discrimination task involving two unfamiliar languages that were either similar (Spanish vs. Italian) or different (Spanish vs. Mandarin). On each trial, participants heard two sentences spoken by two individuals, each spoken in an unfamiliar language. The pair was followed by a third sentence spoken in one of the two languages. Participants were asked to judge whether the third sentence was spoken by the first speaker or the second speaker. Across studies, both the difficulty of the discrimination contrast and the relation between exposure and test materials affected children's performance. In particular, language discrimination performance was facilitated by an initial exposure to a different unfamiliar language, suggesting that experience can help tune children's attention to the relevant features of novel languages.
Deng, Zhizhou; Chandrasekaran, Bharath; Wang, Suiping; Wong, Patrick C M
A major challenge in language learning studies is to identify objective, pre-training predictors of success. Variation in the low-frequency fluctuations (LFFs) of spontaneous brain activity measured by resting-state functional magnetic resonance imaging (RS-fMRI) has been found to reflect individual differences in cognitive measures. In the present study, we aimed to investigate the extent to which initial spontaneous brain activity is related to individual differences in spoken language learning. We acquired RS-fMRI data and subsequently trained participants on a sound-to-word learning paradigm in which they learned to use foreign pitch patterns (from Mandarin Chinese) to signal word meaning. We performed amplitude of spontaneous low-frequency fluctuation (ALFF) analysis, graph theory-based analysis, and independent component analysis (ICA) to identify functional components of the LFFs in the resting-state. First, we examined the ALFF as a regional measure and showed that regional ALFFs in the left superior temporal gyrus were positively correlated with learning performance, whereas ALFFs in the default mode network (DMN) regions were negatively correlated with learning performance. Furthermore, the graph theory-based analysis indicated that the degree and local efficiency of the left superior temporal gyrus were positively correlated with learning performance. Finally, the default mode network and several task-positive resting-state networks (RSNs) were identified via the ICA. The "competition" (i.e., negative correlation) between the DMN and the dorsal attention network was negatively correlated with learning performance. Our results demonstrate that a) spontaneous brain activity can predict future language learning outcome without prior hypotheses (e.g., selection of regions of interest--ROIs) and b) both regional dynamics and network-level interactions in the resting brain can account for individual differences in future spoken language learning success
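The graph-theoretic step described above (nodal degree and local efficiency of regions in a functional-connectivity graph) might be computed along the lines below. The "connectivity" matrix here is random rather than resting-state data, and the 0.5 threshold is an assumed example value, not the authors' choice:

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(1)
n_rois = 10

# Synthetic symmetric matrix standing in for ROI-to-ROI correlations
corr = rng.uniform(0, 1, (n_rois, n_rois))
corr = (corr + corr.T) / 2
np.fill_diagonal(corr, 0)

# Threshold to a binary graph, as is common in RS-fMRI graph analyses
G = nx.from_numpy_array((corr > 0.5).astype(int))

degree = dict(G.degree())

def node_local_efficiency(G, v):
    """Local efficiency of node v: efficiency of its neighbourhood subgraph."""
    nbrs = list(G.neighbors(v))
    if len(nbrs) < 2:
        return 0.0
    return nx.global_efficiency(G.subgraph(nbrs))

local_eff = {v: node_local_efficiency(G, v) for v in G}
```

In a study like the one summarized, each node's degree and local efficiency would then be correlated with behavioral learning scores across participants.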
Feghali, Maksoud N.
This book teaches the Arabic Lebanese dialect through topics such as food, clothing, transportation, and leisure activities. It also provides background material on the Arab World in general and the region where Lebanese Arabic is spoken or understood--Lebanon, Syria, Jordan, Palestine--in particular. This language guide is based on the phonetic…
Ihle, Andreas; Oris, Michel; Fagot, Delphine; Kliegel, Matthias
Findings on the association between speaking different languages and cognitive functioning in old age are so far inconsistent and inconclusive. The present study therefore set out to investigate the relation of the number of languages spoken to cognitive performance, and its interplay with several other markers of cognitive reserve, in a large sample of older adults. Two thousand eight hundred and twelve older adults served as the sample for the present study. Psychometric tests of verbal abilities, basic processing speed, and cognitive flexibility were administered. In addition, individuals were interviewed about the different languages they spoke on a regular basis, their educational attainment, their occupation, and their engagement in different activities throughout adulthood. A higher number of languages regularly spoken was significantly associated with better performance in verbal abilities and processing speed, but was unrelated to cognitive flexibility. Regression analyses showed that the number of languages spoken predicted cognitive performance over and above leisure activities and the physical demands of the job/gainful activity as additional predictors, but not over and above educational attainment or the cognitive level of the job. There was no significant moderation of the association between the number of languages spoken and cognitive performance in any model. The present data suggest that speaking different languages on a regular basis may additionally contribute to the build-up of cognitive reserve in old age. Yet this may not be universal, but linked to verbal abilities and basic cognitive processing speed. Moreover, it may depend on the other types of cognitive stimulation that individuals also engaged in during their life course.
Gautreau, Aurore; Hoen, Michel; Meunier, Fanny
This study aimed to characterize the linguistic interference that occurs during speech-in-speech comprehension by combining offline and online measures, which included an intelligibility task (at a -5 dB Signal-to-Noise Ratio) and 2 lexical decision tasks (at a -5 dB and 0 dB SNR) that were performed with French spoken target words. In these 3 experiments we always compared the masking effects of speech backgrounds (i.e., 4-talker babble) that were produced in the same language as the target language (i.e., French) or in unknown foreign languages (i.e., Irish and Italian) to the masking effects of corresponding non-speech backgrounds (i.e., speech-derived fluctuating noise). The fluctuating noise contained similar spectro-temporal information as babble but lacked linguistic information. At -5 dB SNR, both tasks revealed significantly divergent results between the unknown languages (i.e., Irish and Italian) with Italian and French hindering French target word identification to a similar extent, whereas Irish led to significantly better performances on these tasks. By comparing the performances obtained with speech and fluctuating noise backgrounds, we were able to evaluate the effect of each language. The intelligibility task showed a significant difference between babble and fluctuating noise for French, Irish and Italian, suggesting acoustic and linguistic effects for each language. However, the lexical decision task, which reduces the effect of post-lexical interference, appeared to be more accurate, as it only revealed a linguistic effect for French. Thus, although French and Italian had equivalent masking effects on French word identification, the nature of their interference was different. This finding suggests that the differences observed between the masking effects of Italian and Irish can be explained at an acoustic level but not at a linguistic level.
Ramos-Sanchez, Jose Luis; Cuadrado-Gordillo, Isabel
This article presents the results of a quasi-experimental study of whether there exists a causal relationship between spoken language and the initial learning of reading/writing. The subjects were two matched samples each of 24 preschool pupils (boys and girls), controlling for certain relevant external variables. It was found that there was no…
Lund, Emily; Douglas, W. Michael; Schuele, C. Melanie
Children with hearing loss who are developing spoken language tend to lag behind children with normal hearing in vocabulary knowledge. Thus, researchers must validate instructional practices that lead to improved vocabulary outcomes for children with hearing loss. The purpose of this study was to investigate how semantic richness of instruction…
The research and development of the Slovak spoken language dialogue system (SLDS) is described in the paper. The dialogue system is based on the DARPA Communicator architecture and was developed between July 2003 and June 2006. It consists of the Galaxy hub together with telephony, automatic speech recognition, text-to-speech, backend, transport, VoiceXML dialogue management, and automatic evaluation modules. The dialogue system is demonstrated and tested via two pilot applications, "Weather Forecast" and "Public Transport Timetables". The required information is retrieved from Internet resources in multi-user mode through the PSTN, ISDN, GSM and/or VoIP networks. Some further development has been carried out since 2006, which is also described in the paper.
Choroomi, S; Curotta, J
To review foreign body aspiration cases encountered over a 10-year period in a tertiary paediatric hospital, and to assess correlation between foreign body type and language spoken at home. Retrospective chart review of all children undergoing direct laryngobronchoscopy for foreign body aspiration over a 10-year period. Age, sex, foreign body type, complications, hospital stay and home language were analysed. At direct laryngobronchoscopy, 132 children had foreign body aspiration (male:female ratio 1.31:1; mean age 32 months (2.67 years)). Mean hospital stay was 2.0 days. Foreign bodies most commonly comprised food matter (53/132; 40.1 per cent), followed by non-food matter (44/132; 33.33 per cent), a negative endoscopy (11/132; 8.33 per cent) and unknown composition (24/132; 18.2 per cent). Most parents spoke English (92/132, 69.7 per cent; vs non-English-speaking 40/132, 30.3 per cent), but non-English-speaking patients had disproportionately more food foreign bodies, and significantly more nut aspirations (p = 0.0065). Results constitute level 2b evidence. Patients from non-English speaking backgrounds had a significantly higher incidence of food (particularly nut) aspiration. Awareness-raising and public education is needed in relevant communities to prevent certain foods, particularly nuts, being given to children too young to chew and swallow them adequately.
Courtin, Cyril; Jobard, Gael; Vigneau, Mathieu; Beaucousin, Virginie; Razafimandimby, Annick; Hervé, Pierre-Yves; Mellet, Emmanuel; Zago, Laure; Petit, Laurent; Mazoyer, Bernard; Tzourio-Mazoyer, Nathalie
We used functional magnetic resonance imaging to investigate the areas activated by signed narratives in non-signing subjects naïve to sign language (SL) and compared this to the activation obtained when they heard speech in their mother tongue. A subset of the left hemisphere (LH) language areas activated when participants watched an audio-visual narrative in their mother tongue was also activated when they observed a signed narrative. The inferior frontal (IFG) and precentral (Prec) gyri, the posterior parts of the planum temporale (pPT) and of the superior temporal sulcus (pSTS), and the occipito-temporal junction (OTJ) were activated by both languages. The activity of these regions was not related to the presence of communicative intent, because no such changes were observed when the non-signers watched a muted video of a spoken narrative. Recruitment was also not triggered by the linguistic structure of SL, because these areas, except pPT, were not activated when subjects listened to an unknown spoken language. The comparison of brain reactivity for spoken and sign languages shows that SL has a special status in the brain compared to speech; in contrast to an unknown oral language, the neural correlates of SL overlap LH speech comprehension areas in non-signers. These results support the idea that strong relationships exist between areas involved in human action observation and language, suggesting that the observation of hand gestures has shaped the lexico-semantic language areas, as proposed by the motor theory of speech. As a whole, the present results support the theory of a gestural origin of language. Copyright © 2010 Elsevier Inc. All rights reserved.
Methods. Qualitative individual interviews were conducted with seven doctors who had successfully learned the language of their patients, to determine their experiences and how they had succeeded. Results. All seven doctors used a combination of methods to learn the language. Listening was found to be very important, ...
Harris, Michael S; Kronenberger, William G; Gao, Sujuan; Hoen, Helena M; Miyamoto, Richard T; Pisoni, David B
Cochlear implants (CIs) help many deaf children achieve near-normal speech and language (S/L) milestones. Nevertheless, high levels of unexplained variability in S/L outcomes are limiting factors in improving the effectiveness of CIs in deaf children. The objective of this study was to longitudinally assess the role of verbal short-term memory (STM) and working memory (WM) capacity as a progress-limiting source of variability in S/L outcomes after CI in children. Longitudinal study of 66 children with CIs for prelingual severe-to-profound hearing loss. Outcome measures included performance on digit span forward (DSF), digit span backward (DSB), and four conventional S/L measures that examined spoken-word recognition (Phonetically Balanced Kindergarten word test), receptive vocabulary (Peabody Picture Vocabulary Test), sentence-recognition skills (Hearing in Noise Test), and receptive and expressive language functioning (Clinical Evaluation of Language Fundamentals Fourth Edition Core Language Score; CELF). Growth curves for DSF and DSB in the CI sample over time were comparable in slope, but consistently lagged in magnitude relative to norms for normal-hearing peers of the same age. For DSF and DSB, 50.5% and 44.0%, respectively, of the CI sample scored more than 1 SD below the normative mean for raw scores across all ages. The first (baseline) DSF score significantly predicted all endpoint scores for the four S/L measures, and DSF slope (growth) over time predicted CELF scores. DSF baseline and slope accounted for an additional 13 to 31% of variance in S/L scores after controlling for conventional predictor variables such as chronological age at time of testing, age at time of implantation, communication mode (auditory-oral communication versus total communication), and maternal education. Only DSB baseline scores predicted endpoint language scores on the Peabody Picture Vocabulary Test and CELF. DSB slopes were not significantly related to any endpoint S/L measures.
Albadr, Musatafa Abbas Abbood; Tiun, Sabrina; Al-Dhief, Fahad Taha; Sammour, Mahmoud A M
Spoken Language Identification (LID) is the process of determining and classifying the natural language of a given utterance or dataset. Typically, the data must be processed to extract useful features before LID can be performed. Feature extraction for LID is, based on the literature, a mature process: standard features have been developed from Mel-Frequency Cepstral Coefficients (MFCC) and Shifted Delta Cepstral (SDC) coefficients through the Gaussian Mixture Model (GMM) to the i-vector based framework. However, the process of learning from the extracted features can still be improved (i.e. optimised) to capture all of the knowledge embedded in those features. The Extreme Learning Machine (ELM) is an effective model for classification and regression analysis and is particularly useful for training a single-hidden-layer neural network. Nevertheless, its learning process is not fully optimised, because the weights between the input and hidden layer are selected at random. In this study, the ELM is adopted as the learning model for LID on standard extracted features. One ELM optimisation approach, the Self-Adjusting Extreme Learning Machine (SA-ELM), is selected as the benchmark and improved by altering the selection phase of its optimisation process: selection is performed using a combination of the Split-Ratio and K-Tournament methods, and the improved SA-ELM is named the Enhanced Self-Adjusting Extreme Learning Machine (ESA-ELM). Results are reported for LID on datasets built from eight different languages. They show a clear performance advantage for ESA-ELM LID over SA-ELM LID, with accuracies of 96.25% and 95.00%, respectively.
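As background to the abstract above, a minimal sketch of the basic ELM training step it builds on: input-to-hidden weights are drawn at random and left fixed, and only the output weights are fitted, in closed form, by a regularised least-squares solve on the hidden activations. The toy data and dimensions are assumptions for illustration; this does not implement SA-ELM/ESA-ELM optimisation or MFCC/SDC feature extraction.

```python
import math, random

random.seed(0)

def train_elm(X, y, hidden=20, ridge=1e-3):
    """Basic Extreme Learning Machine for binary labels in {0, 1}:
    random input weights stay fixed; output weights come from a
    ridge-regularised least-squares solve on the hidden activations."""
    d = len(X[0])
    W = [[random.uniform(-1, 1) for _ in range(d)] for _ in range(hidden)]
    bias = [random.uniform(-1, 1) for _ in range(hidden)]

    def hidden_layer(x):
        return [1.0 / (1.0 + math.exp(-(sum(w * xi for w, xi in zip(row, x)) + b)))
                for row, b in zip(W, bias)]

    H = [hidden_layer(x) for x in X]
    # Solve (H'H + ridge*I) beta = H'y by Gaussian elimination.
    A = [[sum(H[i][r] * H[i][c] for i in range(len(H))) + (ridge if r == c else 0.0)
          for c in range(hidden)] for r in range(hidden)]
    rhs = [sum(H[i][r] * y[i] for i in range(len(H))) for r in range(hidden)]
    for col in range(hidden):
        piv = max(range(col, hidden), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        rhs[col], rhs[piv] = rhs[piv], rhs[col]
        for r in range(col + 1, hidden):
            f = A[r][col] / A[col][col]
            for c in range(col, hidden):
                A[r][c] -= f * A[col][c]
            rhs[r] -= f * rhs[col]
    beta = [0.0] * hidden
    for r in range(hidden - 1, -1, -1):
        beta[r] = (rhs[r] - sum(A[r][c] * beta[c]
                                for c in range(r + 1, hidden))) / A[r][r]

    def predict(x):
        score = sum(h * b for h, b in zip(hidden_layer(x), beta))
        return 1 if score > 0.5 else 0

    return predict

# Toy feature vectors standing in for acoustic LID features; XOR-style
# labels, so the problem is not linearly separable in the input space.
X = [[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]] * 5
y = [0, 1, 1, 0] * 5
predict = train_elm(X, y)
accuracy = sum(predict(x) == t for x, t in zip(X, y)) / len(X)
```

The closed-form solve is what makes ELM training fast; the optimisation work in SA-ELM/ESA-ELM targets the randomly chosen input weights that this basic version simply leaves as drawn.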
Werfel, Krystal L.
Purpose: The purpose of this study was to compare change in emergent literacy skills of preschool children with and without hearing loss over a 6-month period. Method: Participants included 19 children with hearing loss and 14 children with normal hearing. Children with hearing loss used amplification and spoken language. Participants completed…
Jansen, Stefanie; Wesselmeier, Hendrik; de Ruiter, Jan P; Mueller, Horst M
Although research on turn-taking in spoken dialogue is now abundant, a typical EEG signature associated with the anticipation of turn-ends has not yet been identified. The purpose of this study was to examine whether readiness potentials (RPs) can be used to study the anticipation of turn-ends, using them in a motoric finger-movement and an articulatory-movement task. The goal was to determine the onset of early, preconscious turn-end anticipation processes through the simultaneous registration of EEG measures (RP) and behavioural measures (anticipation timing accuracy, ATA). For the behavioural measures, we used both a button-press and a brief verbal response ("yes"). In the experiment, 30 subjects were asked to listen to auditorily presented utterances and press a button or utter the brief verbal response when they expected the end of the turn. During the task, a 32-channel EEG signal was recorded. The results showed that the RPs during verbal and button-press responses developed similarly and had an almost identical time course: the RP signals started to develop 1170 vs. 1190 ms before the behavioural responses. Until now, turn-end anticipation has usually been studied using behavioural methods, for instance by measuring anticipation timing accuracy, a measurement that reflects conscious behavioural processes and is insensitive to preconscious anticipation processes. The similar time course of the recorded RP signals for verbal and button-press responses provides evidence for the validity of using RPs as an online marker of response preparation in turn-taking and spoken-dialogue research.
Auditory-Verbal Therapy (AVT) is an effective early intervention for children with hearing loss. The Hear and Say Centre in Brisbane offers AVT sessions to families soon after diagnosis, and about 20% of the families in Queensland participate via PC-based videoconferencing (Skype). Parent and therapist satisfaction with the telemedicine sessions was examined by questionnaire. All families had been enrolled in the telemedicine AVT programme for at least six months. Their average distance from the Hear and Say Centre was 600 km. Questionnaires were completed by 13 of the 17 parents and all five therapists. Parents and therapists generally expressed high satisfaction in the majority of the sections of the questionnaire, e.g. most rated the audio and video quality as good or excellent. All parents felt comfortable or as comfortable as face-to-face when discussing matters with the therapist online, and were satisfied or as satisfied as face-to-face with their level and their child's level of interaction/rapport with the therapist. All therapists were satisfied or very satisfied with the telemedicine AVT programme. The results demonstrate the potential of telemedicine service delivery for teaching listening and spoken language to children with hearing loss in rural and remote areas of Australia.
Purpose: The current study sought to investigate the separate effects of dysarthria and cognitive status on global speech timing, speech hesitation, and linguistic complexity characteristics, and how these speech behaviors bear on listener impressions, for three connected speech tasks presumed to differ in cognitive-linguistic demand, in four carefully defined speaker groups: (1) MS with cognitive deficits (MSCI), (2) MS with clinically diagnosed dysarthria and intact cognition (MSDYS), (3) MS without dysarthria or cognitive deficits (MS), and (4) healthy talkers (CON). The relationship between neuropsychological test scores and speech-language production and perceptual variables for speakers with cognitive deficits was also explored. Methods: 48 speakers participated, including 36 individuals reporting a neurological diagnosis of MS and 12 healthy talkers. The three MS groups and the control group each contained 12 speakers (8 women and 4 men). Cognitive function was quantified using standard clinical tests of memory, information processing speed, and executive function; a standard z-score of ≤ -1.50 indicated a deficit in a given cognitive domain. Three certified speech-language pathologists determined the clinical diagnosis of dysarthria for speakers with MS. The experimental speech tasks comprised audio-recordings of an oral reading of the Grandfather passage and two spontaneous speech samples in the form of Familiar and Unfamiliar descriptive discourse. Various measures of spoken language were of interest. Suprasegmental acoustic measures included speech rate and articulatory rate. Linguistic speech hesitation measures included pause frequency (i.e., silent and filled pauses), mean silent pause duration, grammatical appropriateness of pauses, and interjection frequency. For the two discourse samples, three standard measures of language complexity were obtained: subordination index, inter-sentence cohesion adequacy, and lexical diversity. Ten listeners
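The suprasegmental and hesitation measures named above (speech rate, articulatory rate, silent-pause frequency and duration) can be computed from a word-aligned transcript. A minimal sketch, assuming word-level (onset, offset) timestamps in seconds and a 250 ms silent-pause cutoff; both the alignment format and the threshold are illustrative assumptions, not the study's protocol.

```python
def speech_timing_measures(words, pause_threshold=0.25):
    """Compute global speech-timing and hesitation measures from a
    word-aligned transcript: a list of (word, onset_s, offset_s) tuples.
    Speech rate counts total elapsed time; articulation rate counts
    only phonation time, so it is always at least as high."""
    total_time = words[-1][2] - words[0][1]      # first onset to last offset
    phonation_time = sum(off - on for _, on, off in words)
    pauses = [nxt[1] - cur[2]                    # gap between adjacent words
              for cur, nxt in zip(words, words[1:])
              if nxt[1] - cur[2] >= pause_threshold]
    return {
        "speech_rate_wpm": 60.0 * len(words) / total_time,
        "articulation_rate_wpm": 60.0 * len(words) / phonation_time,
        "silent_pause_count": len(pauses),
        "mean_pause_s": sum(pauses) / len(pauses) if pauses else 0.0,
    }

# Hypothetical aligned sample: 6 words over 3.6 s with two silent pauses.
sample = [("the", 0.00, 0.20), ("grandfather", 0.25, 0.90),
          ("likes", 1.40, 1.70), ("to", 1.75, 1.85),
          ("walk", 2.60, 3.00), ("slowly", 3.05, 3.60)]
m = speech_timing_measures(sample)
```

The gap between speech rate and articulation rate is itself informative: a large gap means much of the speaking time was spent pausing rather than articulating.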
Petkov, Christopher I; Jarvis, Erich D
Vocal learners such as humans and songbirds can learn to produce elaborate patterns of structurally organized vocalizations, whereas many other vertebrates such as non-human primates and most other bird groups either cannot or do so to a very limited degree. To explain the similarities among humans and vocal-learning birds and the differences with other species, various theories have been proposed. One set of theories are motor theories, which underscore the role of the motor system as an evolutionary substrate for vocal production learning. For instance, the motor theory of speech and song perception proposes enhanced auditory perceptual learning of speech in humans and song in birds, which suggests a considerable level of neurobiological specialization. Another, a motor theory of vocal learning origin, proposes that the brain pathways that control the learning and production of song and speech were derived from adjacent motor brain pathways. Another set of theories are cognitive theories, which address the interface between cognition and the auditory-vocal domains to support language learning in humans. Here we critically review the behavioral and neurobiological evidence for parallels and differences between the so-called vocal learners and vocal non-learners in the context of motor and cognitive theories. In doing so, we note that behaviorally vocal-production learning abilities are more distributed than categorical, as are the auditory-learning abilities of animals. We propose testable hypotheses on the extent of the specializations and cross-species correspondences suggested by motor and cognitive theories. We believe that determining how spoken language evolved is likely to become clearer with concerted efforts in testing comparative data from many non-human animal species.
Scott, C M; Windsor, J
Language performance in naturalistic contexts can be characterized by general measures of productivity, fluency, lexical diversity, and grammatical complexity and accuracy. The use of such measures as indices of language impairment in older children is open to questions of method and interpretation. This study evaluated the extent to which 10 general language performance measures (GLPM) differentiated school-age children with language learning disabilities (LLD) from chronological-age (CA) and language-age (LA) peers. Children produced both spoken and written summaries of two educational videotapes that provided models of either narrative or expository (informational) discourse. Productivity measures, including total T-units, total words, and words per minute, were significantly lower for children with LLD than for CA children. Fluency (percent T-units with mazes) and lexical diversity (number of different words) measures were similar for all children. Grammatical complexity as measured by words per T-unit was significantly lower for LLD children. However, there was no difference among groups for clauses per T-unit. The only measure that distinguished children with LLD from both CA and LA peers was the extent of grammatical error. Effects of discourse genre and modality were consistent across groups. Compared to narratives, expository summaries were shorter, less fluent (spoken versions), more complex (words per T-unit), and more error prone. Written summaries were shorter and had more errors than spoken versions. For many LLD and LA children, expository writing was exceedingly difficult. Implications for accounts of language impairment in older children are discussed.
Leni Amalia Suek
The maintenance of the community languages of migrant students is heavily determined by language use and language attitudes. The dominance of a majority language over a community language shapes migrant students' attitudes toward their native languages: when they perceive their native language as unimportant, they use it less frequently, even in the home domain. Solutions to the problem of maintaining community languages should therefore address language use and attitudes in the two domains where they chiefly develop, school and family. Hence, the valorization of community languages should be promoted not only in the family but also in the school domain. Several programs, such as community language schools and community language programs, can give migrant students opportunities to practice and use their native languages. Since educational resources such as class sessions, teachers, and government support are limited, the family plays a significant role in stimulating positive attitudes toward the community language and in developing the use of the native language.
Marchman, Virginia A; Fernald, Anne; Hurtado, Nereyda
Research using online comprehension measures with monolingual children shows that the speed and accuracy of spoken word recognition are correlated with lexical development. Here we examined speech-processing efficiency in relation to vocabulary development in bilingual children learning both Spanish and English (n = 26; aged 2;6). Between-language associations were weak: vocabulary size in Spanish was uncorrelated with vocabulary in English, and children's facility in online comprehension in Spanish was unrelated to their facility in English. Instead, efficiency of online processing in one language was significantly related to vocabulary size in that language, after controlling for processing speed and vocabulary size in the other language. These links between efficiency of lexical access and vocabulary knowledge in bilinguals parallel those previously reported for Spanish and English monolinguals, suggesting that children's ability to abstract information from the input in building a working lexicon relates fundamentally to mechanisms underlying the construction of language.
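"Controlling for" a third variable, as in the analysis above, amounts to a partial correlation: regress each variable of interest on the control variable and correlate the residuals. A minimal sketch on synthetic data; the variable roles and numbers are illustrative (here the raw association is entirely carried by the control variable, the opposite pattern to the study's finding, but the same technique).

```python
import random

random.seed(2)

def pearson(x, y):
    """Pearson correlation of two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

def residuals(y, z):
    """Residuals of the simple regression of y on z (with intercept)."""
    n = len(y)
    mz, my = sum(z) / n, sum(y) / n
    slope = (sum((a - mz) * (b - my) for a, b in zip(z, y))
             / sum((a - mz) ** 2 for a in z))
    intercept = my - slope * mz
    return [b - (intercept + slope * a) for a, b in zip(z, y)]

def partial_corr(x, y, z):
    """Correlation between x and y after removing the linear effect of z."""
    return pearson(residuals(x, z), residuals(y, z))

# Synthetic case: a shared factor z (say, processing speed) drives both
# scores, so x and y correlate only through z.
n = 200
z = [random.gauss(0, 1) for _ in range(n)]
x = [zi + random.gauss(0, 0.3) for zi in z]
y = [zi + random.gauss(0, 0.3) for zi in z]

raw = pearson(x, y)              # strong, but carried by z
adjusted = partial_corr(x, y, z)  # near zero once z is removed
```

When a correlation survives this adjustment, as the within-language links in the study did, the association is not explained by the control variable.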
Adank, P.M.; Noordzij, M.L.; Hagoort, P.
A repetition–suppression functional magnetic resonance imaging paradigm was used to explore the neuroanatomical substrates of processing two types of acoustic variation—speaker and accent—during spoken sentence comprehension. Recordings were made for two speakers and two accents: Standard Dutch and a
João Mendonça Correia
Spoken word recognition and production require fast transformations between acoustic, phonological and conceptual neural representations. Bilinguals perform these transformations in native and non-native languages, deriving unified semantic concepts from equivalent, but acoustically different, words. Here we exploit this capacity of bilinguals to investigate input-invariant semantic representations in the brain. We acquired EEG data while Dutch subjects, highly proficient in English, listened to four monosyllabic and acoustically distinct animal words in both languages (e.g. ‘paard’–‘horse’). Multivariate pattern analysis (MVPA) was applied to identify EEG response patterns that discriminate between individual words within one language (within-language discrimination) and generalize meaning across the two languages (across-language generalization). Furthermore, employing two EEG feature-selection approaches, we assessed the contribution of temporal and oscillatory EEG features to our classification results. MVPA revealed that within-language discrimination was possible in a broad time window (~50-620 ms after word onset), probably reflecting acoustic-phonetic and semantic-conceptual differences between the words. Most interestingly, significant across-language generalization was possible around 550-600 ms, suggesting the activation of common semantic-conceptual representations from the Dutch and English nouns. Both types of classification showed a strong contribution of oscillations below 12 Hz, indicating the importance of low-frequency oscillations in the neural representation of individual words and concepts. This study demonstrates the feasibility of MVPA to decode individual spoken words from EEG responses and to assess the spectro-temporal dynamics of their language-invariant semantic-conceptual representations. We discuss how this method and these results could be relevant to track the neural mechanisms underlying conceptual encoding in
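The two MVPA schemes described above, within-language discrimination and across-language generalization, can be sketched with a simple nearest-centroid classifier on synthetic "response patterns" that mix a shared semantic component with language-specific acoustic components. Everything here (dimensions, noise levels, class structure) is an illustrative assumption, not the study's EEG pipeline.

```python
import random

random.seed(3)

def centroid_classify(train, test):
    """Nearest-centroid classifier. train/test are lists of
    (feature_vector, label) pairs; returns accuracy on test."""
    sums, counts = {}, {}
    for vec, lab in train:
        s = sums.setdefault(lab, [0.0] * len(vec))
        for i, v in enumerate(vec):
            s[i] += v
        counts[lab] = counts.get(lab, 0) + 1
    cents = {lab: [v / counts[lab] for v in s] for lab, s in sums.items()}

    def nearest(vec):
        return min(cents, key=lambda lab: sum((a - b) ** 2
                                              for a, b in zip(vec, cents[lab])))

    return sum(nearest(vec) == lab for vec, lab in test) / len(test)

# Synthetic "response patterns": each word evokes a shared semantic
# component (identical for translation equivalents such as 'horse' and
# 'paard') plus a language-specific acoustic component and trial noise.
dim, n_concepts, n_trials = 16, 4, 30
semantic = {c: [random.gauss(0, 1) for _ in range(dim)] for c in range(n_concepts)}
acoustic = {(c, lang): [random.gauss(0, 1) for _ in range(dim)]
            for c in range(n_concepts) for lang in ("nl", "en")}

def trial(c, lang):
    vec = [s + a + random.gauss(0, 0.8)
           for s, a in zip(semantic[c], acoustic[(c, lang)])]
    return (vec, c)

dutch = [trial(c, "nl") for c in range(n_concepts) for _ in range(n_trials)]
english = [trial(c, "en") for c in range(n_concepts) for _ in range(n_trials)]

# Within-language discrimination: split-half within Dutch.
within_acc = centroid_classify(dutch[::2], dutch[1::2])
# Across-language generalization: train on Dutch, test on English labels.
across_acc = centroid_classify(dutch, english)
```

Across-language accuracy can only exceed chance if the patterns share a component tied to meaning rather than to the acoustics of either language, which is exactly the inference the study draws.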
McDuffie, Andrea; Banasik, Amy; Bullard, Lauren; Nelson, Sarah; Feigles, Robyn Tempero; Hagerman, Randi; Abbeduto, Leonard
A small randomized group design (N = 20) was used to examine a parent-implemented intervention designed to improve the spoken language skills of school-aged and adolescent boys with FXS, the leading cause of inherited intellectual disability. The intervention was implemented by speech-language pathologists who used distance video-teleconferencing to deliver the intervention. The intervention taught mothers to use a set of language facilitation strategies while interacting with their children in the context of shared story-telling. Treatment group mothers significantly improved their use of the targeted intervention strategies. Children in the treatment group increased the duration of engagement in the shared story-telling activity as well as use of utterances that maintained the topic of the story. Children also showed increases in lexical diversity, but not in grammatical complexity.
Schneider, Bruce A; Avivi-Reich, Meital; Daneman, Meredyth
Comprehending spoken discourse in noisy situations is likely to be more challenging to older adults than to younger adults due to potential declines in the auditory, cognitive, or linguistic processes supporting speech comprehension. These challenges might force older listeners to reorganize the ways in which they perceive and process speech, thereby altering the balance between the contributions of bottom-up versus top-down processes to speech comprehension. The authors review studies that investigated the effect of age on listeners' ability to follow and comprehend lectures (monologues), and two-talker conversations (dialogues), and the extent to which individual differences in lexical knowledge and reading comprehension skill relate to individual differences in speech comprehension. Comprehension was evaluated after each lecture or conversation by asking listeners to answer multiple-choice questions regarding its content. Once individual differences in speech recognition for words presented in babble were compensated for, age differences in speech comprehension were minimized if not eliminated. However, younger listeners benefited more from spatial separation than did older listeners. Vocabulary knowledge predicted the comprehension scores of both younger and older listeners when listening was difficult, but not when it was easy. However, the contribution of reading comprehension to listening comprehension appeared to be independent of listening difficulty in younger adults but not in older adults. The evidence suggests (1) that most of the difficulties experienced by older adults are due to age-related auditory declines, and (2) that these declines, along with listening difficulty, modulate the degree to which selective linguistic and cognitive abilities are engaged to support listening comprehension in difficult listening situations. When older listeners experience speech recognition difficulties, their attentional resources are more likely to be deployed to
The current research examined how Arabic diglossia affects verbal learning memory. Thirty native Arab college students were tested using an auditory verbal memory test that was adapted from the Rey Auditory Verbal Learning Test and developed in three versions: a pure spoken-language version (SL), a pure standard-language version (SA), and…
Brodie, Kara; Abel, Gary; Burt, Jenni
To investigate if language spoken at home mediates the relationship between ethnicity and doctor-patient communication for South Asian and White British patients. We conducted secondary analysis of patient experience survey data collected from 5870 patients across 25 English general practices. Mixed effect linear regression estimated the difference in composite general practitioner-patient communication scores between White British and South Asian patients, controlling for practice, patient demographics and patient language. There was strong evidence of an association between doctor-patient communication scores and ethnicity. South Asian patients reported scores averaging 3.0 percentage points lower (scale of 0-100) than White British patients (95% CI -4.9 to -1.1, p=0.002). This difference reduced to 1.4 points (95% CI -3.1 to 0.4) after accounting for speaking a non-English language at home; respondents who spoke a non-English language at home reported lower scores than English-speakers (adjusted difference 3.3 points, 95% CI -6.4 to -0.2). South Asian patients rate communication lower than White British patients within the same practices and with similar demographics. Our analysis further shows that this disparity is largely mediated by language.
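The mediation logic above (a raw group difference that shrinks once home language is accounted for) can be sketched by comparing the unadjusted difference with a difference computed within strata of the mediator. The numbers below are invented for illustration, and the stratified comparison is a simple direct-standardisation stand-in for the mixed-effect regression the study actually used.

```python
def mean(xs):
    return sum(xs) / len(xs)

def raw_difference(records):
    """Unadjusted difference in mean score between the two groups."""
    a = [s for g, m, s in records if g == "groupA"]
    b = [s for g, m, s in records if g == "groupB"]
    return mean(a) - mean(b)

def adjusted_difference(records):
    """Group difference after stratifying on the mediator: average the
    within-stratum differences, weighted by stratum size."""
    strata = sorted({m for _, m, _ in records})
    total, weight = 0.0, 0
    for stratum in strata:
        a = [s for g, m, s in records if m == stratum and g == "groupA"]
        b = [s for g, m, s in records if m == stratum and g == "groupB"]
        if a and b:
            k = len(a) + len(b)
            total += k * (mean(a) - mean(b))
            weight += k
    return total / weight

# Hypothetical records: (group, language_at_home, communication score).
# Group B members more often speak a non-English language at home, and
# that mediator carries most of the score gap between groups.
records = (
    [("groupA", "english", 85) for _ in range(40)] +
    [("groupA", "other", 80) for _ in range(10)] +
    [("groupB", "english", 84) for _ in range(15)] +
    [("groupB", "other", 79) for _ in range(35)]
)

raw = raw_difference(records)            # includes the mediated part
adjusted = adjusted_difference(records)  # within-language comparison only
```

The drop from the raw to the adjusted difference is the share of the gap attributable to the mediator, mirroring the 3.0-to-1.4-point reduction reported above.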
KRISHNAMURTHI, M.G.; MCCORMACK, WILLIAM
The twenty graded units in this text constitute an introduction to both informal and formal spoken Kannada. The first two units present the Kannada material in phonetic transcription only, with Kannada script gradually introduced from Unit III on. A typical lesson-unit includes (1) a dialog in phonetic transcription and English translation, (2)…
This study examined the development of spoken discourse among L2 learners of Japanese who received extensive practice on grammatical chunks. Participants in this study were 22 college students enrolled in an elementary Japanese course. They received instruction on a set of grammatical chunks in class through communicative drills and the…
Recent studies of eye movements in world-situated language comprehension have demonstrated that rapid processing of morphosyntactic information – e.g., grammatical gender and number marking – can produce anticipatory eye movements to referents in the visual scene. We investigated how the type of morphosyntactic information and the goals of language users in comprehension affected eye movements, focusing on the processing of grammatical number morphology in English-speaking adults. Participants’ eye movements were recorded as they listened to simple English declarative (“There are the lions.”) and interrogative (“Where are the lions?”) sentences. In Experiment 1, no differences were observed in speed to fixate target referents when grammatical number information was informative relative to when it was not. The same result was obtained in a speeded task (Experiment 2) and in a task using mixed sentence types (Experiment 3). We conclude that grammatical number processing in English and eye movements to potential referents are not tightly coordinated. These results suggest limits on the role of predictive eye movements in concurrent linguistic and scene processing. We discuss how these results can inform and constrain predictive approaches to language processing.
Constantinescu-Sharpe, Gabriella; Phillips, Rebecca L; Davis, Aleisha; Dornan, Dimity; Hogan, Anthony
Social inclusion is a common focus of listening and spoken language (LSL) early intervention for children with hearing loss. This exploratory study compared the social inclusion of young children with hearing loss educated using a listening and spoken language approach with population data. A framework for understanding the scope of social inclusion is presented in the Background. This framework guided the use of a shortened, modified version of the Longitudinal Study of Australian Children (LSAC) to measure two of the five facets of social inclusion ('education' and 'interacting with society and fulfilling social goals'). The survey was completed by parents of children with hearing loss aged 4-5 years who were educated using a LSL approach (n = 78; a 37% response rate). These responses were compared to those obtained for typical hearing children in the LSAC dataset (n = 3265). Analyses revealed that most children with hearing loss had comparable outcomes to those with typical hearing on the 'education' and 'interacting with society and fulfilling social roles' facets of social inclusion. These exploratory findings are positive and warrant further investigation across all five facets of the framework to identify which factors influence social inclusion.
Correia, João; Formisano, Elia; Valente, Giancarlo; Hausfeld, Lars; Jansma, Bernadette; Bonte, Milene
Bilinguals derive the same semantic concepts from equivalent, but acoustically different, words in their first and second languages. The neural mechanisms underlying the representation of language-independent concepts in the brain remain unclear. Here, we measured fMRI in human bilingual listeners and reveal that response patterns to individual spoken nouns in one language (e.g., "horse" in English) accurately predict the response patterns to equivalent nouns in the other language (e.g., "paard" in Dutch). Stimuli were four monosyllabic words in both languages, all from the category of "animal" nouns. For each word, pronunciations from three different speakers were included, allowing the investigation of speaker-independent representations of individual words. We used multivariate classifiers and a searchlight method to map the informative fMRI response patterns that enable decoding spoken words within languages (within-language discrimination) and across languages (across-language generalization). Response patterns discriminative of spoken words within language were distributed in multiple cortical regions, reflecting the complexity of the neural networks recruited during speech and language processing. Response patterns discriminative of spoken words across language were limited to localized clusters in the left anterior temporal lobe, the left angular gyrus and the posterior bank of the left postcentral gyrus, the right posterior superior temporal sulcus/superior temporal gyrus, the right medial anterior temporal lobe, the right anterior insula, and bilateral occipital cortex. These results corroborate the existence of "hub" regions organizing semantic-conceptual knowledge in abstract form at the fine-grained level of within semantic category discriminations.
Zou, Lijuan; Abutalebi, Jubin; Zinszer, Benjamin; Yan, Xin; Shu, Hua; Peng, Danling; Ding, Guosheng
The functional brain network of a bilingual's first language (L1) plays a crucial role in shaping that of his or her second language (L2). However, it is less clear how L2 acquisition changes the functional network of L1 processing in bilinguals. In this study, we demonstrate that in bimodal (Chinese spoken-sign) bilinguals, the functional network supporting L1 production (spoken language) has been reorganized to accommodate the network underlying L2 production (sign language). Using functional magnetic resonance imaging (fMRI) and a picture naming task, we find greater recruitment of the right supramarginal gyrus (RSMG), the right superior temporal gyrus (RSTG), and the right superior occipital gyrus (RSOG) for bilingual speakers versus monolingual speakers during L1 production. In addition, our second experiment reveals that these regions reflect either automatic activation of L2 (RSOG) or extra cognitive coordination (RSMG and RSTG) between both languages during L1 production. The functional connectivity between these regions, as well as between other regions that are L1- or L2-specific, is enhanced during L1 production in bimodal bilinguals as compared to their monolingual peers. These findings suggest that L1 production in bimodal bilinguals involves an interaction between L1 and L2, supporting the claim that learning a second language does, in fact, change the functional brain network of the first language.
In this paper the use and quality of the evaluative language produced by a bilingual child in a story-telling situation is analysed. The subject, an 11-year-old Finnish boy, Jimmy, is bilingual in Finnish Sign Language (FinSL) and spoken Finnish. He was born deaf but received a cochlear implant at the age of five. The data consist of a spoken and a signed version of “The Frog Story”. The analysis shows that evaluative devices and expressions differ in the spoken and the signed stories told by the child. In his Finnish story he uses mostly lexical devices – comments on a character and the character’s actions, as well as quoted speech occasionally combined with prosodic features. In his FinSL story he uses both lexical and paralinguistic devices in a balanced way.
Considerable progress has been made in recent years in the development of dialogue systems that support robust and efficient human-machine interaction using spoken language. Spoken dialogue technology allows various interactive applications to be built and used for practical purposes, and research focuses on issues that aim to increase the system's communicative competence by including aspects of error correction, cooperation, multimodality, and adaptation in context. This book gives a comprehensive view of state-of-the-art techniques that are used to build spoken dialogue systems. It provides
Geytenbeek, J.J.M.; Vermeulen, R.J.; Becher, J.G.; Oostrom, K.J.
Aim: To assess spoken language comprehension in non-speaking children with severe cerebral palsy (CP) and to explore possible associations with motor type and disability. Method: Eighty-seven non-speaking children (44 males, 43 females, mean age 6y 8mo, SD 2y 1mo) with spastic (54%) or dyskinetic
De Angelis, Gessica
The present study adopts a multilingual approach to analysing the standardized test results of primary school immigrant children living in the bi-/multilingual context of South Tyrol, Italy. The standardized test results are from the Invalsi test administered across Italy in 2009/2010. In South Tyrol, several languages are spoken on a daily basis…
Mismatch negativity (MMN), a primary response to an acoustic change and an index of sensory memory, was used to investigate the discrimination between familiar and unfamiliar consonant-vowel (CV) speech contrasts. The MMN was elicited by rare familiar words presented among repetitive unfamiliar words. Phonetic and phonological contrasts were identical in all conditions. The MMN elicited by the familiar-word deviant was larger than that elicited by the unfamiliar-word deviant. The presence of a syllable contrast did significantly alter the word-elicited MMN in amplitude and scalp voltage field distribution. Thus, our results indicate the existence of a word-related MMN enhancement largely independent of the word status of the standard stimulus. This enhancement may reflect the presence of a long-term memory trace for familiar spoken words in tonal languages.
Social Security Administration — This data set provides quarterly volumes for language preferences at the national level of individuals filing claims for Retirement and Survivor benefits from fiscal...
Social Security Administration — This data set provides annual volumes for language preferences at the national level of individuals filing claims for Retirement and Survivor benefits from federal...
Tong, Vivien; Raynor, David K; Aslani, Parisa
To explore Australian and UK consumers' receipt and use of spoken and written medicine information and examine the role of leaflets for consumers of over-the-counter (OTC) medicines. Semistructured interviews were conducted with 37 Australian and 39 UK consumers to explore information received with their most recent OTC medicine purchase, and how information was used at different times post-purchase. Interviews were audio-recorded, transcribed verbatim and thematically analysed. Similarities were evident between the key themes identified from Australian and UK consumers' experiences. Consumers infrequently sought spoken information and reported that pharmacy staff provided minimal spoken information for OTC medicines. Leaflets were not always received or wanted and had a less salient role as an information source for repeat OTC purchases. Consumers tended not to read OTC labels or leaflets. Product familiarity led to consumers tending not to seek information on labels or leaflets. When labels were consulted, directions for use were commonly read. However, OTC medicine information in general was infrequently revisited. As familiarity is not an infallible proxy for safe and effective medication use, strategies to promote the value and use of these OTC medicine information sources are important and needed. Minimal spoken information provision coupled with limited written information use may adversely impact medication safety in self-management. © 2017 Royal Pharmaceutical Society.
Coplan, Robert J.; Weeks, Murray
The goal of this study was to examine the moderating role of pragmatic language in the relations between shyness and indices of socio-emotional adjustment in an unselected sample of early elementary school children. In particular, we sought to explore whether pragmatic language played a protective role for shy children. Participants were n = 167…
Smolík, Filip; Stepankova, Hana; Vyhnálek, Martin; Nikolai, Tomáš; Horáková, Karolína; Matejka, Štepán
Purpose: Propositional density (PD) is a measure of content richness in language production that declines in normal aging and more profoundly in dementia. The present study aimed to develop a PD scoring system for Czech and use it to compare PD in the language productions of older people with amnestic mild cognitive impairment (aMCI) and control…
This paper describes a study comparing chatroom and face-to-face oral interaction for the purposes of language learning in a tertiary classroom in the United Arab Emirates. It uses transcripts analysed for Language Related Episodes (collaborative dialogues thought to be externally observable examples of noticing in action). The analysis is…
Kwon, Oh-Woog; Kim, Young-Kil; Lee, Yunkeun
This paper introduces a Dialog-Based Computer Assisted second-Language Learning (DB-CALL) system using task-oriented dialogue processing technology. The system promotes dialogue with a second-language learner for a specific task, such as purchasing tour tickets, ordering food, passing through immigration, etc. The dialog system plays the role of a…
Liebenthal, Einat; Silbersweig, David A; Stern, Emily
Rapid assessment of emotions is important for detecting and prioritizing salient input. Emotions are conveyed in spoken words via verbal and non-verbal channels that are mutually informative and unfold in parallel over time, but the neural dynamics and interactions of these processes are not well understood. In this paper, we review the literature on emotion perception in faces, written words, and voices, as a basis for understanding the functional organization of emotion perception in spoken words. The characteristics of visual and auditory routes to the amygdala, a subcortical center for emotion perception, are compared across these stimulus classes in terms of neural dynamics, hemispheric lateralization, and functionality. Converging results from neuroimaging, electrophysiological, and lesion studies suggest the existence of an afferent route to the amygdala and primary visual cortex for fast and subliminal processing of coarse emotional face cues. We suggest that a fast route to the amygdala may also function for brief non-verbal vocalizations (e.g., laugh, cry), in which emotional category is conveyed effectively by voice tone and intensity. However, emotional prosody, which evolves on longer time scales and is conveyed by fine-grained spectral cues, appears to be processed via a slower, indirect cortical route. For verbal emotional content, the bulk of current evidence, indicating predominant left lateralization of the amygdala response and timing of emotional effects attributable to speeded lexical access, is more consistent with an indirect cortical route to the amygdala. Top-down linguistic modulation may play an important role in the prioritized perception of emotions in words. Understanding the neural dynamics and interactions of emotion and language perception is important for selecting potent stimuli and devising effective training and/or treatment approaches for the alleviation of emotional dysfunction across a range of neuropsychiatric states.
Allen, Mark D; Owens, Tyler E
Allen [Allen, M. D. (2005). The preservation of verb subcategory knowledge in a spoken language comprehension deficit. Brain and Language, 95, 255-264] presents evidence from a single patient, WBN, to motivate a theory of lexical processing and representation in which syntactic information may be encoded and retrieved independently of semantic information. In his critique, Kemmerer argues that because Allen depended entirely on preposition-based verb subcategory violations to test WBN's knowledge of correct argument structure, his results, at best, address a "strawman" theory. This argument rests on the assumption that preposition subcategory options are superficial syntactic phenomena which are not represented by argument structure proper. We demonstrate that preposition subcategory is in fact treated as semantically determined argument structure in the theories that Allen evaluated, and thus far from irrelevant. In further discussion of grammatically relevant versus irrelevant semantic features, Kemmerer offers a review of his own studies. However, due to an important design shortcoming in these experiments, we remain unconvinced. Reemphasizing the fact that Allen (2005) never claimed to rule out all semantic contributions to syntax, we propose an improvement in Kemmerer's approach that might provide more satisfactory evidence on the distinction between the kinds of relevant versus irrelevant features his studies have addressed.
Harris, David; Bennet, Lisa; Bant, Sharyn
Objectives: Although it has been established that bilateral cochlear implants (CIs) offer additional speech perception and localization benefits to many children with severe to profound hearing loss, whether these improved perceptual abilities facilitate significantly better language development has not yet been clearly established. The aims of this study were to compare the language abilities of children with unilateral and bilateral CIs, to quantify the rate of any improvement in language attributable to bilateral CIs, and to document other predictors of language development in children with CIs. Design: The receptive vocabulary and language development of 91 children were assessed when they were aged either 5 or 8 years old by using the Peabody Picture Vocabulary Test (fourth edition), and either the Preschool Language Scales (fourth edition) or the Clinical Evaluation of Language Fundamentals (fourth edition), respectively. Cognitive ability, parent involvement in children’s intervention or education programs, and family reading habits were also evaluated. Language outcomes were examined by using linear regression analyses. The influence of elements of parenting style, child characteristics, and family background as predictors of outcomes were examined. Results: Children using bilateral CIs achieved significantly better vocabulary outcomes and significantly higher scores on the Core and Expressive Language subscales of the Clinical Evaluation of Language Fundamentals (fourth edition) than did comparable children with unilateral CIs. Scores on the Preschool Language Scales (fourth edition) did not differ significantly between children with unilateral and bilateral CIs. Bilateral CI use was found to predict significantly faster rates of vocabulary and language development than unilateral CI use; the magnitude of this effect was moderated by child age at activation of the bilateral CI. In terms of parenting style, high levels of parental involvement, low amounts of
Pimperton, Hannah; Kreppner, Jana; Mahon, Merle; Stevenson, Jim; Terlektsi, Emmanouela; Worsfold, Sarah; Yuen, Ho Ming; Kennedy, Colin R
This study aimed to examine whether (a) exposure to universal newborn hearing screening (UNHS) and (b) early confirmation of hearing loss were associated with benefits to expressive and receptive language outcomes in the teenage years for a cohort of spoken language users. It also aimed to determine whether either of these two variables was associated with benefits to relative language gain from middle childhood to adolescence within this cohort. The participants were drawn from a prospective cohort study of a population sample of children with bilateral permanent childhood hearing loss, who varied in their exposure to UNHS and who had previously had their language skills assessed at 6-10 years. Sixty deaf or hard of hearing teenagers who were spoken language users and a comparison group of 38 teenagers with normal hearing completed standardized measures of their receptive and expressive language ability at 13-19 years. Teenagers exposed to UNHS did not show significantly better expressive (adjusted mean difference, 0.40; 95% confidence interval [CI], -0.26 to 1.05; d = 0.32) or receptive (adjusted mean difference, 0.68; 95% CI, -0.56 to 1.93; d = 0.28) language skills than those who were not. Those who had their hearing loss confirmed by 9 months of age did not show significantly better expressive (adjusted mean difference, 0.43; 95% CI, -0.20 to 1.05; d = 0.35) or receptive (adjusted mean difference, 0.95; 95% CI, -0.22 to 2.11; d = 0.42) language skills than those who had it confirmed later. In all cases, effect sizes were of small size and in favor of those exposed to UNHS or confirmed by 9 months. Subgroup analysis indicated larger beneficial effects of early confirmation for those deaf or hard of hearing teenagers without cochlear implants (N = 48; 80% of the sample), and these benefits were significant in the case of receptive language outcomes (adjusted mean difference, 1.55; 95% CI, 0.38 to 2.71; d = 0.78). Exposure to UNHS did not account for significant
Social Security Administration — This data set provides annual volumes for language preferences at the national level of individuals filing claims for ESRD Medicare benefits for federal fiscal years...
Social Security Administration — This data set provides quarterly volumes for language preferences at the national level of individuals filing claims for SSI Aged benefits for fiscal years 2014 -...
Social Security Administration — This data set provides quarterly volumes for language preferences at the national level of individuals filing claims for ESRD Medicare benefits for fiscal years 2014...
Social Security Administration — This data set provides annual volumes for language preferences at the national level of individuals filing claims for ESRD Medicare benefits for federal fiscal year...
Social Security Administration — This data set provides annual volumes for language preferences at the national level of individuals filing claims for ESRD Medicare benefits from federal fiscal year...
Vaynam, Margaret Judith
The following thesis investigates Russian mothers’ experience of motivating Russian language learning and bilingualism in their Russian-Norwegian children in Norway. The purpose of this study is to look at the motivating factors that influence parents to teach their children their own language, as well as support the bilingual situation. Although the study focuses on the minority language, as it is the language spoken by the mothers, the bilingual situation is used to further analysis on moti...
Sindorela Doli Kryeziu; Gentiana Muhaxhiri
In this paper we have tried to clarify the problems faced by speakers of the "Gheg" dialect in Gjakova, who have shown more or less difficulty in acquiring the standard. The standard language is part of the people's language, but raised to a norm according to scientific criteria. From this observation it is clearly understandable that the standard variety and the dialectal variant are inseparable and, as such, they represent a macro-linguistic unity. As part of this macro-linguistic u...
Suskind, Dana L; Graf, Eileen; Leffel, Kristin R; Hernandez, Marc W; Suskind, Elizabeth; Webber, Robert; Tannenbaum, Sally; Nevins, Mary Ellen
To investigate the impact of a spoken language intervention curriculum aimed at improving the language environments that caregivers of low socioeconomic status (SES) provide for their deaf/hard-of-hearing (D/HH) children with cochlear implants (CIs) and hearing aids (HAs), in order to support the children's spoken language development. Design: Quasi-experimental. Setting: Tertiary. Participants: Thirty-two caregiver-child dyads of low SES (as defined by caregiver education ≤ MA/MS and the income proxies Medicaid or WIC/LINK) and children aged … who received a curriculum designed to improve D/HH children's early language environments. Outcome measures: Changes in caregiver knowledge of child language development (questionnaire scores) and language behavior (word types, word tokens, utterances, mean length of utterance [MLU], LENA Adult Word Count [AWC], Conversational Turn Count [CTC]). Results: Significant increases in caregiver questionnaire scores as well as utterances, word types, word tokens, and MLU in the treatment but not the control group. No significant changes in LENA outcomes. Conclusions: Results partially support the notion that caregiver-directed language enrichment interventions can change the home language environments of D/HH children from low-SES backgrounds. Further longitudinal studies are necessary.
Despite the abundance of electronic corpora now available to researchers, corpora of natural speech are still relatively rare and relatively costly. This paper suggests reasons why spoken corpora are needed, despite the formidable problems of construction. The multiple purposes of such corpora and the involvement of very different kinds of language communities in such projects mean that there is no one single blueprint for the design, markup, and distribution of spoken corpora. A number of different spoken corpora are reviewed to illustrate a range of possibilities for the construction of spoken corpora.
Lexical sound symbolism in language appears to exploit the feature associations embedded in cross-sensory correspondences. For example, words incorporating relatively high acoustic frequencies (i.e., front/close rather than back/open vowels) are deemed more appropriate as names for concepts associated with brightness, lightness in weight,…
Koyalan, Aylin; Mumford, Simon
The process of writing journal articles is increasingly being seen as a collaborative process, especially where the authors are English as an Additional Language (EAL) academics. This study examines the changes made in terms of register to EAL writers' journal articles by a native-speaker writing centre advisor at a private university in Turkey.…
Blumenfeld, Henrike K.; Marian, Viorica
Accounts of bilingual cognitive advantages suggest an associative link between cross-linguistic competition and inhibitory control. We investigate this link by examining English-Spanish bilinguals’ parallel language activation during auditory word recognition and nonlinguistic Stroop performance. Thirty-one English-Spanish bilinguals and 30 English monolinguals participated in an eye-tracking study. Participants heard words in English (e.g., comb) and identified corresponding pictures from a display that included pictures of a Spanish competitor (e.g., conejo, English rabbit). Bilinguals with higher Spanish proficiency showed more parallel language activation and smaller Stroop effects than bilinguals with lower Spanish proficiency. Across all bilinguals, stronger parallel language activation between 300 and 500 ms after word onset was associated with smaller Stroop effects; between 633 and 767 ms, reduced parallel language activation was associated with smaller Stroop effects. Results suggest that bilinguals who perform well on the Stroop task show increased cross-linguistic competitor activation during early stages of word recognition and decreased competitor activation during later stages of word recognition. Findings support the hypothesis that cross-linguistic competition impacts domain-general inhibition. PMID:24244842
Klein, Evelyn R.; Armstrong, Sharon Lee; Shipon-Blum, Elisa
Children with selective mutism (SM) display a failure to speak in select situations despite speaking when comfortable. The purpose of this study was to obtain valid assessments of receptive and expressive language in 33 children (ages 5 to 12) with SM. Because some children with SM will speak to parents but not a professional, another purpose was…
Toledo, Paloma; Eosakul, Stanley T; Grobman, William A; Feinglass, Joe; Hasnain-Wynia, Romana
Hispanic women are less likely than non-Hispanic Caucasian women to use neuraxial labor analgesia. It is unknown whether there is a disparity in anticipated or actual use of neuraxial labor analgesia among Hispanic women based on primary language (English versus Spanish). In this 3-year retrospective, single-institution, cross-sectional study, we extracted electronic medical record data on nulliparous Hispanic women with vaginal deliveries who were insured by Medicaid. On admission, patients self-identified their primary language and anticipated analgesic use for labor. Extracted data included age, marital status, labor type, delivery provider (obstetrician or midwife), and anticipated and actual analgesic use. Household income was estimated from census data geocoded by zip code. Multivariable logistic regression models were estimated for anticipated and actual neuraxial analgesia use. Among 932 Hispanic women, 182 self-identified as primary Spanish speakers. Spanish-speaking Hispanic women were less likely to anticipate and use neuraxial anesthesia than English-speaking women. After controlling for confounders, there was an association between primary language and anticipated neuraxial analgesia use (adjusted relative risk: Spanish- versus English-speaking women, 0.70; 97.5% confidence interval, 0.53-0.92). Similarly, there was an association between language and neuraxial analgesia use (adjusted relative risk: Spanish- versus English-speaking women 0.88; 97.5% confidence interval, 0.78-0.99). The use of a midwife compared with an obstetrician also decreased the likelihood of both anticipating and using neuraxial analgesia. A language-based disparity was found in neuraxial labor analgesia use. It is possible that there are communication barriers in knowledge or understanding of analgesic options. Further research is necessary to determine the cause of this association.
Werfel, Krystal L
The purpose of this study was to compare change in emergent literacy skills of preschool children with and without hearing loss over a 6-month period. Participants included 19 children with hearing loss and 14 children with normal hearing. Children with hearing loss used amplification and spoken language. Participants completed measures of oral language, phonological processing, and print knowledge twice at a 6-month interval. A series of repeated-measures analyses of variance were used to compare change across groups. Main effects of time were observed for all variables except phonological recoding. Main effects of group were observed for vocabulary, morphosyntax, phonological memory, and concepts of print. Interaction effects were observed for phonological awareness and concepts of print. Children with hearing loss performed more poorly than children with normal hearing on measures of oral language, phonological memory, and conceptual print knowledge. Two interaction effects were present. For phonological awareness and concepts of print, children with hearing loss demonstrated less positive change than children with normal hearing. Although children with hearing loss generally demonstrated positive growth in emergent literacy skills, their initial performance was lower than that of children with normal hearing, and rates of change were not sufficient to catch up to their peers over time.
The commercial successes of spoken dialog systems in the developed world provide encouragement for their use in the developing world, where speech could play a role in the dissemination of relevant information in local languages. We investigate...
Westerveld, Marleen F; Gillon, Gail T
This investigation explored the effects of oral narrative elicitation context on children's spoken language performance. Oral narratives were produced by a group of 11 children with reading disability (aged between 7;11 and 9;3) and an age-matched control group of 11 children with typical reading skills in three different contexts: story retelling, story generation, and personal narratives. In the story retelling condition, the children listened to a story on tape while looking at the pictures in a book, before being asked to retell the story without the pictures. In the story generation context, the children were shown a picture containing a scene and were asked to make up their own story. Personal narratives were elicited with the help of photos and short narrative prompts. The transcripts were analysed at microstructure level on measures of verbal productivity, semantic diversity, and morphosyntax. Consistent with previous research, the results revealed no significant interactions between group and context, indicating that the two groups of children responded to the type of elicitation context in a similar way. There was a significant group effect, however, with the typical readers showing better performance overall on measures of morphosyntax and semantic diversity. There was also a significant effect of elicitation context with both groups of children producing the longest, linguistically most dense language samples in the story retelling context. Finally, the most significant differences in group performance were observed in the story retelling condition, with the typical readers outperforming the poor readers on measures of verbal productivity, number of different words, and percent complex sentences. The results from this study confirm that oral narrative samples can distinguish between good and poor readers and that the story retelling condition may be a particularly useful context for identifying strengths and weaknesses in oral narrative performance.
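The microstructure measures named in the abstract above (verbal productivity, semantic diversity via number of different words, and mean length of utterance) are simple counts over a transcript. A minimal sketch, with made-up utterances, MLU computed in words rather than morphemes, and word types standing in for the study's exact semantic-diversity measure:

```python
def microstructure(utterances):
    """Basic microstructure counts over a list of utterance strings."""
    words = [w.lower() for u in utterances for w in u.split()]
    return {
        "total_words": len(words),                   # verbal productivity
        "different_words": len(set(words)),          # semantic diversity (NDW)
        "mlu_words": len(words) / len(utterances),   # mean length of utterance
    }

# Hypothetical three-utterance narrative sample:
sample = ["the dog ran away", "he was scared", "then the dog came back home"]
m = microstructure(sample)
print(m["total_words"], m["different_words"], round(m["mlu_words"], 2))  # 13 11 4.33
```

Real language-sample analysis additionally segments utterances, lemmatizes, and counts morphemes; this sketch only shows the arithmetic behind the counts.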
Onnis, Luca; Thiessen, Erik
What are the effects of experience on subsequent learning? We explored the effects of language-specific word order knowledge on the acquisition of sequential conditional information. Korean and English adults were engaged in a sequence learning task involving three different sets of stimuli: auditory linguistic (nonsense syllables), visual non-linguistic (nonsense shapes), and auditory non-linguistic (pure tones). The forward and backward probabilities between adjacent elements generated two equally probable and orthogonal perceptual parses of the elements, such that any significant preference at test must be due to either general cognitive biases, or prior language-induced biases. We found that language modulated parsing preferences with the linguistic stimuli only. Intriguingly, these preferences are congruent with the dominant word order patterns of each language, as corroborated by corpus analyses, and are driven by probabilistic preferences. Furthermore, although the Korean individuals had received extensive formal explicit training in English and lived in an English-speaking environment, they exhibited statistical learning biases congruent with their native language. Our findings suggest that mechanisms of statistical sequential learning are implicated in language across the lifespan, and experience with language may affect cognitive processes and later learning. PMID:23200510
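The forward and backward probabilities between adjacent elements described in the abstract above are ordinary transitional probabilities: for an adjacent pair AB, forward P(B|A) and backward P(A|B). A minimal sketch over a made-up syllable stream (not the study's stimuli):

```python
from collections import Counter

def transitional_probs(sequence):
    """Forward P(B|A) and backward P(A|B) for each adjacent pair (A, B)."""
    pairs = list(zip(sequence, sequence[1:]))
    pair_n = Counter(pairs)
    first_n = Counter(a for a, _ in pairs)   # how often A opens a pair
    second_n = Counter(b for _, b in pairs)  # how often B closes a pair
    forward = {(a, b): n / first_n[a] for (a, b), n in pair_n.items()}
    backward = {(a, b): n / second_n[b] for (a, b), n in pair_n.items()}
    return forward, backward

# Hypothetical stream in which "ku" always follows "ba":
stream = ["ba", "ku", "ti", "ba", "ku", "ro", "ba", "ku", "ti"]
fwd, bwd = transitional_probs(stream)
print(fwd[("ba", "ku")])  # 1.0: "ku" always follows "ba"
print(bwd[("ba", "ku")])  # 1.0: "ba" always precedes "ku"
```

When the forward and backward statistics are set up to favor different parses, as in the study, a listener's preference between the two reveals which statistic (or prior language bias) dominates.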
Chandrasekaran, Bharath; Kraus, Nina; Wong, Patrick C M
A challenge to learning words of a foreign language is encoding nonnative phonemes, a process typically attributed to cortical circuitry. Using multimodal imaging methods [functional magnetic resonance imaging-adaptation (fMRI-A) and auditory brain stem responses (ABR)], we examined the extent to which pretraining pitch encoding in the inferior colliculus (IC), a primary midbrain structure, related to individual variability in learning to successfully use nonnative pitch patterns to distinguish words in American English-speaking adults. fMRI-A indexed the efficiency of pitch representation localized to the IC, whereas ABR quantified midbrain pitch-related activity with millisecond precision. In line with neural "sharpening" models, we found that efficient IC pitch pattern representation (indexed by fMRI) related to superior neural representation of pitch patterns (indexed by ABR), and consequently more successful word learning following sound-to-meaning training. Our results establish a critical role for the IC in speech-sound representation, consistent with the established role for the IC in the representation of communication signals in other animal models.
Pitts, Casey E; Onishi, Kristine H; Vouloumanos, Athena
Adults recognize that people can understand more than one language. However, it is unclear whether infants assume other people understand one or multiple languages. We examined whether monolingual and bilingual 20-month-olds expect an unfamiliar person to understand one or more than one language. Two speakers told a listener the location of a hidden object using either the same or two different languages. When different languages were spoken, monolinguals looked longer when the listener searched correctly, whereas bilinguals did not; when the same language was spoken, both groups looked longer for incorrect searches. Infants rely on their prior language experience when evaluating the language abilities of a novel individual. Monolingual infants assume others can understand only one language, although not necessarily the infants' own; bilinguals do not. Infants' assumptions about which community of conventions people belong to may allow them to recognize effective communicative partners and thus opportunities to acquire language, knowledge, and culture. Copyright © 2014 Elsevier B.V. All rights reserved.
Behrns, Ingrid; Wengelin, Asa; Broberg, Malin; Hartelius, Lena
The aim of the present study was to explore how a personal narrative told by a group of eight persons with aphasia differed between written and spoken language, and to compare this with findings from 10 participants in a reference group. The stories were analysed through holistic assessments made by 60 participants without experience of aphasia…
Colin, C; Zuinen, T; Bayard, C; Leybaert, J
Sign languages (SL), like oral languages (OL), organize elementary, meaningless units into meaningful semantic units. Our aim was to compare, at behavioral and neurophysiological levels, the processing of the location parameter in French Belgian SL to that of the rhyme in oral French. Ten hearing and 10 profoundly deaf adults performed a rhyme judgment task in OL and a similarity judgment on location in SL. Stimuli were pairs of pictures. As regards OL, deaf subjects' performances, although above chance level, were significantly lower than those of hearing subjects, suggesting that a metaphonological analysis is possible for deaf people but rests on phonological representations that are less precise than in hearing people. As regards SL, deaf subjects' scores indicated that a metaphonological judgment may be performed on location. The contingent negative variation (CNV) evoked by the first picture of a pair was similar in hearing subjects in OL and in deaf subjects in OL and SL. However, an N400 evoked by the second picture of the non-rhyming pairs was evidenced only in hearing subjects in OL. The absence of N400 in deaf subjects may be interpreted as the failure to associate two words according to their rhyme in OL or to their location in SL. Although deaf participants can perform metaphonological judgments in OL, they differ from hearing participants both behaviorally and in ERP. Judgment of location in SL is possible for deaf signers, but, contrary to rhyme judgment in hearing participants, does not elicit any N400. Copyright © 2013 Elsevier Masson SAS. All rights reserved.
Ma, Weiyi; Zhou, Peng; Singh, Leher; Gao, Liqun
The majority of the world's languages rely on both segmental (vowels, consonants) and suprasegmental (lexical tones) information to contrast the meanings of individual words. However, research on early language development has mostly focused on the acquisition of vowel-consonant languages. Developmental research comparing sensitivity to segmental and suprasegmental features in young tone learners is extremely rare. This study examined 2- and 3-year-old monolingual tone learners' sensitivity to vowels and tones. Experiment 1a tested the influence of vowel and tone variation on novel word learning. Vowel and tone variation hindered word recognition efficiency in both age groups. However, tone variation hindered word recognition accuracy only in 2-year-olds, while 3-year-olds were insensitive to tone variation. Experiment 1b demonstrated that 3-year-olds could use tones to learn new words when additional support was provided, and additionally, that Tone 3 words were exceptionally difficult to learn. Experiment 2 confirmed a similar pattern of results when children were presented with familiar words. This study is the first to show that despite the importance of tones in tone languages, vowels maintain primacy over tones in young children's word recognition and that tone sensitivity in word learning and recognition changes between 2 and 3 years of age. The findings suggest that early lexical processes are more tightly constrained by variation in vowels than by tones. Copyright © 2016 Elsevier B.V. All rights reserved.
The service-learning immersion experience in Central America benefitted preservice teachers and resulted in a collaborative project on the analysis of languages spoken at the primary to middle school level. This study researches, collects data from, and analyzes results for one school system in Nicaragua in hopes of acquiring…
Kwon, Youan; Choi, Sungmook; Lee, Yoonhyoung
This study examines whether orthographic information is used during prelexical processes in spoken word recognition by investigating ERPs during spoken word processing for Korean words. Differential effects due to orthographic syllable neighborhood size and sound-to-spelling consistency on the P200 and N320 were evaluated by recording ERPs from 42 participants during a lexical decision task. The results indicate that the P200 was smaller for words with large orthographic syllable neighborhoods than for words with small ones. In addition, a word with a large orthographic syllable neighborhood elicited a smaller N320 effect than a word with a small orthographic syllable neighborhood only when the word had inconsistent sound-to-spelling mapping. The results support the assumption that orthographic information is used early, during the prelexical stage of spoken word recognition. © 2015 Society for Psychophysiological Research.
This paper presents a novel methodology for spoken document information retrieval over spontaneous speech corpora, converting each retrieved document into the corresponding language text. The proposed work involves three major areas: spoken keyword detection, spoken document retrieval and automatic speech recognition. The keyword spotting exploits the distribution-capturing capability of the Auto Associative Neural Network (AANN) for spoken keyword detection. It involves sliding a frame-based keyword template along the audio documents and using a confidence score, acquired from the normalized squared error of the AANN, to search for a match. This work contributes a new spoken keyword spotting algorithm. Based on the matches, the spoken documents are retrieved and clustered together. In the speech recognition step, the retrieved documents are converted into the corresponding language text using the AANN classifier. The experiments are conducted using a Dravidian language database, and the results suggest that the proposed method is promising for retrieving the documents relevant to a spoken query and transforming them into the corresponding language text.
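The sliding-template matching described in this abstract can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: `reconstruct` stands in for a trained AANN (here, any callable mapping a feature window to its reconstruction), the confidence is taken as `exp(-normalized squared error)` so that it falls in (0, 1], and the detection threshold is arbitrary.

```python
import numpy as np

def confidence(frames, reconstruct):
    # Normalized squared reconstruction error mapped to a (0, 1] confidence:
    # a well-reconstructed window (low error) yields a score near 1.
    err = np.sum((frames - reconstruct(frames)) ** 2) / frames.size
    return float(np.exp(-err))

def spot_keyword(features, template_len, reconstruct, threshold=0.5):
    """Slide a keyword-sized window over a (frames x coeffs) feature
    sequence and return start indices whose confidence clears threshold."""
    hits = []
    for start in range(len(features) - template_len + 1):
        window = features[start:start + template_len]
        if confidence(window, reconstruct) >= threshold:
            hits.append(start)
    return hits
```

In the paper's setting the reconstruction network would be trained on frames of the keyword, so high confidence marks likely keyword occurrences; documents containing such hits would then be retrieved and clustered.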
Schreibman, Laura; Stahmer, Aubyn C
Presently there is no consensus on the specific behavioral treatment of choice for targeting language in young nonverbal children with autism. This randomized clinical trial compared the effectiveness of a verbally based intervention, Pivotal Response Training (PRT), with a pictorially based behavioral intervention, the Picture Exchange Communication System (PECS), on the acquisition of spoken language by young (2-4 years), nonverbal or minimally verbal (≤9 words) children with autism. Thirty-nine children were randomly assigned to either the PRT or the PECS condition. Participants received on average 247 h of intervention across 23 weeks. Dependent measures included overall communication, expressive vocabulary, pictorial communication and parent satisfaction. Children in both intervention groups demonstrated increases in spoken language skills, with no significant difference between the two conditions. Seventy-eight percent of all children exited the program with more than 10 functional words. Parents were very satisfied with both programs but indicated PECS was more difficult to implement.
Newman, Aaron J; Supalla, Ted; Fernandez, Nina; Newport, Elissa L; Bavelier, Daphne
Sign languages used by deaf communities around the world possess the same structural and organizational properties as spoken languages: in particular, they are richly expressive and also tightly grammatically constrained. They therefore offer the opportunity to investigate the extent to which the neural organization for language is modality independent, as well as to identify ways in which modality influences this organization. The fact that sign languages share the visual-manual modality with a nonlinguistic symbolic communicative system, gesture, further allows us to investigate where the boundaries lie between language and symbolic communication more generally. In the present study, we had three goals: to investigate the neural processing of linguistic structure in American Sign Language (using verbs of motion classifier constructions, which may lie at the boundary between language and gesture); to determine whether we could dissociate the brain systems involved in deriving meaning from symbolic communication (including both language and gesture) from those specifically engaged by linguistically structured content (sign language); and to assess whether sign language experience influences the neural systems used for understanding nonlinguistic gesture. The results demonstrated that even sign language constructions that appear on the surface to be similar to gesture are processed within the left-lateralized frontal-temporal network used for spoken languages, supporting claims that these constructions are linguistically structured. Moreover, although nonsigners engage regions involved in human action perception to process communicative, symbolic gestures, signers instead engage parts of the language-processing network, demonstrating an influence of experience on the perception of nonlinguistic stimuli.
Speech perception runs smoothly and automatically when there is silence in the background, but when the speech signal is degraded by background noise or by reverberation, effortful cognitive processing is needed to compensate for the signal distortion. Previous research has typically investigated the effects of signal-to-noise ratio (SNR) and reverberation time in isolation, whilst few studies have looked at their interaction. In this study, we probed how reverberation time and SNR influence recall of words presented in participants' first (L1) and second language (L2). A total of 72 children (10 years old) participated in this study. The to-be-recalled wordlists were played back with two different reverberation times (0.3 and 1.2 sec) crossed with two different SNRs (+3 dBA and +12 dBA). Children recalled fewer words when the spoken words were presented in L2 than when they were presented in L1. Words presented with a high SNR (+12 dBA) were recalled better than words presented with a low SNR (+3 dBA). Reverberation time interacted with SNR to the effect that at +12 dB the shorter reverberation time improved recall, but at +3 dB it impaired recall. The effects of the physical sound variables (SNR and reverberation time) did not interact with language.
Nakai, Satsuki; Lindsay, Shane; Ota, Mitsuhiko
When both members of a phonemic contrast in L2 (second language) are perceptually mapped to a single phoneme in one's L1 (first language), L2 words containing a member of that contrast can spuriously activate L2 words in spoken-word recognition. For example, upon hearing cattle, Dutch speakers of English are reported to experience activation…
Recent studies suggest that deaf children perform more poorly on working memory tasks compared to hearing children, but do not say whether this poorer performance arises directly from deafness itself or from deaf children's reduced language exposure. The issue remains unresolved because findings come from (1) tasks that are verbal as opposed to non-verbal, and (2) deaf children who use spoken communication and therefore may have experienced impoverished input and delayed language acquisition. This is in contrast to deaf children who have been exposed to a sign language since birth from Deaf parents (and who therefore have native language-learning opportunities). A more direct test of how the type and quality of language exposure impacts working memory is to use measures of non-verbal working memory (NVWM) and to compare hearing children with two groups of deaf signing children: those who have had native exposure to a sign language, and those who have experienced delayed acquisition compared to their native-signing peers. In this study we investigated the relationship between NVWM and language in three groups aged 6-11 years: hearing children (n=27), deaf native users of British Sign Language (BSL; n=7), and deaf non-native signers (n=19). We administered a battery of non-verbal reasoning, NVWM, and language tasks. We examined whether the groups differed on NVWM scores, and whether language tasks predicted scores on NVWM tasks. For the two NVWM tasks, the non-native signers performed less accurately than the native signer and hearing groups (who did not differ from one another). Multiple regression analysis revealed that the vocabulary measure predicted scores on NVWM tasks. Our results suggest that whatever the language modality, spoken or signed, rich language experience from birth, and the good language skills that result from this early age of acquisition, play a critical role in the development of NVWM and in performance on NVWM tasks.
Reading plays a key role in education and communication in modern society. Learning to read establishes the connections between the visual word form area (VWFA) and language areas responsible for speech processing. Using resting-state functional connectivity (RSFC) and Granger Causality Analysis (GCA) methods, the current developmental study aimed to identify differences between adults and children in the relationship between VWFA-language area connections and reading performance. The results showed that: (1) the spontaneous connectivity between the VWFA and the spoken language areas, i.e., the left inferior frontal gyrus/supramarginal gyrus (LIFG/LSMG), was stronger in adults compared with children; (2) the spontaneous functional patterns of connectivity between the VWFA and the language network were negatively correlated with reading ability in adults but not in children; (3) the causal influence from the LIFG to the VWFA was negatively correlated with reading ability only in adults, not in children; (4) the RSFCs between the left posterior middle frontal gyrus (LpMFG) and the VWFA/LIFG were positively correlated with reading ability in both adults and children; and (5) the causal influence from the LIFG to the LSMG was positively correlated with reading ability in both groups. These findings provide insights into the relationship between the VWFA and the language network for reading, and the role of the unique features of Chinese in the neural circuits of reading.
Spencer, Sarah; Clegg, Judy; Stackhouse, Joy; Rush, Robert
Background: Well-documented associations exist between socio-economic background and language ability in early childhood, and between educational attainment and language ability in children with clinically referred language impairment. However, very little research has looked at the associations between language ability, educational attainment and…
Sedgwick, Carole; Garner, Mark
Non-native speakers of English who hold nursing qualifications from outside the UK are required to provide evidence of English language competence by achieving a minimum overall score of Band 7 on the International English Language Testing System (IELTS) academic test. To describe the English language required to deal with the daily demands of nursing in the UK. To compare these abilities with the stipulated levels on the language test. A tracking study was conducted with 4 nurses, and focus groups with 11 further nurses. The transcripts of the interviews and focus groups were analysed thematically for recurrent themes. These findings were then compared with the requirements of the IELTS spoken test. The study was conducted outside the participants' working shifts in busy London hospitals. The participants in the tracking study were selected opportunistically; all were trained in non-English speaking countries. Snowball sampling was used for the focus groups, of whom 4 were non-native and 7 native speakers of English. In the tracking study, each of the 4 nurses was interviewed on four occasions, outside the workplace, and as close to the end of a shift as possible. They were asked to recount their spoken interactions during the course of their shift. The participants in the focus groups were asked to describe their typical interactions with patients, family members, doctors, and nursing colleagues. They were prompted to recall specific instances of frequently occurring communication problems. All interactions were audio-recorded, with the participants' permission, and transcribed. Nurses are at the centre of communication for patient care. They have to use appropriate registers to communicate with a range of health professionals, patients and their families. They must elicit information, calm and reassure, instruct, check procedures, ask for and give opinions, agree and disagree. Politeness strategies are needed to avoid threats to face. They participate in medical
Schaefer, Blanca; Stackhouse, Joy; Wells, Bill
There is strong empirical evidence that English-speaking children with spoken language difficulties (SLD) often have phonological awareness (PA) deficits. The aim of this study was to explore longitudinally if this is also true of pre-school children speaking German, a language that makes extensive use of derivational morphemes which may impact on the acquisition of different PA levels. Thirty 4-year-old children with SLD were assessed on 11 PA subtests at three points over a 12-month period and compared with 97 four-year-old typically developing (TD) children. The TD-group had a mean percentage correct of over 50% for the majority of tasks (including phoneme tasks) and their PA skills developed significantly over time. In contrast, the SLD-group improved their PA performance over time on syllable and rhyme, but not on phoneme level tasks. Group comparisons revealed that children with SLD had weaker PA skills, particularly on phoneme level tasks. The study contributes a longitudinal perspective on PA development before school entry. In line with their English-speaking peers, German-speaking children with SLD showed poorer PA skills than TD peers, indicating that the relationship between SLD and PA is similar across these two related but different languages.
Aims: To investigate whether length of clinical experience influenced: number of bilingual children treated, languages spoken by these children, languages in which assessment and remediation can be offered, assessment instrument(s) favoured, and languages in which therapy material is required. Method: From questionnaires completed by 243 Health Professions Council of South Africa (HPCSA)-registered SLTs who treat children with language problems, two groups were drawn: 71 more experienced (ME) respondents (20+ years of experience) and 79 less experienced (LE) respondents (maximum 5 years of experience). Results: The groups did not differ significantly with regard to (1) number of children (monolingual or bilingual) with language difficulties seen, (2) number of respondents seeing child clients who have Afrikaans or an African language as home language, (3) number of respondents who can offer intervention in Afrikaans or English and (4) number of respondents who reported needing therapy material in Afrikaans or English. However, significantly more ME than LE respondents reported seeing first-language child speakers of English, whereas significantly more LE than ME respondents could provide services, and required therapy material, in African languages. Conclusion: More LE than ME SLTs could offer remediation in an African language, but there were few other significant differences between the two groups. There is still an absence of appropriate assessment and remediation material for Afrikaans and African languages, but the increased number of African language speakers entering the profession may contribute to better service delivery to the diverse South African population.
This article presents a feature analysis of four expository essays (Texts A/B/C/D) written by secondary school students, with a focus on the differences between spoken and written language. Texts C and D are better written than the other two (Texts A and B), which are considered more spoken in their language use. The language features are…
Zhao, Jingjing; Guo, Jingjing; Zhou, Fengying; Shu, Hua
Evidence from event-related potential (ERP) analyses of English spoken words suggests that the time course of English word recognition in monosyllables is cumulative. Different types of phonological competitors (i.e., rhymes and cohorts) modulate the temporal grain of ERP components differentially (Desroches, Newman, & Joanisse, 2009). The time course of Chinese monosyllabic spoken word recognition could be different from that of English due to the differences in syllable structure between the two languages (e.g., lexical tones). The present study investigated the time course of Chinese monosyllabic spoken word recognition using ERPs to record brain responses online while subjects listened to spoken words. During the experiment, participants were asked to compare a target picture with a subsequent picture by judging whether or not these two pictures belonged to the same semantic category. The spoken word was presented between the two pictures, and participants were not required to respond during its presentation. We manipulated phonological competition by presenting spoken words that either matched or mismatched the target picture in one of the following four ways: onset mismatch, rime mismatch, tone mismatch, or syllable mismatch. In contrast to the English findings, our findings showed that the three partial mismatches (onset, rime, and tone mismatches) equally modulated the amplitudes and time courses of the N400 (a negative component that peaks about 400 ms after the spoken word), whereas the syllable-mismatched words elicited an earlier and stronger N400 than the three partially mismatched words. The results shed light on the important role of syllable-level awareness in Chinese spoken word recognition and also imply that the recognition of Chinese monosyllabic words might rely more on global similarity of the whole syllable structure, or syllable-based holistic processing, rather than phonemic segment-based processing. We interpret the differences in spoken word
Ojima, Shiro; Matsuba-Kurita, Hiroko; Nakamura, Naoko; Hoshino, Takahiro; Hagiwara, Hiroko
Children's foreign-language (FL) learning is a matter of much social as well as scientific debate. Previous behavioral research indicates that starting language learning late in life can lead to problems in phonological processing. Inadequate phonological capacity may impede lexical learning and semantic processing (phonological bottleneck hypothesis). Using both behavioral and neuroimaging data, here we examine the effects of age of first exposure (AOFE) and total hours of exposure (HOE) to English, on 350 Japanese primary-school children's semantic processing of spoken English. Children's English proficiency scores and N400 event-related brain potentials (ERPs) were analyzed in multiple regression analyses. The results showed (1) that later, rather than earlier, AOFE led to higher English proficiency and larger N400 amplitudes, when HOE was controlled for; and (2) that longer HOE led to higher English proficiency and larger N400 amplitudes, whether AOFE was controlled for or not. These data highlight the important role of amount of exposure in FL learning, and cast doubt on the view that starting FL learning earlier always produces better results. Copyright © 2011 Elsevier Ireland Ltd and the Japan Neuroscience Society. All rights reserved.
Geers, Ann E; Sedey, Allison L
of sign to enhance language skills during the elementary years does not appear to have a negative impact on later language skills, students who continue to rely on sign to improve their vocabulary comprehension into high school typically exhibit poorer English language outcomes than students whose spoken language comprehension parallels or exceeds their comprehension of speech + sign. Overall, the language results obtained from these teenagers with more than 10 yrs of CI experience reflect substantial improvement over the verbal skills exhibited by adolescents with similar levels of hearing loss before the advent of CIs. These optimistic results were observed in teenagers who were among the first in the United States and Canada to receive a CI. We anticipate that the use of improved technology that is being initiated at even younger ages should lead to age-appropriate language levels in an even larger proportion of children with CIs.
Reviews what is known about Esperanto as a home language and first language. Cases of Esperanto-speaking families have been recorded since 1919, and in nearly all of the approximately 350 families documented, the language is spoken to the children by the father. The data suggest that this "artificial bilingualism" can be as successful…
Social Security Administration — This data set provides quarterly volumes for language preferences at the national level of individuals filing claims for SSI Blind and Disabled benefits for fiscal...
Social Security Administration — This data set provides quarterly volumes for language preferences at the national level of individuals filing claims for SSI Blind and Disabled benefits from fiscal...
Alt, Mary; Gutmann, Michelle L
This study was designed to test the word learning abilities of adults with typical language abilities, those with a history of disorders of spoken or written language (hDSWL), and hDSWL plus attention deficit hyperactivity disorder (+ADHD). Sixty-eight adults were required to associate a novel object with a novel label, and then recognize semantic features of the object and phonological features of the label. Participants were tested for overt ability (accuracy) and covert processing (reaction time). The +ADHD group was less accurate at mapping semantic features and slower to respond to lexical labels than both other groups. Different factors correlated with word learning performance for each group. Adults with language and attention deficits are more impaired at word learning than adults with language deficits only. Despite behavioral profiles like typical peers, adults with hDSWL may use different processing strategies than their peers. Readers will be able to: (1) recognize the influence of a dual disability (hDSWL and ADHD) on word learning outcomes; (2) identify factors that may contribute to word learning in adults in terms of (a) the nature of the words to be learned and (b) the language processing of the learner.
Marshall, Chloë; Jones, Anna; Denmark, Tanya; Mason, Kathryn; Atkinson, Joanna; Botting, Nicola; Morgan, Gary
measure predicted scores on those two executive-loaded NVWM tasks (with age and non-verbal reasoning partialled out). Our results suggest that whatever the language modality—spoken or signed—rich language experience from birth, and the good language skills that result from this early age of acquisition, play a critical role in the development of NVWM and in performance on NVWM tasks.
Marshall, Chloë; Jones, Anna; Denmark, Tanya; Mason, Kathryn; Atkinson, Joanna; Botting, Nicola; Morgan, Gary
predicted scores on those two executive-loaded NVWM tasks (with age and non-verbal reasoning partialled out). Our results suggest that whatever the language modality (spoken or signed), rich language experience from birth, and the good language skills that result from this early age of acquisition, play a critical role in the development of NVWM and in performance on NVWM tasks.
Houston, K. Todd
Since 1946, Utah State University (USU) has offered specialized coursework in audiology and speech-language pathology, awarding the first graduate degrees in 1948. In 1965, the teacher training program in deaf education was launched. Over the years, the Department of Communicative Disorders and Deaf Education (COMD-DE) has developed a rich history…
Chen, Pei-Hua; Liu, Ting-Wei
Telepractice provides an alternative form of auditory-verbal therapy (eAVT) intervention through videoconferencing; this can be of immense benefit for children with hearing loss, especially those living in rural or remote areas. The effectiveness of eAVT for the language development of Mandarin-speaking preschoolers with hearing loss was…
Nelson, Sarah; McDuffie, Andrea; Banasik, Amy; Tempero Feigles, Robyn; Thurman, Angela John; Abbeduto, Leonard
This study examined the impact of a distance-delivered, parent-implemented narrative language intervention on the use of inferential language during shared storytelling by school-aged boys with fragile X syndrome (FXS), an inherited neurodevelopmental disorder. Nineteen school-aged boys with FXS and their biological mothers participated. Dyads were randomly assigned to an intervention or a treatment-as-usual comparison group. Transcripts from all pre- and post-intervention sessions were coded for child use of prompted and spontaneous inferential language, sorted into various categories. Children in the intervention group used more utterances that contained inferential language than the comparison group at post-intervention. Furthermore, children in the intervention group used more prompted inferential language than the comparison group at post-intervention, but there were no differences between the groups in their spontaneous use of inferential language. Additionally, children in the intervention group demonstrated increases from pre- to post-intervention in their use of most categories of inferential language. This study provides initial support for the utility of a parent-implemented language intervention for increasing the use of inferential language by school-aged boys with FXS, but also suggests the need for additional treatment to encourage spontaneous use. Copyright © 2018 Elsevier Inc. All rights reserved.
Weisberg, Jill; McCullough, Stephen; Emmorey, Karen
Code-blends (simultaneous words and signs) are a unique characteristic of bimodal bilingual communication. Using fMRI, we investigated code-blend comprehension in hearing native ASL-English bilinguals who made a semantic decision (edible?) about signs, audiovisual words, and semantically equivalent code-blends. English and ASL recruited a similar fronto-temporal network with expected modality differences: stronger activation for English in auditory regions of bilateral superior temporal cortex, and stronger activation for ASL in bilateral occipitotemporal visual regions and left parietal cortex. Code-blend comprehension elicited activity in a combination of these regions, and no cognitive control regions were additionally recruited. Furthermore, code-blends elicited reduced activation relative to ASL presented alone in bilateral prefrontal and visual extrastriate cortices, and relative to English alone in auditory association cortex. Consistent with behavioral facilitation observed during semantic decisions, the findings suggest that redundant semantic content induces more efficient neural processing in language and sensory regions during bimodal language integration.
Šimáčková, Š.; Podlipský, V.J.; Chládková, K.
As a western Slavic language of the Indo-European family, Czech is closest to Slovak and Polish. It is spoken as a native language by nearly 10 million people in the Czech Republic (Czech Statistical Office n.d.). About two million people living abroad, mostly in the USA, Canada, Austria, Germany,
Southwood, Frenette; Van Dulm, Ondene
South African speech-language therapists (SLTs) currently do not reflect the country's linguistic and cultural diversity. The question arises as to who might be better equipped currently to provide services to multilingual populations: SLTs with more clinical experience in such contexts, or recently trained SLTs who are themselves linguistically and culturally diverse and whose training programmes deliberately focused on multilingualism and multiculturalism? To investigate whether length of clinical experience influenced: number of bilingual children treated, languages spoken by these children, languages in which assessment and remediation can be offered, assessment instrument(s) favoured, and languages in which therapy material is required. From questionnaires completed by 243 Health Professions Council of South Africa (HPCSA)-registered SLTs who treat children with language problems, two groups were drawn: 71 more experienced (ME) respondents (20+ years of experience) and 79 less experienced (LE) respondents (maximum 5 years of experience). The groups did not differ significantly with regard to (1) number of children (monolingual or bilingual) with language difficulties seen, (2) number of respondents seeing child clients who have Afrikaans or an African language as home language, (3) number of respondents who can offer intervention in Afrikaans or English and (4) number of respondents who reported needing therapy material in Afrikaans or English. However, significantly more ME than LE respondents reported seeing first-language child speakers of English, whereas significantly more LE than ME respondents could provide services, and required therapy material, in African languages. More LE than ME SLTs could offer remediation in an African language, but there were few other significant differences between the two groups. There is still an absence of appropriate assessment and remediation material for Afrikaans and African languages, but the increased number of African
Southwood, Frenette; van Dulm, Ondene
Background South African speech-language therapists (SLTs) currently do not reflect the country's linguistic and cultural diversity. The question arises as to who might be better equipped currently to provide services to multilingual populations: SLTs with more clinical experience in such contexts, or recently trained SLTs who are themselves linguistically and culturally diverse and whose training programmes deliberately focused on multilingualism and multiculturalism? Aims To investigate whether length of clinical experience influenced: number of bilingual children treated, languages spoken by these children, languages in which assessment and remediation can be offered, assessment instrument(s) favoured, and languages in which therapy material is required. Method From questionnaires completed by 243 Health Professions Council of South Africa (HPCSA)-registered SLTs who treat children with language problems, two groups were drawn: 71 more experienced (ME) respondents (20+ years of experience) and 79 less experienced (LE) respondents (maximum 5 years of experience). Results The groups did not differ significantly with regard to (1) number of children (monolingual or bilingual) with language difficulties seen, (2) number of respondents seeing child clients who have Afrikaans or an African language as home language, (3) number of respondents who can offer intervention in Afrikaans or English and (4) number of respondents who reported needing therapy material in Afrikaans or English. However, significantly more ME than LE respondents reported seeing first language child speakers of English, whereas significantly more LE than ME respondents could provide services, and required therapy material, in African languages. Conclusion More LE than ME SLTs could offer remediation in an African language, but there were few other significant differences between the two groups. There is still an absence of appropriate assessment and remediation material for Afrikaans and African
Kim, Daesang; Ruecker, Daniel; Kim, Dong-Joong
The purpose of this study was to investigate the benefits of learning with mobile technology for TESOL students and to explore their perceptions of learning with this type of technology. The study provided valuable insights on how students perceive and adapt to learning with mobile technology for effective learning experiences for both students…
Edmonds, Caroline J.; Pring, Linda
The two experiments reported here investigated the ability of sighted children and children with visual impairment to comprehend text and, in particular, to draw inferences both while reading and while listening. Children were assigned into "comprehension skill" groups, depending on the degree to which their reading comprehension skill was in line…
Kimppa, Lilli; Kujala, Teija; Shtyrov, Yury
Mastering multiple languages is an increasingly important ability in the modern world; furthermore, multilingualism may affect human learning abilities. Here, we test how the brain's capacity to rapidly form new representations for spoken words is affected by prior individual experience in non-native language acquisition. Formation of new word memory traces is reflected in a neurophysiological response increase during a short exposure to a novel lexicon. Therefore, we recorded changes in electrophysiological responses to phonologically native and non-native novel word-forms during a perceptual learning session, in which novel stimuli were repetitively presented to healthy adults in either ignore or attend conditions. We found that a larger number of previously acquired languages and an earlier average age of acquisition (AoA) predicted a greater response increase to novel non-native word-forms. This suggests that early and extensive language experience is associated with greater neural flexibility for acquiring novel words with unfamiliar phonology. Conversely, later AoA was associated with a stronger response increase for phonologically native novel word-forms, indicating better tuning of neural linguistic circuits to native phonology. The results suggest that individual language experience has a strong effect on the neural mechanisms of word learning, and that it interacts with the phonological familiarity of the novel lexicon. PMID:27444206
Describes how storytelling can enhance both literal and inferential comprehension, motivate oral discussion, increase perceptual knowledge of metaphor, explain and promote interesting language usage, instill deeper meaning to children's personal experiences, and excite children about literature, storytelling, and creative interpretations of story.…
Language Policies Pursued in the Axis of Othering and in the Process of Converting the Spoken Language of Turks Living in Russia into Their Written Language
Süleyman Kaan YALÇIN (M.A.)
Language is realized in two ways: spoken language and written language. Every language has the characteristics of a spoken language; however, not every language can have the characteristics of a written language, since there are requirements for a language to be deemed a written language. These requirements are selection, coding, standardization and becoming widespread. A language must meet these requirements, in either a natural or an artificial way, to be deemed a written (standard) language. Turkish, which developed as a single written language until the 13th century, was divided into West Turkish and North-East Turkish by meeting the requirements of a written language in a natural way. Following this separation, and through a natural process, it showed some internal differences; however, the policy of converting the spoken language of each Turkish clan into its own written language, a policy pursued by Russia in a planned way, turned Turkish, which entered the 20th century as a few written languages, into 20 different written languages. The implementation of discriminatory language policies suggested to the Russian government by missionaries such as Ilminsky and Ostroumov, the forcible imposition on each Turkish clan of a Cyrillic alphabet full of different and unnecessary signs, and the othering activities of the Soviet boarding schools had considerable effects on this process. This study aims to explain that the conversion of the spoken languages of the Turkish societies in Russia into written languages did not result from a natural process; to trace the historical development of Turkish, which was shaped into 20 separate written languages only because of the pressure exerted by political will; and to show how Russia subjected the concept of language, the memory of a nation, to an artificial process.
Smith, Ann Marie
This case study explores seventh grade students' experiences with writing and performing poetry. Teacher and student interviews along with class observations provide insight into how the teacher and students viewed spoken word poetry and identity. The researcher recommends practices for the teaching of critical literacy using spoken word and…
Dobson, Simon; Farragher, Linda
non-peer-reviewed Most modern programming languages are complex and feature-rich. Whilst this is (sometimes) an advantage for industrial-strength applications, it complicates both language teaching and language research. We describe our experiences in the design of a reduced subset of the Java language and its implementation using the Vanilla language development framework. We argue that Vanilla's component-based approach allows the language's feature set to be varied quickly and simp...
Allen, David; Cloyes, Kristin
This paper is an analysis of how the signifier 'experience' is used in nursing research. We identify a set of issues we believe accompany the use of experience but are rarely addressed. These issues are embedded in a spectrum that includes ontological commitments, visions of the person/self and its relation to 'society', understandings of research methodology and the politics of nursing. We argue that a poststructuralist understanding of the language of experience in research opens up additional ways to analyze the relationship between the conduct of nursing research and cultural/political commitments.
Zou, Lijuan; Desroches, Amy S.; Liu, Youyi; Xia, Zhichao; Shu, Hua
Orthographic influences in spoken word recognition have been previously examined in alphabetic languages. However, it is unknown whether orthographic information affects spoken word recognition in Chinese, which has a clean dissociation between orthography (O) and phonology (P). The present study investigated orthographic effects using event…
The comparatively small vowel inventory of Bantu languages leads young Bantu learners to produce "undifferentiations," so that, for example, the spoken forms of "hat," "hut," "heart," and "hurt" sound the same to a British ear. The two criteria for a non-native speaker's spoken performance are…
This chapter gives an overview of different experiments that have been performed to demonstrate how a symbolic communication system, including its underlying ontology, can arise in situated embodied interactions between autonomous agents. It gives some details of the Grounded Naming Game, which focuses on the formation of a system of proper names, the Spatial Language Game, which focuses on the formation of a lexicon for expressing spatial relations as well as perspective reversal, and an Event Description Game, which concerns the expression of the role of participants in events through an emergent case grammar. For each experiment, details are provided how the symbolic system emerges, how the interaction is grounded in the world through the embodiment of the agent and its sensori-motor processing, and how concepts are formed in tight interaction with the emerging language.
Brown, C.M.; Berkum, J.J.A. van; Hagoort, P.
A study is presented on the effects of discourse-semantic and lexical-syntactic information during spoken sentence processing. Event-related brain potentials (ERPs) were registered while subjects listened to discourses that ended in a sentence with a temporary syntactic ambiguity. The prior
This book provides a hands-on approach to learning ARM assembly language with the use of a TI microcontroller. The book starts with an introduction to computer architecture and then discusses number systems and digital logic. The text covers ARM Assembly Language, ARM Cortex Architecture and its components, and Hardware Experiments using the TI LM3S1968. Written for those interested in learning embedded programming using an ARM Microcontroller. · Introduces number systems and signal transmission methods · Reviews logic gates, registers, multiplexers, decoders and memory · Provides an overview and examples of the ARM instruction set · Uses Keil development tools for writing and debugging ARM assembly language programs · Hardware experiments using an Mbed NXP LPC1768 microcontroller, including General Purpose Input/Output (GPIO) configuration, real-time clock configuration, binary input to 7-segment display, creating ...
Lillian May; Krista Byers-Heinlein; Judit Gervain; Janet F. Werker
Previous research has shown that by the time of birth, the neonate brain responds specially to the native language when compared to acoustically similar non-language stimuli. In the current study, we use Near Infrared Spectroscopy to ask how prenatal language experience might shape the brain response to language in newborn infants. To do so, we examine the neural response of neonates when listening to familiar versus unfamiliar language, as well as to non-linguistic backwards language. Twenty...
This paper explores the impact of a Spoken Word Education Programme (SWEP hereafter) on young people's engagement with poetry in a group of schools in London, UK. It does so with reference to the secondary Discourses of school-based learning and the Spoken Word community, an artistic "community of practice" into which they were being…
There are numerous studies of color terms and names in many languages. In Mongolian, there are few doctoral theses on color naming. Cross-cultural studies of color naming include semantic relevance in French and Mongolian color names (Gerlee Sh., 2000); comparisons of color naming across English and Mongolian (Uranchimeg B., 2004); semantic comparison between Russian and Mongolian idioms (Enhdelger O., 1996); color symbolism (Dulam S., 2007); and a few others. There are also a few articles on color naming by Mongolian scholars, such as Tsevel, Ya. (1947), Baldan, L. (1979) and Bazarragchaa, M. (1997). Color naming is not sufficiently studied in Modern Mongolian; our research is considered the first intended study of color naming in Modern Mongolian, as it forms one part of a Ph.D. dissertation on the topic. There are two color naming categories in Mongolian, basic color terms and non-basic color terms, with seven basic color terms. This paper considers how Mongolian color names are derived from basic colors, using a psycholinguistic associative experiment. It helps students and researchers to acquire a specific understanding of the differences and similarities of color naming in Mongolian and English from the psycholinguistic aspect.
Thompson, Amy S.; Lee, Junkyu
This study aims to investigate the effect of experience abroad and second language proficiency on foreign language classroom anxiety. Particularly, this study is an attempt to fill the gap in the literature about the affective outcomes after experiences abroad through the anxiety profiles of Korean learners of English as a foreign language (EFL)…
Qu, Qingqing; Damian, Markus F
Extensive evidence from alphabetic languages demonstrates a role of orthography in the processing of spoken words. Because alphabetic systems explicitly code speech sounds, such effects are perhaps not surprising. However, it is less clear whether orthographic codes are involuntarily accessed from spoken words in languages with non-alphabetic systems, in which the sound-spelling correspondence is largely arbitrary. We investigated the role of orthography via a semantic relatedness judgment task: native Mandarin speakers judged whether or not spoken word pairs were related in meaning. Word pairs were either semantically related, orthographically related, or unrelated. Results showed that relatedness judgments were made faster for word pairs that were semantically related than for unrelated word pairs. Critically, orthographic overlap on semantically unrelated word pairs induced a significant increase in response latencies. These findings indicate that orthographic information is involuntarily accessed in spoken-word recognition, even in a non-alphabetic language such as Chinese.
Antovich, Dylan M; Graf Estes, Katharine
Bilingual acquisition presents learning challenges beyond those found in monolingual environments, including the need to segment speech in two languages. Infants may use statistical cues, such as syllable-level transitional probabilities, to segment words from fluent speech. In the present study we assessed monolingual and bilingual 14-month-olds' abilities to segment two artificial languages using transitional probability cues. In Experiment 1, monolingual infants successfully segmented the speech streams when the languages were presented individually. However, monolinguals did not segment the same language stimuli when they were presented together in interleaved segments, mimicking the language switches inherent to bilingual speech. To assess the effect of real-world bilingual experience on dual language speech segmentation, Experiment 2 tested infants with regular exposure to two languages using the same interleaved language stimuli as Experiment 1. The bilingual infants in Experiment 2 successfully segmented the languages, indicating that early exposure to two languages supports infants' abilities to segment dual language speech using transitional probability cues. These findings support the notion that early bilingual exposure prepares infants to navigate challenging aspects of dual language environments as they begin to acquire two languages. © 2017 John Wiley & Sons Ltd.
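The syllable-level transitional probability cue described in this abstract can be illustrated with a short sketch. The syllable stream, the three toy "words" and the boundary threshold below are invented for illustration; they are not the stimuli or analysis used in the study.

```python
from collections import Counter

def transitional_probabilities(syllables):
    """TP(a -> b) = count(a followed by b) / count(a), over adjacent pairs."""
    pair_counts = Counter(zip(syllables, syllables[1:]))
    first_counts = Counter(syllables[:-1])
    return {pair: n / first_counts[pair[0]] for pair, n in pair_counts.items()}

def segment(syllables, tps, threshold=0.9):
    """Insert a word boundary wherever the forward TP dips below the threshold."""
    words, current = [], [syllables[0]]
    for a, b in zip(syllables, syllables[1:]):
        if tps[(a, b)] < threshold:
            words.append("".join(current))
            current = []
        current.append(b)
    words.append("".join(current))
    return words

# Toy stream built from three "words" (tupiro, golabu, bidaku): within-word
# TPs are 1.0, while TPs across word boundaries are at most 2/3 here.
stream = ("tu pi ro go la bu tu pi ro bi da ku go la bu "
          "bi da ku tu pi ro go la bu bi da ku").split()
words = segment(stream, transitional_probabilities(stream))
print(words)  # every recovered chunk is one of the three toy words
```

With a stream this clean any threshold between 2/3 and 1.0 recovers the words exactly; real infant-directed stimuli have noisier TP profiles, which is part of what makes dual-language interleaving hard.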
Time-compressed spoken words enhance driving performance in complex visual scenarios : evidence of crossmodal semantic priming effects in basic cognitive experiments and applied driving simulator studies
Would speech warnings be a good option to inform drivers about time-critical traffic situations? Even though spoken words take time until they can be understood, listening is well trained from the earliest age and happens quite automatically. Therefore, it is conceivable that spoken words could immediately preactivate semantically identical (but physically diverse) visual information, and thereby enhance respective processing. Interestingly, this implies a crossmodal semantic effect of audito...
Liu, Y.; Wang, M.; Perfetti, C.A.; Brubaker, B.; Wu, S.M.; MacWhinney, B.
Learning the Chinese tone system is a major challenge to students of Chinese as a second or foreign language. Part of the problem is that the spoken Chinese syllable presents a complex perceptual input that overlaps tone with segments. This complexity can be addressed through directing attention to
Kristmanson, Paula; Lafargue, Chantal; Culligan, Karla
This article focuses on the experiences of Grade 12 students using a language portfolio based on the principles and guidelines of the European Language Portfolio (ELP) in their second language classes in a large urban high school. As part of a larger action-research project, focus group interviews were conducted to gather data related to…
This is an account of how one class of English language learners compared and contrasted their language learning experiences with English language teaching (ELT) research findings during a five-week Intensive Academic Preparation course at an Australian university. It takes as its starting point the fact that learners, unlike teachers and…
Lipski, John M.
The need to teach students speaking skills in Spanish, and to choose among the many standard dialects spoken in the Hispanic world (as well as literary and colloquial speech), presents a challenge to the Spanish teacher. Some phonetic considerations helpful in solving these problems are offered. (CHK)
Aguilera, Dorothy; LeCompte, Margaret D.
This article examines the experiences of three Indigenous communities with language immersion models in preschool through 12th grades to revitalize and preserve their native languages through ethnographic research design and methods. The history and implementation of language instruction in three Indigenous communities are summarized. The analysis…
Thompson, Robin L; Vinson, David P; Woll, Bencie; Vigliocco, Gabriella
An arbitrary link between linguistic form and meaning is generally considered a universal feature of language. However, iconic (i.e., nonarbitrary) mappings between properties of meaning and features of linguistic form are also widely present across languages, especially signed languages. Although recent research has shown a role for sign iconicity in language processing, research on the role of iconicity in sign-language development has been mixed. In this article, we present clear evidence that iconicity plays a role in sign-language acquisition for both the comprehension and production of signs. Signed languages were taken as a starting point because they tend to encode a higher degree of iconic form-meaning mappings in their lexicons than spoken languages do, but our findings are more broadly applicable: Specifically, we hypothesize that iconicity is fundamental to all languages (signed and spoken) and that it serves to bridge the gap between linguistic form and human experience.
The study of discourse is the study of language in actual use. In this article, the writer investigates the phonological features, both segmental and supra-segmental, in the spoken discourse of Indonesian university students. The data were taken from recordings of 15 conversations by 30 students of Bina Nusantara University who were taking the English Entrant subject (TOEFL-iBT). The writer is of the opinion that the students are still influenced by their first language in their spoken discourse, which results in English with an Indonesian accent. Even though it does not cause misunderstanding at the moment, this may become problematic if they have to communicate in the real world.
Foreign language anxiety (FLA) has long been recognized as a factor that hinders the process of foreign language learning at all levels. Among the numerous FLA sources identified in the literature, the language classroom seems to be of particular interest and significance, especially in the formal language learning context, where the course and the teacher are often the only representatives of the language. The main purpose of the study is to determine the presence and potential sources of foreign language anxiety among first-year university students and to explore how high anxiety levels shape and affect students' foreign language learning experience. In the study, both a questionnaire and interviews were used as data collection methods. Thematic analysis of the interviews and descriptive statistics suggest that most anxiety-provoking situations stem from the language classroom itself.
Video is becoming a prevalent medium for e-learning. Lecture videos contain text information in both the presentation slides and lecturer's speech. This paper examines the relative utility of automatically recovered text from these sources for lecture video retrieval. To extract the visual information, we automatically detect slides within the videos and apply optical character recognition to obtain their text. Automatic speech recognition is used similarly to extract spoken text from the recorded audio. We perform controlled experiments with manually created ground truth for both the slide and spoken text from more than 60 hours of lecture video. We compare the automatically extracted slide and spoken text in terms of accuracy relative to ground truth, overlap with one another, and utility for video retrieval. Results reveal that automatically recovered slide text and spoken text contain different content with varying error profiles. Experiments demonstrate that automatically extracted slide text enables higher precision video retrieval than automatically recovered spoken text.
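The comparison described above, accuracy of each extracted text source against ground truth and overlap between the sources, can be sketched with two simple metrics. The bag-of-words recall, Jaccard overlap, and toy transcripts below are illustrative assumptions, not the paper's evaluation protocol.

```python
from collections import Counter

def word_recall(extracted, ground_truth):
    """Fraction of ground-truth word tokens recovered in the extracted text."""
    ext = Counter(extracted.lower().split())
    gt = Counter(ground_truth.lower().split())
    recovered = sum(min(count, ext[word]) for word, count in gt.items())
    return recovered / sum(gt.values())

def overlap(text_a, text_b):
    """Jaccard overlap between the word sets of two extracted texts."""
    a, b = set(text_a.lower().split()), set(text_b.lower().split())
    return len(a & b) / len(a | b)

# Toy transcripts: OCR drops a word, ASR misrecognizes a word.
ground_truth = "support vector machines maximize the margin"
slide_text = "support vector machines maximize margin"    # OCR dropped "the"
asr_text = "support factor machines maximize the margin"  # ASR heard "factor"

print(word_recall(slide_text, ground_truth))  # recall of the slide OCR text
print(word_recall(asr_text, ground_truth))    # recall of the spoken (ASR) text
print(overlap(slide_text, asr_text))          # how much the two sources share
```

The point of the toy example mirrors the paper's finding: two sources can have similar accuracy yet differ in content, so their errors (and their retrieval utility) are not interchangeable.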
Individuals with language barriers may face challenges unique to a host society. By examining and comparing the sociocultural conditions that can result in providers and patients not sharing the same language in the United States and in Taiwan, I argue that (a) language discordance is a social phenomenon that may entail diverging meanings and experiences in different countries; (b) language-discordant patients may not share similar experiences even if they are in the same country; and (c) disparities in language concordance may be confounded with other disparities and cultural particulars that are unique to a host society. In addition, because English is a dominant language in medicine, language-discordant patients' quality of care in Taiwan can be moderated by their fluency in English.
Mishra, Ramesh Kumar; Singh, Niharika
Previous psycholinguistic studies have shown that bilinguals activate lexical items of both the languages during auditory and visual word processing. In this study we examined if Hindi-English bilinguals activate the orthographic forms of phonological neighbors of translation equivalents of the non target language while listening to words either…
Goldman, Jerry; Renals, Steve; Bird, Steven; de Jong, Franciska; Federico, Marcello; Fleischhauer, Carl; Kornbluh, Mark; Lamel, Lori; Oard, Douglas W; Stewart, Claire; Wright, Richard
Spoken-word audio collections cover many domains, including radio and television broadcasts, oral narratives, governmental proceedings, lectures, and telephone conversations. The collection, access, and preservation of such data is stimulated by political, economic, cultural, and educational needs. This paper outlines the major issues in the field, reviews the current state of technology, examines the rapidly changing policy issues relating to privacy and copyright, and presents issues relati...
Thompson, Robin L.; Vinson, David P.; Vigliocco, Gabriella
Signed languages exploit iconicity (the transparent relationship between meaning and form) to a greater extent than spoken languages. where it is largely limited to onomatopoeia. In a picture-sign matching experiment measuring reaction times, the authors examined the potential advantage of iconicity both for 1st- and 2nd-language learners of…
Flores, Glenn; Tomany-Korman, Sandra C
Fifty-five million Americans speak a non-English primary language at home, but little is known about health disparities for children in non-English-primary-language households. Our study objective was to examine whether disparities in medical and dental health, access to care, and use of services exist for children in non-English-primary-language households. The National Survey of Children's Health was a telephone survey in 2003-2004 of a nationwide sample of parents of 102 353 children 0 to 17 years old. Disparities in medical and oral health and health care were examined for children in non-English-primary-language households compared with children in English-primary-language households, both in bivariate analyses and in multivariable analyses that adjusted for 8 covariates (child's age, race/ethnicity, and medical or dental insurance coverage; caregiver's highest educational attainment and employment status; number of children and adults in the household; and poverty status). Children in non-English-primary-language households were significantly more likely than children in English-primary-language households to be poor (42% vs 13%) and Latino or Asian/Pacific Islander. Significantly higher proportions of children in non-English-primary-language households were not in excellent/very good health (43% vs 12%), were overweight/at risk for overweight (48% vs 39%), had teeth in fair/poor condition (27% vs 7%), and were uninsured (27% vs 6%), sporadically insured (20% vs 10%), and lacking dental insurance (39% vs 20%). Children in non-English-primary-language households more often had no usual source of medical care (38% vs 13%), made no medical (27% vs 12%) or preventive dental (14% vs 6%) visits in the previous year, and had problems obtaining specialty care (40% vs 23%). Latino and Asian children in non-English-primary-language households had several unique disparities compared with white children in non-English-primary-language households. Almost all disparities…
Ricketts, Jessie; Dockrell, Julie E; Patel, Nita; Charman, Tony; Lindsay, Geoff
This experiment investigated whether children with specific language impairment (SLI), children with autism spectrum disorders (ASD), and typically developing children benefit from the incidental presence of orthography when learning new oral vocabulary items. Children with SLI, children with ASD, and typically developing children (n=27 per group) between 8 and 13 years of age were matched in triplets for age and nonverbal reasoning. Participants were taught 12 mappings between novel phonological strings and referents; half of these mappings were trained with orthography present and half were trained with orthography absent. Groups did not differ on the ability to learn new oral vocabulary, although there was some indication that children with ASD were slower than controls to identify newly learned items. During training, the ASD, SLI, and typically developing groups benefited from orthography to the same extent. In supplementary analyses, children with SLI were matched in pairs to an additional control group of younger typically developing children for nonword reading. Compared with younger controls, children with SLI showed equivalent oral vocabulary acquisition and benefit from orthography during training. Our findings are consistent with current theoretical accounts of how lexical entries are acquired and replicate previous studies that have shown orthographic facilitation for vocabulary acquisition in typically developing children and children with ASD. We demonstrate this effect in SLI for the first time. The study provides evidence that the presence of orthographic cues can support oral vocabulary acquisition, motivating intervention approaches (as well as standard classroom teaching) that emphasize the orthographic form. Copyright © 2015 Elsevier Inc. All rights reserved.
This paper investigates the online presence of Low German, a minority language spoken in northern Germany, as well as several other European regional and minority languages. In particular, this article presents the results of two experiments, one involving "Wikipedia" and one involving "Twitter," that assess whether and to…
Maizatulliza, M.; Kiely, R.
In the field of English language teaching and learning, there is a long history of investigating students' performance while they are undergoing specific learning programmes. This research study, however, focused on students' evaluation of their English language learning experience after they have completed their programme. The data were gathered…
Analyzes the Japanese language learning experiences of 13 hotel employees in Guam. Results of the study present implications and suggestions for a Japanese language program for the hotel industry. The project began as a result of hotel employees' frustrations when they were unable to communicate effectively with their Japanese guests. (Auth/JL)
This study investigates the self-reported experiences of students participating in a Galician language and culture course. Galician, a language historically spoken in northwestern Spain, has been losing ground with respect to Spanish, particularly in urban areas and among the younger generations. The research specifically focuses on informal…
Hall, Matthew L.; Ferreira, Victor S.; Mayberry, Rachel I.
Psycholinguistic studies of sign language processing provide valuable opportunities to assess whether language phenomena, which are primarily studied in spoken language, are fundamentally shaped by peripheral biology. For example, we know that when given a choice between two syntactically permissible ways to express the same proposition, speakers tend to choose structures that were recently used, a phenomenon known as syntactic priming. Here, we report two experiments testing syntactic primin...
Pfau, R.; Steinbach, M.; Woll, B.
Sign language linguists show here that all the questions relevant to the linguistic investigation of spoken languages can be asked about sign languages. Conversely, questions that sign language linguists consider - even if spoken language researchers have not asked them yet - should also be asked of spoken languages.
Mariotti, Cristina; Caimi, Annamaria
The articles collected in this publication combine diachronic and synchronic research with the description of updated teaching experiences showing the educational role of subtitled audiovisuals in various foreign language learning settings.
Foreign language learning experience, foreign language learning motivation and European multilingualism : an Irish approach with reference to findings in the Netherlands and the United Kingdom / Fionnuala Kennedy ; Konrad Schröder. - In: Fremdsprachen im europäischen Haus / hrsg. von Konrad Schröder. - Frankfurt am Main : Diesterweg, 1992. - S. 434-452. - (Die neueren Sprachen ; 91/4-5)
Wilkinson, R; Hegner, B; Jones, C D
Being a highly dynamic language and allowing reliable programming with quick turnarounds, Python is a widely used programming language in CMS. Most of the tools used in workflow management and the GRID interface tools are written in this language. Also most of the tools used in the context of release management: integration builds, release building and deploying, as well as performance measurements are in Python. With an interface to the CMS data formats, rapid prototyping of analyses and debugging is an additional use case. Finally in 2008 the CMS experiment switched to using Python as its configuration language. This paper will give an overview of the general usage of Python in the CMS experiment and discuss which features of the language make it well-suited for the existing use cases.
Teachers’ practical knowledge is considered as teachers’ general knowledge, beliefs and thinking (Borg, 2003) which can be traced in teachers’ practices (Connelly & Clandinin, 1988) and shaped by various background sources (Borg, 2003; Grossman, 1990; Meijer, Verloop, and Beijard, 1999). This paper initially discusses how language teachers are influenced by three background sources: teachers’ prior language learning experiences, prior teaching experience, and professional coursework in pr...
Leech, Geoffrey; Wilson, Andrew (both of Lancaster University)
Word Frequencies in Written and Spoken English is a landmark volume in the development of vocabulary frequency studies. Whereas previous books have in general given frequency information about the written language only, this book provides information on both speech and writing. It not only gives information about the language as a whole, but also about the differences between spoken and written English, and between different spoken and written varieties of the language. The frequencies are derived from a wide-ranging and up-to-date corpus of English: the British National Corpus.
The Rio Grande Valley (RGV) is a region populated by Spanish-speaking immigrants and their descendants, producing a large English Language Learner (ELL) student population. ELLs have historically had low literacy rates and achievement levels when compared to their counterparts. In order to address this achievement gap, previous research efforts and curriculum interventions have focused on language acquisition as the determining factor in ELL education, with little attention given to academic content acquisition. More current research efforts have transitioned into English language acquisition through academic content instruction; this present research study specifically focuses on ELL experiences in chemistry. Participants were high school chemistry students who identified as ELL or had recently exited out of ELL status. Students were interviewed to identify factors that contributed to their experiences in chemistry. Findings indicate code-switching as key to learning chemistry in English but also as a deterrent to English language acquisition.
Breach, H. T.
Describes an experiment with a second-year class of about 35 pupils in teaching English, French and Indonesian in an interconnected way, involving art and social studies as well as language and literature. Although the experiment is as yet unevaluated, the general effect was considered encouraging. (KM)
Hannagan, Thomas; Magnuson, James S.; Grainger, Jonathan
How do we map the rapid input of spoken language onto phonological and lexical representations over time? Attempts at psychologically tractable computational models of spoken word recognition tend either to ignore time or to transform the temporal input into a spatial representation. TRACE, a connectionist model with broad and deep coverage of speech perception and spoken word recognition phenomena, takes the latter approach, using exclusively time-specific units at every level of representation. TRACE reduplicates featural, phonemic, and lexical inputs at every time step in a large memory trace, with rich interconnections (excitatory forward and backward connections between levels and inhibitory links within levels). As the length of the memory trace is increased, or as the phoneme and lexical inventory of the model is increased to a realistic size, this reduplication of time- (temporal position) specific units leads to a dramatic proliferation of units and connections, raising the question of whether a more efficient approach is possible. Our starting point is the observation that models of visual object recognition—including visual word recognition—have grappled with the problem of spatial invariance, and arrived at solutions other than a fully-reduplicative strategy like that of TRACE. This inspires a new model of spoken word recognition that combines time-specific phoneme representations similar to those in TRACE with higher-level representations based on string kernels: temporally independent (time invariant) diphone and lexical units. This reduces the number of necessary units and connections by several orders of magnitude relative to TRACE. Critically, we compare the new model to TRACE on a set of key phenomena, demonstrating that the new model inherits much of the behavior of TRACE and that the drastic computational savings do not come at the cost of explanatory power. PMID:24058349
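The time-invariant string-kernel coding described above can be sketched in a few lines. This is a minimal illustration, not the authors' model: the helper names are invented, and letters stand in for phonemes.

```python
from collections import Counter
from itertools import combinations

def open_diphones(phonemes):
    """Code a word as counts of ordered (not necessarily adjacent)
    phoneme pairs: a time-invariant but order-sensitive representation."""
    return Counter(combinations(phonemes, 2))

def kernel(word1, word2):
    """Unnormalized string-kernel similarity: shared open-diphone mass."""
    d1, d2 = open_diphones(word1), open_diphones(word2)
    return sum(min(count, d2[pair]) for pair, count in d1.items())

# Anagrams share all of their single phonemes yet differ in order,
# so the diphone code tells them apart without time-specific units.
print(kernel(list("cat"), list("cat")))  # 3
print(kernel(list("cat"), list("act")))  # 2
```

Because the representation is a bag of ordered pairs rather than a position-by-position trace, its size grows with the phoneme inventory squared rather than with utterance length, which is the source of the savings the abstract describes.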
Dagsvold, Inger; Møllersen, Snefrid; Stordahl, Vigdis
The Indigenous population in Norway, the Sami, have a statutory right to speak and be spoken to in the Sami language when receiving health services. There is, however, limited knowledge about how clinicians deal with this in clinical practice. This study explores how clinicians deal with language-appropriate care with Sami-speaking patients in specialist mental health services. This study aims to explore how clinicians identify and respond to Sami patients' language data, as well as how they experience provision of therapy to Sami-speaking patients in outpatient mental health clinics in Sami language administrative districts. Data were collected using a qualitative method, through individual interviews with 20 therapists working in outpatient mental health clinics serving Sami populations in northern Norway. A thematic analysis inspired by systematic text reduction was employed. Two themes were identified: (a) identification of Sami patients' language data and (b) experiences with provision of therapy to Sami-speaking patients. Findings indicate that clinicians are not aware of patients' language needs prior to admission and that they deal with identification of language data and offer of language-appropriate care ad hoc when patients arrive. Sami-speaking participants reported always offering language choice and found more profound understanding of patients' experiences when Sami language was used. Whatever language Sami-speaking patients may choose, they are found to switch between languages during therapy. Most non-Sami-speaking participants reported offering Sami-speaking services, but the patients chose to speak Norwegian. However, a few of the participants maintained language awareness and could identify language needs despite a patient's refusal to speak Sami in therapy. Finally, some non-Sami-speaking participants were satisfied if they understood what the patients were saying. They left it to patients to address language problems, only to discover patients
In an increasingly competitive environment, with reduced government funding, full fee-paying international students are an important source of revenue for higher education institutions (HEIs). Although many previous studies have focused on the role of English language proficiency in academic success, little is known about the extent to which levels of English language proficiency affect these non-native English speaking students’ overall course experience. There has been a wealth of st...
Boulet, J R; van Zanten, M; McKinley, D W; Gary, N E
The purpose of this study was to gather additional evidence for the validity and reliability of spoken English proficiency ratings provided by trained standardized patients (SPs) in a high-stakes clinical skills examination. Over 2500 candidates who took the Educational Commission for Foreign Medical Graduates' (ECFMG) Clinical Skills Assessment (CSA) were studied. The CSA consists of 10 or 11 timed clinical encounters. Standardized patients evaluate spoken English proficiency and interpersonal skills in every encounter. Generalizability theory was used to estimate the consistency of spoken English ratings. Validity coefficients were calculated by correlating summary English ratings with CSA scores and other external criterion measures. Mean spoken English ratings were also compared by various candidate background variables. The reliability of the spoken English ratings, based on 10 independent evaluations, was high. The magnitudes of the associated variance components indicated that the evaluation of a candidate's spoken English proficiency is unlikely to be affected by the choice of cases or SPs used in a given assessment. Proficiency in spoken English was related to native language (English versus other) and scores from the Test of English as a Foreign Language (TOEFL). The pattern of the relationships, both within assessment components and with external criterion measures, suggests that valid measures of spoken English proficiency are obtained. This result, combined with the high reproducibility of the ratings over encounters and SPs, supports the use of trained SPs to measure spoken English skills in a simulated medical environment.
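The generalizability analysis summarized above can be sketched for the simplest one-facet case, persons crossed with encounters. This is a generic G-study computation over toy ratings, not the ECFMG's actual analysis; the function name and data are invented.

```python
def g_coefficient(scores):
    """Relative G coefficient for a fully crossed persons x encounters
    design with one score per cell: var_person / (var_person + var_res / n_e).
    `scores` is a list of rows, one row of encounter ratings per candidate."""
    n_p, n_e = len(scores), len(scores[0])
    grand = sum(map(sum, scores)) / (n_p * n_e)
    person_means = [sum(row) / n_e for row in scores]
    encounter_means = [sum(scores[i][j] for i in range(n_p)) / n_p
                       for j in range(n_e)]
    # Mean squares from the two-way ANOVA decomposition.
    ms_person = n_e * sum((m - grand) ** 2 for m in person_means) / (n_p - 1)
    ms_res = sum((scores[i][j] - person_means[i] - encounter_means[j] + grand) ** 2
                 for i in range(n_p) for j in range(n_e)) / ((n_p - 1) * (n_e - 1))
    # Expected-mean-square solution for the person variance component.
    var_person = max((ms_person - ms_res) / n_e, 0.0)
    return var_person / (var_person + ms_res / n_e)

# Toy ratings: 3 candidates x 3 encounters, perfectly consistent encounters.
print(g_coefficient([[1, 1, 1], [5, 5, 5], [9, 9, 9]]))  # 1.0
```

When candidate differences dominate the residual (person-by-encounter) variance, the coefficient approaches 1, which is the pattern the abstract reports for spoken English ratings averaged over 10 encounters.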
Krahmer, E.; Swerts, M.; Theune, Mariet; Weegels, M.
Given the state of the art of current language and speech technology, errors are unavoidable in present-day spoken dialogue systems. Therefore, one of the main concerns in dialogue design is how to decide whether or not the system has understood the user correctly. In human-human communication,
Kobayashi, Yuichiro; Abe, Mariko
The purpose of the present study is to assess second language (L2) spoken English using automated scoring techniques. Automated scoring aims to classify a large set of learners' oral performance data into a small number of discrete oral proficiency levels. In automated scoring, objectively measurable features such as the frequencies of lexical and…
ter Maat, Mark; Heylen, Dirk K.J.; Vilhjálmsson, Hannes; Kopp, Stefan; Marsella, Stacy; Thórisson, Kristinn
This paper introduces Flipper, a specification language and interpreter for Information State Update rules that can be used for developing spoken dialogue systems and embodied conversational agents. The system uses XML-templates to modify the information state and to select behaviours to perform.
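The information-state update mechanism Flipper interprets can be sketched generically as rules that fire on the state until quiescence. Plain Python stands in for Flipper's XML templates here; the rule, state keys, and behaviour name are all invented for illustration.

```python
# A toy information-state update (ISU) loop in the spirit of Flipper.
def run_rules(state, rules):
    """Fire any rule whose precondition holds on the state, apply its
    effect, and repeat until the information state stops changing."""
    changed = True
    while changed:
        changed = False
        for precondition, effect in rules:
            if precondition(state):
                effect(state)
                changed = True
    return state

rules = [
    # If the user greeted us and we have not yet responded,
    # record that fact and select a behaviour to perform.
    (lambda s: s.get("user_said") == "hello" and not s.get("greeted"),
     lambda s: s.update(greeted=True, next_behaviour="greet_back")),
]

state = run_rules({"user_said": "hello"}, rules)
print(state["next_behaviour"])  # greet_back
```

The effect marks the state so the precondition no longer holds, which is what lets the loop terminate; Flipper's XML templates express the same precondition/effect split declaratively.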
English language learners are often more grammatically accurate in writing than in speaking. As students focus on meaning while speaking, their spoken fluency comes at a cost: their grammatical accuracy decreases. The author wanted to find a way to help her students improve their oral grammar; that is, she wanted them to focus on grammar while…
In spite of the vast numbers of articles devoted to vocabulary acquisition in a foreign language, few studies address the contribution of lexical knowledge to spoken fluency. The present article begins with basic definitions of the temporal characteristics of oral fluency, summarizing L1 research over several decades, and then presents fluency…
This paper sets out to examine the phonological interference in the spoken English performance of the Izon speaker. It emphasizes that the level of interference is not just as a result of the systemic differences that exist between both language systems (Izon and English) but also as a result of the interlanguage factors such ...
Corpus-based grammars, notably "Cambridge Grammar of English," give explicit information on the forms and use of native-speaker grammar, including spoken grammar. Native-speaker norms as a necessary goal in language teaching are contested by supporters of English as a Lingua Franca (ELF); however, this article argues for the inclusion of selected…
de Jong, Franciska M.G.; Heeren, W.F.L.; van Hessen, Adrianus J.; Ordelman, Roeland J.F.; Nijholt, Antinus; Ruiz Miyares, L.; Alvarez Silva, M.R.
Archival practice is shifting from the analogue to the digital world. A specific subset of heritage collections that impose interesting challenges for the field of language and speech technology are spoken word archives. Given the enormous backlog at audiovisual archives of unannotated materials and
Moran, Catherine; Kirk, Cecilia; Powell, Emma
Purpose: The aim of this study was to examine the performance of adolescents with acquired brain injury (ABI) during a spoken persuasive discourse task. Persuasive discourse is frequently used in social and academic settings and is of importance in the study of adolescent language. Method: Participants included 8 adolescents with ABI and 8 peers…
Hua, Zhu; Wei, Li
Transnational and multilingual families have become commonplace in the twenty-first century. Yet relatively few attempts have been made from applied and socio-linguistic perspectives to understand what is going on "within" such families; how their transnational and multilingual experiences impact on the family dynamics and their everyday…
van Schendel, W.; Guhathakurta, M.; van Schendel, W.
The partition of 1947 created two new independent states, India and Pakistan. The eastern part of Bengal joined Pakistan. Pakistan was a highly ambitious experiment in twentieth-century state making. And yet, from the beginning the state was beset with enormous challenges. This excerpt from a recent
We investigate modeling strategies for English code-switched words as found in a Swahili spoken term detection system. Code switching, where speakers switch language in a conversation, occurs frequently in multilingual environments, and typically...
In this essay I argue that the idea of inhabiting, and of human individuality as the house of being, are fruitful ideas if located in a space defined by movement, porosity, interstitiality, and in an urban and architectural paradigm which is based on openness and inclusiveness. Transnational experiences and localities can be, to this end, extremely instructive. It is essential to articulate the notion of dwelling within an urban context in which building is the result of complex cultural and social interactions, which are characterised not only by the negotiation of space and materials but also, and more importantly, by a range of symbolic values. The symbolism that I refer to here is the product of mnemonic and emotional experiences marked by time and space, which in the case of the migratory and transnational experiences is arrived at through a delicate negotiation of the past and the present, and the ‘here’ (the current locality) and the ‘there’ (the native locality). The dwelling that I speak of is, therefore, a double dwelling divided between the present at-hand and the remembered past, and as such it inhabits a space, which is both interstitial and liminal, simultaneously in and out-of-place. I have chosen the Italian Forum in Sydney as a working sample of the place-out-of-place
Van Engen, Kristin J; Chandrasekaran, Bharath; Smiljanic, Rajka
Extensive research shows that inter-talker variability (i.e., changing the talker) affects recognition memory for speech signals. However, relatively little is known about the consequences of intra-talker variability (i.e. changes in speaking style within a talker) on the encoding of speech signals in memory. It is well established that speakers can modulate the characteristics of their own speech and produce a listener-oriented, intelligibility-enhancing speaking style in response to communication demands (e.g., when speaking to listeners with hearing impairment or non-native speakers of the language). Here we conducted two experiments to examine the role of speaking style variation in spoken language processing. First, we examined the extent to which clear speech provided benefits in challenging listening environments (i.e. speech-in-noise). Second, we compared recognition memory for sentences produced in conversational and clear speaking styles. In both experiments, semantically normal and anomalous sentences were included to investigate the role of higher-level linguistic information in the processing of speaking style variability. The results show that acoustic-phonetic modifications implemented in listener-oriented speech lead to improved speech recognition in challenging listening conditions and, crucially, to a substantial enhancement in recognition memory for sentences.
Chacon, Thiago Costa
This dissertation offers a detailed account of the phonology, morphophonology and elements of the morphosyntax of Kubeo, a language from the Eastern Tukanoan family, spoken in the Northwest Amazon. The dissertation is itself an experiment of how language documentation and empowering of the native speaker community can be combined with academic…
Wong, Ka F.; Xiao, Yang
The goal of this study is to explore the identity constructions of Chinese heritage language students from dialect backgrounds. Their experiences in learning Mandarin as a "heritage" language--even though it is spoken neither at home nor in their immediate communities--highlight how identities are produced, processed, and practiced in our…
Hu, Chieh-Fang; Schuele, C. Melanie
Although language experience is a key factor in successful foreign language (FL) learning, many FL learners fail to achieve performance levels that were predicted on the basis of their FL experience. This retrospective study investigated early cognitive and linguistic correlates of learning English as a foreign language (FL) in a group of…
Song, Lulu; Tamis-Lemonda, Catherine S; Yoshikawa, Hirokazu; Kahana-Kalman, Ronit; Wu, Irene
We longitudinally investigated parental language context and infants' language experiences in relation to Dominican American and Mexican American infants' vocabularies. Mothers provided information on parental language context, comprising measures of parents' language background (i.e., childhood language) and current language use during interviews at infants' birth. Infants' language experiences were measured at ages 14 months and 2 years through mothers' reports of mothers' and fathers' engagement in English and Spanish literacy activities with infants and mothers' English and Spanish utterances during videotaped mother-infant interactions. Infants' vocabulary development at 14 months and 2 years was examined using standardized vocabulary checklists in English and Spanish. Both parental language context and infants' language experiences predicted infants' vocabularies in each language at both ages. Furthermore, language experiences mediated associations between parental language context and infants' vocabularies. However, the specific mediation mechanisms varied by language.
Code-switched English Pronunciation Modeling for Swahili Spoken Term Detection (Neil... et al.). In: Spoken Language Technologies for Under-resourced Languages (SLTU 2016), 9-12 May 2016, Yogyakarta, Indonesia; Procedia Computer Science 81 (2016), 128-135. Abstract: We investigate modeling strategies for English code-switched words as found in a Swahili spoken term detection system. Our research focuses on pronunciation modeling of English (embedded language) words within
Given the nature of spoken text, the first requirement of an appropriate grammar is its ability to account for stretches of language (including recurring types of text or genres), in addition to clause level patterns. Second, the grammatical model needs to be part of a wider theory of language that recognises the functional nature and educational purposes of spoken text. The model also needs to be designed in a sufficiently comprehensive way so as to account for grammatical forms in speech...
This article deals with the essence of religion proposed by Schleiermacher, namely 'the feeling of absolute dependence upon the Infinite'. In his theory of religious experience, and the language he used to express it, he claimed his work to be independent of concepts and beliefs. Epistemologically this is incompatible.
Shin, Sarah J.
Investigated the language experience of second-generation immigrant Korean American school-age children (4-18 years) by surveying their parents. Reports responses to a small portion of the questionnaire that specifically addressed the issue of birth order. (Author/VWL)
and automation to BPM tools through a tool experiment in Danske Bank, a large financial institute. We develop business process modeling languages, tools and transformations that capture Danske Bank's specific modeling concepts and use of technology, and which automate the generation of code. An empirical evaluation shows that Danske Bank will possibly gain remarkable improvements in development productivity and the quality of the implemented code. This leads us to the conclusion that BPM tools should provide flexibility to allow customization of languages, tools and transformations to the specific needs...
This qualitative study explored the use of doodling to surface experiences in the psychological phenomenon of language anxiety in an English classroom. It treated the doodles of 192 freshmen from a premier university in Northern Luzon, Philippines. Further, it made use of phenomenological reduction in analysing the data gathered. Findings reveal…
This paper aims to gain insight into (Spanish) tourists' multilingual experiences by analyzing spontaneously written online travel diaries. Using the conceptual framework of Rapport Management Theory (RMT; Spencer-Oatey 2008), I analyze reports on the tourists' mother tongue, local languages, and English as lingua franca in order to examine the…
S. A. Shershakov
Process mining is a new direction in the field of modeling and analysis of processes, in which the use of information from event logs describing the history of system behavior plays an important role. Methods and approaches used in process mining are often based on various heuristics, and experiments with large event logs are crucial for the study and comparison of the developed methods and algorithms. Such experiments are very time consuming, so automation of experiments is an important task in the field of process mining. This paper presents the DPMine language, developed specifically to describe and carry out experiments on the discovery and analysis of process models. The basic concepts of the DPMine language as well as principles and mechanisms of its extension are described. Ways of integrating the DPMine language as dynamically loaded components into the VTMine modeling tool are considered. An illustrative example of an experiment for building a fuzzy model of the process discovered from log data stored in a normalized database is given.
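The kind of event-log analysis a DPMine experiment scripts can be illustrated with the statistic most discovery algorithms start from: directly-follows counts over traces. This is a generic sketch over a made-up log, not DPMine syntax.

```python
from collections import Counter

def directly_follows(log):
    """Count a -> b transitions across traces: the basic statistic
    that many process-discovery heuristics are built on."""
    dfg = Counter()
    for trace in log:
        for a, b in zip(trace, trace[1:]):
            dfg[(a, b)] += 1
    return dfg

# A made-up event log: each trace is one case's ordered activities.
log = [
    ["register", "check", "pay"],
    ["register", "pay"],
    ["register", "check", "check", "pay"],
]
dfg = directly_follows(log)
print(dfg[("register", "check")])  # 2
```

A discovery algorithm would then threshold or weight these counts to produce a process model (for a fuzzy model, edges with low relative frequency are abstracted away), which is the kind of pipeline a DPMine script automates end to end.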
Yu, Ping; Pan, Yingxin; Li, Chen; Zhang, Zengxiu; Shi, Qin; Chu, Wenpei; Liu, Mingzhuo; Zhu, Zhiting
Oral production is an important part in English learning. Lack of a language environment with efficient instruction and feedback is a big issue for non-native speakers' English spoken skill improvement. A computer-assisted language learning system can provide many potential benefits to language learners. It allows adequate instructions and instant…
Rivera Maulucci, Maria S.
One of the central challenges globalization and immigration present to education is how to construct school language policies, procedures, and curricula to support academic success of immigrant youth. This case study compares and contrasts language experience narratives along Elena's developmental trajectory of becoming an urban science teacher. Elena reflects upon her early language experiences and her more recent experiences as a preservice science teacher in elementary dual language classrooms. The findings from Elena's early schooling experiences provide an analysis of the linkages between Elena's developing English proficiency, her Spanish proficiency, and her autobiographical reasoning. Elena's experiences as a preservice teacher in two elementary dual language classrooms indicate ways in which those experiences helped to reframe her views about the intersections between language learning and science learning. I propose the language experience narrative, as a subset of the life story, as a way to understand how preservice teachers reconstruct past language experiences, connect to the present, and anticipate future language practices.
la Cour, Peter; Schultz, Rikke; Smith, Anne Agerskov
The Injustice Experience Questionnaire has shown promising ability to predict problematic rehabilitation in pain conditions, especially concerning work status. A Danish language version of the Injustice Experience Questionnaire was developed and completed by 358 patients with long-lasting pain/somatoform symptoms. These patients also completed questionnaires concerning sociodemographics, anxiety and depression, subjective well-being, and overall physical and mental functioning. Our results showed satisfactory interpretability and face validity, and high internal consistency (Cronbach's alpha = .90). The original one-factor structure was confirmed, but subscales should be interpreted cautiously. The Danish version of the Injustice Experience Questionnaire is found to be valid and reliable.
Shuai, Lan; Malins, Jeffrey G
Despite its prevalence as one of the most highly influential models of spoken word recognition, the TRACE model has yet to be extended to consider tonal languages such as Mandarin Chinese. A key reason for this is that the model in its current state does not encode lexical tone. In this report, we present a modified version of the jTRACE model in which we built on its existing architecture to code for Mandarin phonemes and tones. Units are coded in a way that is meant to capture the similarity in timing of access to vowel and tone information that has been observed in previous studies of Mandarin spoken word recognition. We validated the model by first simulating a recent experiment that had used the visual world paradigm to investigate how native Mandarin speakers process monosyllabic Mandarin words (Malins & Joanisse, 2010). We then subsequently simulated two psycholinguistic phenomena: (1) differences in the timing of resolution of tonal contrast pairs, and (2) the interaction between syllable frequency and tonal probability. In all cases, the model gave rise to results comparable to those of published data with human subjects, suggesting that it is a viable working model of spoken word recognition in Mandarin. It is our hope that this tool will be of use to practitioners studying the psycholinguistics of Mandarin Chinese and will help inspire similar models for other tonal languages, such as Cantonese and Thai.
Development of Social Cognition in Czech Deaf Children — Czech Sign Language Users and Czech Spoken Language Users: Adaptation of a Test Battery
The present paper describes the process of adapting a set of tasks for testing theory-of-mind competencies, the Theory of Mind Task Battery, for use with the population of Czech Deaf children — both users of Czech Sign Language and those using spoken Czech.
Flores, Annette; Smith, K. Christopher
This article reports on the experiences of Spanish-speaking English language learners in high school chemistry courses, focusing largely on experiences in learning the English language, experiences learning chemistry, and experiences learning chemistry in the English language. The findings illustrate the cognitive processes the students undertake…
van Dijk, Rick; Boers, Eveline; Christoffels, Ingrid; Hermans, Daan
The quality of interpretations produced by sign language interpreters was investigated. Twenty-five experienced interpreters were instructed to interpret narratives from (a) spoken Dutch to Sign Language of the Netherlands (SLN), (b) spoken Dutch to Sign Supported Dutch (SSD), and (c) SLN to spoken Dutch. The quality of the interpreted narratives…
Brimo, Danielle; Lund, Emily; Sapp, Alysha
Syntax is a language skill purported to support children's reading comprehension. However, researchers who have examined whether children with average and below-average reading comprehension score significantly different on spoken-syntax assessments report inconsistent results. To determine if differences in how syntax is measured affect whether children with average and below-average reading comprehension score significantly different on spoken-syntax assessments. Studies that included a group comparison design, children with average and below-average reading comprehension, and a spoken-syntax assessment were selected for review. Fourteen articles from a total of 1281 reviewed met the inclusionary criteria. The 14 articles were coded for the age of the children, score on the reading comprehension assessment, type of spoken-syntax assessment, type of syntax construct measured and score on the spoken-syntax assessment. A random-effects model was used to analyze the difference between the effect sizes of the types of spoken-syntax assessments and the difference between the effect sizes of the syntax construct measured. There was a significant difference between children with average and below-average reading comprehension on spoken-syntax assessments. Those with average and below-average reading comprehension scored significantly different on spoken-syntax assessments when norm-referenced and researcher-created assessments were compared. However, when the type of construct was compared, children with average and below-average reading comprehension scored significantly different on assessments that measured knowledge of spoken syntax, but not on assessments that measured awareness of spoken syntax. The results of this meta-analysis confirmed that the type of spoken-syntax assessment, whether norm-referenced or researcher-created, did not explain why some researchers reported that there were no significant differences between children with average and below
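A random-effects comparison of effect sizes like the one reported can be sketched with the DerSimonian-Laird estimator. The review does not name the estimator it used, so this is a generic illustration with toy numbers; the function name is invented.

```python
def random_effects_pool(effects, variances):
    """DerSimonian-Laird random-effects pooling of study effect sizes.
    Returns the pooled effect and its variance."""
    w = [1.0 / v for v in variances]
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    # Cochran's Q and the between-study variance estimate tau^2.
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max((q - (len(effects) - 1)) / c, 0.0)
    # Re-weight each study by its total (within + between) variance.
    w_star = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * e for wi, e in zip(w_star, effects)) / sum(w_star)
    return pooled, 1.0 / sum(w_star)

# Toy standardized group differences from three hypothetical studies.
pooled, var = random_effects_pool([0.2, 0.8, 0.5], [0.05, 0.05, 0.05])
print(round(pooled, 3))
```

The between-study component tau^2 is what distinguishes this from a fixed-effect analysis: it widens the pooled variance when studies disagree, which matters in a meta-analysis comparing heterogeneous spoken-syntax assessments.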
The article defines the role of European experience in foreign language teachers' training in modern society and the use of international relations in education. The concept of common European education is analyzed: under this concept, teaching and learning standards, educational models, and teaching objectives are brought together with the aim of creating a common all-European educational system. In order to join this all-European scheme, Ukraine needs to modify its educational system. The fundamental idea is to use blended learning as the dominant instructional mode in higher education. The authors examine how the study of the leading European powers' educational experience helps to approach the problems of education in Ukraine critically. The English Language Department of Mykolaiv V. Sukhomlynsky National University, as part of a consortium of ten higher education institutions, takes part in the TEMPUS project "Improving teaching European languages through the introduction of on-line technology (blended learning) to train teachers." Blended learning is a powerful technology to be implemented in the modern model of Ukrainian education in order to reach the level of the European educational system. The article highlights how participation in the implementation of the TEMPUS project can be an effective tool for improving the training of foreign language teachers.
Dressler, Roswita; Dressler, Anja
Teens who post on the popular social networking site Facebook in their home environment often continue to do so on second language study abroad sojourns. These sojourners use Facebook to document and make sense of their experiences in the host culture and position themselves with respect to language(s) and culture(s). This study examined one…
In Monitoring Adaptive Spoken Dialog Systems, authors Alexander Schmitt and Wolfgang Minker investigate statistical approaches that allow for recognition of negative dialog patterns in Spoken Dialog Systems (SDS). The presented stochastic methods allow flexible, portable and accurate use. Beginning with the foundations of machine learning and pattern recognition, this monograph examines how frequently users show negative emotions in spoken dialog systems and develops novel approaches to speech-based emotion recognition, using a hybrid approach to model emotions. The authors make use of statistical methods based on acoustic, linguistic and contextual features to examine the relationship between the interaction flow and the occurrence of emotions, using non-acted recordings of several thousand real users from commercial and non-commercial SDS. Additionally, the authors present novel statistical methods that spot problems within a dialog based on interaction patterns. The approaches enable future SDS to offer m...
Carmichael, Lesley; Wright, Richard; Wassink, Alicia Beckford
We are developing a novel, searchable corpus as a research tool for investigating phonetic and phonological phenomena across various speech styles. Five speech styles have been well studied independently in previous work: reduced (casual), careful (hyperarticulated), citation (reading), Lombard effect (speech in noise), and ``motherese'' (child-directed speech). Few studies to date have collected a wide range of styles from a single set of speakers, and fewer yet have provided publicly available corpora. The pilot corpus includes recordings of (1) a set of speakers participating in a variety of tasks designed to elicit the five speech styles, and (2) casual peer conversations and wordlists to illustrate regional vowels. The data include high-quality recordings and time-aligned transcriptions linked to text files that can be queried. Initial measures drawn from the database provide comparison across speech styles along the following acoustic dimensions: MLU (changes in unit duration); relative intra-speaker intensity changes (mean and dynamic range); and intra-speaker pitch values (minimum, maximum, mean, range). The corpus design will allow for a variety of analyses requiring control of demographic and style factors, including hyperarticulation variety, disfluencies, intonation, discourse analysis, and detailed spectral measures.
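The intensity and pitch comparisons listed above reduce to simple intra-speaker descriptive statistics per speech style. The following sketch assumes pitch (F0) and intensity tracks have already been extracted from the recordings; the function name and dictionary keys are illustrative, not part of the corpus tools described here.

```python
def style_summary(f0_track, intensity_track):
    """Intra-speaker summary statistics for one speech style.

    f0_track: pitch samples in Hz for one speaker/style (unvoiced frames removed)
    intensity_track: intensity samples in dB for the same material
    """
    return {
        "f0_min": min(f0_track),
        "f0_max": max(f0_track),
        "f0_mean": sum(f0_track) / len(f0_track),
        "f0_range": max(f0_track) - min(f0_track),
        "intensity_mean": sum(intensity_track) / len(intensity_track),
        "intensity_range": max(intensity_track) - min(intensity_track),
    }
```

Comparing these summaries for the same speaker across, say, casual and Lombard recordings exposes the style-driven shifts in pitch range and relative intensity that the corpus is designed to support.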
Tabossi, Patrizia; Fanari, Rachele; Wolf, Kinou
This study investigates recognition of spoken idioms occurring in neutral contexts. Experiment 1 showed that both predictable and non-predictable idiom meanings are available at string offset. Yet, only predictable idiom meanings are active halfway through a string and remain active after the string's literal conclusion. Experiment 2 showed that…
Language learners' attitudes towards the language and its speakers greatly influence the language learning process and the learning outcomes. Previous research on attitudes and motivation in language learning (Csizér 2007, Dörnyei 2009) shows that attitudes and motivation are strongly intertwined. A positive attitude towards the language and its speakers can lead to increased motivation, which then results in better learning achievement and a positive attitude towards learning the language. The aim of the present study was to gain a better insight into the language attitudes of students attending Hungarian minority schools in Romania. The interest of the study lies in students' attitudes towards the different languages, the factors and criteria along which they express their language attitudes, and the learning experiences and strategies that they consider efficient and useful in order to acquire a language. Results suggest that students' attitudes are determined by their own experiences of language use, and in this sense we can differentiate between a language for identification, built upon specific emotional, affective and cognitive factors, and a language for communication.
Potter, Christine E; Wang, Tianlin; Saffran, Jenny R
Recent research has begun to explore individual differences in statistical learning, and how those differences may be related to other cognitive abilities, particularly their effects on language learning. In this research, we explored a different type of relationship between language learning and statistical learning: the possibility that learning a new language may also influence statistical learning by changing the regularities to which learners are sensitive. We tested two groups of participants, Mandarin Learners and Naïve Controls, at two time points, 6 months apart. At each time point, participants performed two different statistical learning tasks: an artificial tonal language statistical learning task and a visual statistical learning task. Only the Mandarin-learning group showed significant improvement on the linguistic task, whereas both groups improved equally on the visual task. These results support the view that there are multiple influences on statistical learning. Domain-relevant experiences may affect the regularities that learners can discover when presented with novel stimuli. Copyright © 2016 Cognitive Science Society, Inc.
I Nengah Sudipa
This article investigates the spoken ability of German students using Bahasa Indonesia (BI). They studied it for six weeks in the IBSN Program at Udayana University, Bali, Indonesia. The data were collected while the students sat for the mid-term oral test and were further analyzed with reference to standard usage of BI. The results suggest that most students managed to express several concepts related to (1) LOCATION; (2) TIME; (3) TRANSPORT; (4) PURPOSE; (5) TRANSACTION; (6) IMPRESSION; (7) REASON; (8) FOOD AND BEVERAGE; and (9) NUMBER AND PERSON. The only problem a few students might encounter is interference, i.e., influence from their own language system, especially in word order.
Chen, Wei; Mostow, Jack; Aist, Gregory
Free-form spoken input would be the easiest and most natural way for young children to communicate to an intelligent tutoring system. However, achieving such a capability poses a challenge both to instruction design and to automatic speech recognition. To address the difficulties of accepting such input, we adopt the framework of predictable…
Vol. 68, No. 2 (2017), pp. 305-315. ISSN 0021-5597. R&D Projects: GA ČR GA15-01116S. Institutional support: RVO:68378092. Keywords: correlative conjunctions; spoken Czech; cohesion. Subject RIV: AI - Linguistics. OECD field: Linguistics. http://www.juls.savba.sk/ediela/jc/2017/2/jc17-02.pdf
Tati Sri Uswati
If a person has good speaking skills, he or she will gain both social and professional benefits. In the implementation of Indonesian language learning in schools, teachers do not invite students to be more active in listening, speaking, reading and writing. This condition results in low student speaking ability. This study aims to improve speaking skills in the competence of retelling the contents of a short story by applying the Language Experience Approach (LEA). This classroom action research (PTK) was conducted in MAN 2 Kota Cirebon. Data collection techniques included questionnaires, observations, interviews, and a storytelling skills test. The results showed that implementing the LEA strategy can improve storytelling skills. The improvement is reflected in the quality of learning: liveliness, attention and concentration, interest during learning, and the courage of students telling stories in front of the class.
Moreno-Sanz, Carlos; Seoane-González, Jose B
Although there are no clearly defined electronic tools for continuing medical education (CME), new information technologies offer a basic platform for presenting training content on the internet. Due to the shortage of websites about minimally invasive surgery in the Spanish language, we set up a topical website in Spanish. This study considers the experience with the website between April 2001 and January 2005. To study the activity of the website, the registry information was analyzed descriptively using the log files of the server. To study the characteristics of the users, we searched the database of registered users. We found a total of 107,941 visits to our website and a total of 624,895 page downloads. Most visits to the site were made from Spanish-speaking countries. The most frequent professional profile of the registered users was that of general surgeon. The development, implementation, and evaluation of Spanish-language CME initiatives over the internet is promising but presents challenges.
Many enterprises use their own domain concepts in modeling business processes and use technology in specialized ways when they implement them in a Business Process Management (BPM) system. In contrast, BPM tools used for modeling and implementing business processes often provide a standard modeling … and automation to BPM tools through a tool experiment in Danske Bank, a large financial institution. We develop business process modeling languages, tools and transformations that capture Danske Bank's specific modeling concepts and use of technology, and which automate the generation of code. An empirical … evaluation shows that Danske Bank will possibly gain remarkable improvements in development productivity and the quality of the implemented code. This leads us to the conclusion that BPM tools should provide flexibility to allow customization of languages, tools and transformations to the specific needs…
Buccino, Giovanni; Colagè, Ivan; Gobbi, Nicola; Bonaccorso, Giorgio
This work reviews key behavioural, neurophysiological and neuroimaging data on the neural substrates for processing the meaning of linguistic material, and tries to articulate the picture emerging from those findings with the notion of meaning coming from specific approaches in philosophy of language (the "internalist" view) and linguistics (words point at experiential clusters). The reviewed findings provide evidence in favour of a causal role of brain neural structures responsible for sensory, motor and even emotional experiences in attributing meaning to words expressing those experiences and, consequently, lend substantial support to an embodied and "internalist" conception of linguistic meaning. Key evidence concerns verbs, nouns and adjectives with concrete content, but the challenge that abstract domains pose to the embodied approach to language is also discussed. This work finally suggests that the most fundamental role of embodiment might be that of establishing commonalities among individual experiences of different members of a linguistic community, and that those experiences ground shared linguistic meanings. Copyright © 2016 Elsevier Ltd. All rights reserved.
Liyanapathirana, Jeevanthi; Popescu-Belis, Andrei
…To obtain a data set with spoken post-editing information, we use the French version of TED talks as the source texts submitted to MT, and the spoken English counterparts as their corrections, which are submitted to an ASR system. We experiment with various levels of artificial ASR noise and also…
Lu, Hongyan; Maithus, Caroline
Clinical tutors, referred to in the international literature as clinical supervisors, facilitators, mentors or instructors, are responsible for providing and supervising workplace learning opportunities for groups of Bachelor of Nursing (BN) students. They also play a key role in assessing students. The role modeling and support provided by both clinical tutors and registered nurses (RN) or nurse preceptors helps students become familiar with the language in which nursing work is realised. As BN student cohorts in New Zealand have become more diverse in terms of cultures, ethnicities and language backgrounds, clinical tutors have to directly facilitate the development of context-specific and client-focused communication skills for students who speak English as an additional language. We undertook a study which looked at the perceptions of new nursing graduates with English as an additional language (EAL) on the development of spoken language skills for the clinical workplace. As well as interviewing graduates, we spoke to four clinical tutors in order to elicit their views on the language development of EAL students in previous cohorts. This article reports on the themes which emerged from the interviews with the tutors. These include goal setting for communication, integrating students into nursing work, making assessment less stressful, and endorsing independent learning strategies. Based on their observations and on other published research we make some suggestions about ways both clinical tutors and EAL students within their teaching groups could be supported in the development of communication skills for clinical practice.
Kowal, Sabine; O'Connell, Daniel C
The following article presents basic concepts and methods of Ragnar Rommetveit's (born 1924) hermeneutic-dialogical approach to everyday spoken dialogue with a focus on both shared consciousness and linguistically mediated meaning. He developed this approach originally in his engagement of mainstream linguistic and psycholinguistic research of the 1960s and 1970s. He criticized this research tradition for its individualistic orientation and its adherence to experimental methodology which did not allow the engagement of interactively established meaning and understanding in everyday spoken dialogue. As a social psychologist influenced by phenomenological philosophy, Rommetveit opted for an alternative conceptualization of such dialogue as a contextualized, partially private world, temporarily co-established by interlocutors on the basis of shared consciousness. He argued that everyday spoken dialogue should be investigated from within, i.e., from the perspectives of the interlocutors and from a psychology of the second person. Hence, he developed his approach with an emphasis on intersubjectivity, perspectivity and perspectival relativity, meaning potential of utterances, and epistemic responsibility of interlocutors. In his methods, he limited himself for the most part to casuistic analyses, i.e., logical analyses of fictitious examples to argue for the plausibility of his approach. After many years of experimental research on language, he pursued his phenomenologically oriented research on dialogue in English-language publications from the late 1980s up to 2003. During that period, he engaged psycholinguistic research on spoken dialogue carried out by Anglo-American colleagues only occasionally. Although his work remained unfinished and open to development, it provides both a challenging alternative and supplement to current Anglo-American research on spoken dialogue and some overlap therewith.
Mast, Marion; Maier, Elisabeth; Schmitz, Birte
This report describes how spoken language turns are segmented into utterances in the framework of the verbmobil project. The problem of segmenting turns is directly related to the task of annotating a discourse with dialogue act information: an utterance can be characterized as a stretch of dialogue that is attributed one dialogue act. Unfortunately, this rule in many cases is insufficient and many doubtful cases remain. We tried to at least reduce the number of unclear cases by providing a n...
Gong, Tao; Lam, Yau W.; Shuai, Lan
Psychological experiments have revealed that in normal visual perception of humans, color cues are more salient than shape cues, which are more salient than textural patterns. We carried out an artificial language learning experiment to study whether such perceptual saliency hierarchy (color > shape > texture) influences the learning of orders regulating adjectives of involved visual features in a manner either congruent (expressing a salient feature in a salient part of the form) or incongruent (expressing a salient feature in a less salient part of the form) with that hierarchy. Results showed that within a few rounds of learning participants could learn the compositional segments encoding the visual features and the order between them, generalize the learned knowledge to unseen instances with the same or different orders, and show learning biases for orders that are congruent with the perceptual saliency hierarchy. Although the learning performances for both the biased and unbiased orders became similar given more learning trials, our study confirms that this type of individual perceptual constraint could contribute to the structural configuration of language, and points out that such constraint, as well as other factors, could collectively affect the structural diversity in languages. PMID:28066281
Rinaldi, Pasquale; Caselli, Cristina
We evaluated language development in deaf Italian preschoolers with hearing parents, taking into account the duration of formal language experience (i.e., the time elapsed since wearing a hearing aid and beginning language education) and different methods of language education. Twenty deaf children were matched with 20 hearing children for age and…
Algee, Lisa M.
English Language Learners (ELL) are often at a distinct disadvantage in receiving authentic science learning opportunities. This study explored ELLs' learning experiences with scientific language and inquiry within a real-life context. The research was theoretically informed by sociocultural theory and literature on student learning and science teaching for ELLs. A qualitative case study was used to explore students' learning experiences. Data were collected from multiple sources: student interviews, science letters, an assessment in another context, field notes, student presentations, inquiry assessment, instructional group conversations, parent interviews, parent letters, parent homework, teacher-researcher evaluation, a teacher-researcher reflective journal, and student ratings of learning activities. These data sources informed the following research questions: (1) Does participation in an out-of-school contextualized inquiry science project increase ELL use of scientific language? (2) Does participation in an out-of-school contextualized inquiry science project increase ELL understanding of scientific inquiry and their motivation to learn? (3) What are parents' funds of knowledge about the local ecology, and does this inform students' experiences in the science project? All data sources concerning students were analyzed for similar patterns and trends, and triangulation was sought through the use of these data sources. The remaining data sources concerning the teacher-researcher were used to assess whether the pedagogical and research practices were in alignment with the proposed theoretical framework. Data sources concerning parental participation accessed funds of knowledge, which informed the curriculum in order to create continuity and connections between home and school. To ensure accuracy in the researcher's interpretations of student and parent responses during interviews, member checking was employed. The findings
Yoder, Paul J; Woynaroski, Tiffany; Fey, Marc E; Warren, Steven F; Gardner, Elizabeth
In an earlier randomized clinical trial, daily communication and language therapy resulted in more favorable spoken vocabulary outcomes than weekly therapy sessions in a subgroup of initially nonverbal preschoolers with intellectual disabilities that included only children with Down syndrome (DS). In this reanalysis of the dataset involving only the participants with DS, we found that more therapy led to larger spoken vocabularies at posttreatment because it increased children's canonical syllabic communication and receptive vocabulary growth early in the treatment phase.
The present study investigates the language use and literacy practices of 36 children (aged three-and-a-half, seven and 11) from a Gujerati and Urdu-speaking Muslim community in north-east London. These experiences are explored in the children’s three-generation families, in the community and in school through interviews, recordings and observations. They are related to the children’s educational achievement and whether or not they make use of a local community cultural and religious centre. ...
Adesope, Olusola O.; Nesbit, John C.
An animated concept map represents verbal information in a node-link diagram that changes over time. The goals of the experiment were to evaluate the instructional effects of presenting an animated concept map concurrently with semantically equivalent spoken narration. The study used a 2 x 2 factorial design in which an animation factor (animated…
Weber, A.C.; Cutler, A.
Six eye-tracking experiments examined lexical competition in non-native spoken-word recognition. Dutch listeners hearing English fixated longer on distractor pictures with names containing vowels that Dutch listeners are likely to confuse with vowels in a target picture name (pencil, given target
It is difficult to find the exact number of other languages spoken besides Dutch in the Netherlands. A study showed that a total of 96 other languages are spoken by students attending Dutch primary and secondary schools. The variety of languages spoken shows the growth of linguistic diversity in the
Montgomery, Joel R.
This paper will compare the uses of selected formal and informal assessments of English language learners (ELLs) in the Language Experience class [TRANSLANGEXP7(&8)-008] at Kimball Middle School, Illinois School District U-46, Elgin, Illinois, during school year 2007-2008. See Figure 1 (page 14) for a graphic display of these assessments…
Caudery, Tim; Petersen, Margrethe; Shaw, Philip
One point investigated in our research project on the linguistic experiences of exchange students in Denmark and Sweden is the reasons students have for coming on exchange. Traditionally, an important goal of student exchange was to acquire improved language skills, usually in the language spoken … in the host country. To what extent is this true when students plan to study in English in a non-English-speaking country? Do they hope and expect to improve their English skills, their knowledge of the local language, both, or neither? To what extent are these expectations fulfilled? Results from the project…
Oviatt, S; Bernard, J; Levow, G A
Fragile error handling in recognition-based systems is a major problem that degrades their performance, frustrates users, and limits commercial potential. The aim of the present research was to analyze the types and magnitude of linguistic adaptation that occur during spoken and multimodal human-computer error resolution. A semiautomatic simulation method with a novel error-generation capability was used to collect samples of users' spoken and pen-based input immediately before and after recognition errors, and at different spiral depths in terms of the number of repetitions needed to resolve an error. When correcting persistent recognition errors, results revealed that users adapt their speech and language in three qualitatively different ways. First, they increase linguistic contrast through alternation of input modes and lexical content over repeated correction attempts. Second, when correcting with verbatim speech, they increase hyperarticulation by lengthening speech segments and pauses, and increasing the use of final falling contours. Third, when they hyperarticulate, users simultaneously suppress linguistic variability in their speech signal's amplitude and fundamental frequency. These findings are discussed from the perspective of enhancement of linguistic intelligibility. Implications are also discussed for corroboration and generalization of the Computer-elicited Hyperarticulate Adaptation Model (CHAM), and for improved error handling capabilities in next-generation spoken language and multimodal systems.
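The durational side of the hyperarticulation described above (lengthened speech segments and pauses over repeated correction attempts) can be quantified with simple relative-change measures over pre- and post-error utterances. A minimal sketch with hypothetical duration data; the function name and the numbers are illustrative, not the study's actual analysis code or results.

```python
def relative_change(before, after):
    """Relative change of a mean measurement after an error vs. before.

    A positive value (e.g. 0.25) means the measure, such as mean segment
    or pause duration, increased (here by 25%) during error correction,
    one observable signature of hyperarticulate adaptation.
    """
    mean_before = sum(before) / len(before)
    mean_after = sum(after) / len(after)
    return (mean_after - mean_before) / mean_before

# Hypothetical per-utterance mean segment durations in seconds.
pre_error = [0.080, 0.075, 0.085]
post_error = [0.100, 0.095, 0.105]
lengthening = relative_change(pre_error, post_error)
```

The same measure applied to amplitude or F0 variability would run in the opposite direction under the suppression effect the abstract reports.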
Introducing Spoken Dialogue Systems into Intelligent Environments outlines the formalisms of a novel knowledge-driven framework for spoken dialogue management and presents the implementation of a model-based Adaptive Spoken Dialogue Manager (ASDM) called OwlSpeak. The authors have identified three stakeholders that potentially influence the behavior of the ASDM: the user, the SDS, and a complex Intelligent Environment (IE) consisting of various devices, services, and task descriptions. The theoretical foundation of a working ontology-based spoken dialogue description framework, the prototype implementation of the ASDM, and the evaluation activities that are presented as part of this book contribute to ongoing spoken dialogue research by establishing the fertile ground of model-based adaptive spoken dialogue management. This monograph is ideal for advanced undergraduate students, PhD students, and postdocs as well as academic and industrial researchers and developers in speech and multimodal interactive ...
Brinkman, Nancy A.
During the preschool years, children experience great strides in their ability to use language. This booklet and companion videotape help teachers and parents recognize and support six High/Scope key experiences in language and literacy: (1) talking with others about personally meaningful experiences; (2) describing objects, events, and relations;…
Hall, Matthew L; Ferreira, Victor S; Mayberry, Rachel I
Psycholinguistic studies of sign language processing provide valuable opportunities to assess whether language phenomena, which are primarily studied in spoken language, are fundamentally shaped by peripheral biology. For example, we know that when given a choice between two syntactically permissible ways to express the same proposition, speakers tend to choose structures that were recently used, a phenomenon known as syntactic priming. Here, we report two experiments testing syntactic priming of a noun phrase construction in American Sign Language (ASL). Experiment 1 shows that second language (L2) signers with normal hearing exhibit syntactic priming in ASL and that priming is stronger when the head noun is repeated between prime and target (the lexical boost effect). Experiment 2 shows that syntactic priming is equally strong among deaf native L1 signers, deaf late L1 learners, and hearing L2 signers. Experiment 2 also tested for, but did not find evidence of, phonological or semantic boosts to syntactic priming in ASL. These results show that despite the profound differences between spoken and signed languages in terms of how they are produced and perceived, the psychological representation of sentence structure (as assessed by syntactic priming) operates similarly in sign and speech.
The article deals with revealing the peculiarities of language teachers' professional training in the context of British experience. The notions of philology, linguistics, philologist, linguist, language studies have been outlined and specified in the article. The titles of the curricula and their meanings in reference to language training have…
Technology has been used widely in the field of education for a long period of time. It is a useful tool which could be a mediation to help language learners to learn the target language. In order to investigate how technology and social experience can be integrated into courses to promote language learners' desire to learn English, the researcher…
Calabrich, Simone L.
This research explored perceptions of learners studying English in private language schools regarding the use of mobile technology to support language learning. Learners were first exposed to both a mobile assisted and a mobile unassisted language learning experience, and then asked to express their thoughts on the incorporation of mobile devices…
Lai, Chun; Hu, Xiao; Lyu, Boning
Out-of-class learning with technology comprises an essential context of second language development. Understanding the nature of out-of-class language learning with technology is the initial step towards safeguarding its quality. This study examined the types of learning experiences that language learners engaged in outside the classroom and the…
Smolicz, Jerzy J.
Reviews European Community and Australian language policies. Considers the cultural-economic interface in Australia with respect to current interest in teaching Asian languages for trade purposes. Discusses Australia's growing acceptance of languages other than English and its effect on Aboriginal people. Urges the better utilization of the country's…
Seidenberg, Mark S.; MacDonald, Maryellen C.
This article reviews the important role of statistical learning for language and reading development. Although statistical learning--the unconscious encoding of patterns in language input--has become widely known as a force in infants' early interpretation of speech, the role of this kind of learning for language and reading comprehension in…
Lee, Tiffany S.
Native American languages, contemporary youth identity, and powerful messages from mainstream society and Native communities create complex interactions that require deconstruction for the benefit of Native-language revitalization. This study showed how Native youth negotiate mixed messages such as the necessity of Indigenous languages for…
Full Text Available This article presents a language experience and self-assessment of proficiency questionnaire for hearing teachers who use Brazilian Sign Language and Portuguese in their teaching practice. By focusing on hearing teachers who work in Deaf education contexts, this questionnaire is presented as a tool that may complement the assessment of linguistic skills of hearing teachers. This proposal takes into account important factors in bilingualism studies such as the importance of knowing the participant’s context with respect to family, professional and social background (KAUFMANN, 2010). This work uses as models the following questionnaires: LEAP-Q (MARIAN; BLUMENFELD; KAUSHANSKAYA, 2007), SLSCO – Sign Language Skills Classroom Observation (REEVES et al., 2000) and the Language Attitude Questionnaire (KAUFMANN, 2010), taking into consideration the different kinds of exposure to Brazilian Sign Language. The questionnaire is designed for bilingual bimodal hearing teachers who work in bilingual schools for the Deaf or who work in the specialized educational department who assist deaf students.
The aim of the study was to investigate how raters come to their decisions when judging spoken vocabulary. Segmental rating was introduced to quantify raters' decision-making process. It is hoped that this simulated study brings fresh insight to future methodological considerations with spoken data. Twenty trainee raters assessed five Chinese…
Li, Le; Abutalebi, Jubin; Zou, Lijuan; Yan, Xin; Liu, Lanfang; Feng, Xiaoxia; Wang, Ruiming; Guo, Taomei; Ding, Guosheng
Previous neuroimaging studies have revealed that bilingualism induces both structural and functional neuroplasticity in the dorsal anterior cingulate cortex (dACC) and the left caudate nucleus (LCN), both of which are associated with cognitive control. Since these "control" regions should work together with other language regions during language processing, we hypothesized that bilingualism may also alter the functional interaction between the dACC/LCN and language regions. Here we tested this hypothesis by exploring the functional connectivity (FC) in bimodal bilinguals and monolinguals using functional MRI when they either performed a picture naming task with spoken language or were in resting state. We found that for bimodal bilinguals who use spoken and sign languages, the FC of the dACC with regions involved in spoken language (e.g. the left superior temporal gyrus) was stronger in performing the task, but weaker in the resting state as compared to monolinguals. For the LCN, its intrinsic FC with sign language regions including the left inferior temporo-occipital part and right inferior and superior parietal lobules was increased in the bilinguals. These results demonstrate that bilingual experience may alter the brain functional interaction between "control" regions and "language" regions. For different control regions, the FC alters in different ways. The findings also deepen our understanding of the functional roles of the dACC and LCN in language processing. Copyright © 2015 Elsevier Ltd. All rights reserved.
Mayberry, Rachel I; Davenport, Tristan; Roth, Austin; Halgren, Eric
The extent to which development of the brain language system is modulated by the temporal onset of linguistic experience relative to post-natal brain maturation is unknown. This crucial question cannot be investigated with the hearing population because spoken language is ubiquitous in the environment of newborns. Deafness blocks infants' language experience in a spoken form, and in a signed form when it is absent from the environment. Using anatomically constrained magnetoencephalography, aMEG, we neuroimaged lexico-semantic processing in a deaf adult whose linguistic experience began in young adulthood. Despite using language for 30 years after initially learning it, this individual exhibited limited neural response in the perisylvian language areas to signed words during the 300-400 ms temporal window, suggesting that the brain language system requires linguistic experience during brain growth to achieve functionality. The present case study primarily exhibited neural activations in response to signed words in dorsolateral superior parietal and occipital areas bilaterally, replicating the neural patterns exhibited by two previous case studies of individuals who matured without language until early adolescence (Ferjan Ramirez N, Leonard MK, Torres C, Hatrak M, Halgren E, Mayberry RI. 2014). The dorsal pathway appears to assume the task of processing words when the brain matures without experiencing the form-meaning network of a language. Copyright © 2018 Elsevier Ltd. All rights reserved.
Full Text Available This study addressed the development of and the relationship between foundational metalinguistic skills and word reading skills in Arabic. It compared Arabic-speaking children’s phonological awareness (PA), morphological awareness, and voweled and unvoweled word reading skills in spoken and standard language varieties separately in children across five grade levels from childhood to adolescence. Second, it investigated whether skills developed in the spoken variety of Arabic predict reading in the standard variety. Results indicate that although individual differences between students in PA are eliminated toward the end of elementary school in both spoken and standard language varieties, gaps in morphological awareness and in reading skills persisted through junior and high school years. The results also show that the gap in reading accuracy and fluency between Spoken Arabic (SpA) and Standard Arabic (StA) was evident in both voweled and unvoweled words. Finally, regression analyses showed that morphological awareness in SpA contributed to reading fluency in StA, i.e., children’s early morphological awareness in SpA explained variance in children’s gains in reading fluency in StA. These findings have important theoretical and practical implications for Arabic reading theory in general, and they extend previous work regarding the cross-linguistic relevance of foundational metalinguistic skills in the first acquired language to reading in a second language, as in societal bilingualism contexts, or a second language variety, as in diglossic contexts.
Cocks, N.; Cruice, M.
There is a growing body of research which has investigated the experience of the migrant health worker. However, only one of these studies has included speech and language therapists thus far, and then only with extremely small numbers. The aim of this study was to explore the experiences and perspectives of migrant speech and language therapists living in the UK. Twenty-three overseas qualified speech and language therapists living in the UK completed an online survey consisting of 36 questi...
Watanabe, Shigeru; Yamamoto, Erico; Uozumi, Midori
Java sparrows (Padda oryzivora) were trained to discriminate English from Chinese spoken by a bilingual speaker. They learned the discrimination and generalized to new sentences spoken by the same speaker and to sentences spoken by a new speaker. Thus, the birds distinguished between English and Chinese. Although the auditory cues underlying the discrimination were not identified, this is the first evidence that a non-mammalian species can discriminate human languages.
Conboy, Barbara T; Kuhl, Patricia K
Language experience 'narrows' speech perception by the end of infants' first year, reducing discrimination of non-native phoneme contrasts while improving native-contrast discrimination. Previous research showed that declines in non-native discrimination were reversed by second-language experience provided at 9-10 months, but it is not known whether second-language experience affects first-language speech sound processing. Using event-related potentials (ERPs), we examined learning-related changes in brain activity to Spanish and English phoneme contrasts in monolingual English-learning infants pre- and post-exposure to Spanish from 9.5-10.5 months of age. Infants showed a significant discriminatory ERP response to the Spanish contrast at 11 months (post-exposure), but not at 9 months (pre-exposure). The English contrast elicited an earlier discriminatory response at 11 months than at 9 months, suggesting improvement in native-language processing. The results show that infants rapidly encode new phonetic information, and that improvement in native speech processing can occur during second-language learning in infancy.
Kelly-Jackson, Charlease; Delacruz, Stacy
This original pedagogical study captured three preservice teachers' experiences using visual literacy strategies as an approach to teaching English language learners (ELLs) science academic language. The following research questions guided this study: (1) What are the experiences of preservice teachers' use of visual literacy to teach science…
Nomura, Saeko; Ishida, Saeko; Jensen, Mika Yasuoka
"Open Source Software Development with Your Mother Language: Intercultural Collaboration Experiment 2002," 10th International Conference on Human–Computer Interaction (HCII2003), June 2003, Crete, Greece.
Johnson, E.K.; Westrek, E.S.M.; Nazzi, T.; Cutler, A.
A visual fixation study tested whether 7-month-olds can discriminate between different talkers. The infants were first habituated to talkers producing sentences in either a familiar or unfamiliar language, then heard test sentences from previously unheard speakers, either in the language used for
Potter, Christine E.; Wang, Tianlin; Saffran, Jenny R.
Recent research has begun to explore individual differences in statistical learning, and how those differences may be related to other cognitive abilities, particularly their effects on language learning. In this research, we explored a different type of relationship between language learning and statistical learning: the possibility that learning…
The problem of applying communicative approach to foreign language teaching of students in non-language departments of higher education institutions in a number of countries has been analyzed in the paper. The brief overview of main historic milestones in the development of communicative approach has been presented. It has been found out that…
Ross, Andrew S.; Stracke, Elke
Within applied linguistics, understanding of motivation and cognition has benefitted from substantial attention for decades, but the attention received by language learner emotions has not been comparable until recently when interest in emotions and the role they can play in language learning has increased. Emotions are at the core of human…
Maguire, Mary H.; Curdt-Christiansen, Xiao Lan
This article focuses on the identity accounts of a group of Chinese children who attend a heritage language school. Bakhtin's concepts of ideological becoming, and authoritative and internally persuasive discourse, frame our exploration. Taking a dialogic view of language and learning raises questions about schools as socializing spaces and…
Of the approximately 7,000 languages spoken today, some 2,500 are generally considered endangered. Here we argue that this consensus figure vastly underestimates the danger of digital language death, in that less than 5% of all languages can still ascend to the digital realm. We present evidence of a massive die-off caused by the digital divide. PMID:24167559
Arndt, Karen Barako; Schuele, C. Melanie
Complex syntax production emerges shortly after the emergence of two-word combinations in oral language and continues to develop through the school-age years. This article defines a framework for the analysis of complex syntax in the spontaneous language of preschool- and early school-age children. The purpose of this article is to provide…
Casey, Laura Baylot; Bicard, David F.
Language development in typically developing children has a very predictable pattern beginning with crying, cooing, babbling, and gestures along with the recognition of spoken words, comprehension of spoken words, and then one word utterances. This predictable pattern breaks down for children with language disorders. This article will discuss…
Nava, Andrea; Pedrazzini, Luciana
We describe an exploratory study carried out within the University of Milan, Department of English the aim of which was to analyse features of the spoken English of first-year Modern Languages undergraduates. We compiled a learner corpus, the "Role Play" corpus, which consisted of 69 role-play interactions in English carried out by…
Čermáková, Anna; Komrsková, Zuzana; Kopřivová, Marie; Poukarová, Petra
-, 25.04.2017 (2017), pp. 393-414. ISSN 2509-9507. R&D Projects: GA ČR GA15-01116S. Institutional support: RVO:68378092. Keywords: causality; discourse marker; spoken language; Czech. Subject RIV: AI - Linguistics. OBOR OECD: Linguistics. https://link.springer.com/content/pdf/10.1007%2Fs41701-017-0014-y.pdf
Yoder, Paul J.; Woynaroski, Tiffany; Fey, Marc E.; Warren, Steven F.; Gardner, Elizabeth
In an earlier randomized clinical trial, daily communication and language therapy resulted in more favorable spoken vocabulary outcomes than weekly therapy sessions in a subgroup of initially nonverbal preschoolers with intellectual disabilities that included only children with Down syndrome (DS). In this reanalysis of the dataset involving only…
With question-and-answer sections throughout, this book helps you to improve your written and spoken English through understanding the structure of the English language. This is a thorough and useful book, with all parts of speech and grammar explained. Used by ELT self-study students.
Wang, Zhen; Zechner, Klaus; Sun, Yu
As automated scoring systems for spoken responses are increasingly used in language assessments, testing organizations need to analyze their performance, as compared to human raters, across several dimensions, for example, on individual items or based on subgroups of test takers. In addition, there is a need in testing organizations to establish…
Full Text Available There is agreement among language educators that the process of language teaching and learning should aim to develop autonomous language learners. While the advantages of autonomy seem to be quite obvious, fostering autonomy in practice can prove to be difficult for some language learners. This paper describes the use of learning contracts as a strategy for enhancing learner autonomy among a group of ESL learners in a Malaysian university. Through learners’ accounts of their experiences with the contracts, the study concludes that the learning contract has potential use for language learning and that learners’ positive learning experience remains the key to the success of any endeavour seeking to promote learner autonomy. The paper ends with some implications for teachers and learners who wish to use the contracts as a strategy for language teaching and learning.
Pfau, R.; Steinbach, M.; Pfau, R.; Steinbach, M.; Herrmann, A.
Sign language grammars, just like spoken language grammars, generally provide various means to generate different kinds of complex syntactic structures including subordination of complement clauses, adverbial clauses, or relative clauses. Studies on various sign languages have revealed that sign
Sibieta, Luke; Kotecha, Mehul; Skipp, Amy
The Nuffield Early Language Intervention is designed to improve the spoken language ability of children during the transition from nursery to primary school. It is targeted at children with relatively poor spoken language skills. Three sessions per week are delivered to groups of two to four children starting in the final term of nursery and…
This study dealt with the experiences of immigrants from Latin America, specifically Mexico, who speak indigenous languages. This study was guided by a theoretical framework in terms of issues such as power struggle, cultural hierarchy, and identity ambiguity, which are social realities of indigenous people who have immigrated to the United…
Full Text Available Acculturation and language proficiency have been found to be inter-related both from the perspective of second language acquisition (Schumann, 1978, 1986) and socio-psychological adaptation in cross-cultural contacts (Ward, Bochner, & Furnham, 2001). However, the predictions as to the effect of a particular strategy on success differ, with assimilation believed to create most favourable conditions for SLA and integration for general well-being. The present study explores acculturation patterns in three expert users of English as a second language, recent Polish immigrants to the UK, in relation to their language experience. The qualitative data were collected with the use of a questionnaire and analysed with respect to language experience and socio-affective factors. The analysis aimed at better understanding of the relationship between language learning in a formal context and language use in a natural setting on the one hand and the relationship between language expertise and acculturation strategy choice on the other. The results show that in spite of individual differences, expert language users tend to adopt an assimilation rather than integration acculturation strategy. This may suggest that attitudes are related to expertise in English as a second language in a more conservative way than advocated by cross-cultural approaches.
... on populations and the numbers of people speaking each language. Features include: nearly 600 languages identified as to where they are spoken and the family to which they belong; over 200 languages individually described, with sample passages and English translation; fascinating insights into the history and development of individual languages; a...
Kimmelman, V.; Pfau, R.; Féry, C.; Ishihara, S.
This chapter demonstrates that the Information Structure notions Topic and Focus are relevant for sign languages, just as they are for spoken languages. Data from various sign languages reveal that, across sign languages, Information Structure is encoded by syntactic and prosodic strategies, often
An influential view of the nature of the language system is that of an evolved biological system in which a set of rules is combined with a lexicon that contains the words of the language together with a representation of their context. Alternative views, usually based on connectionist modeling, attempt to explain the structure of language on the basis of complex associative processes. Here, I put forward a third view that stresses experience-dependent structural development of the brain circuits supporting language as a core principle of the organization of the language system. In this view, embodied in a recent neuroconstructivist neural network of past tense development and processing, initial domain-general predispositions enable the development of functionally specialized brain structures through interactions between experience-dependent brain development and statistical learning in a structured environment. Together, these processes shape a biological adult language system that appears to separate into distinct mechanisms for processing rules and exceptions, whereas in reality those subsystems co-develop and interact closely. This view puts experience-dependent brain development in response to a specific language environment at the heart of understanding not only language development but adult language processing as well. Copyright © 2016 Cognitive Science Society, Inc.
Aalberse, S.; Moro, F.; Braunmüller, K.; Höder, S.; Kühl, K.
This article discusses Malay and Chinese heritage languages as spoken in the Netherlands. Heritage speakers are dominant in another language and use their heritage language less. Moreover, they have qualitatively and quantitatively different input from monolinguals. Heritage languages are often
Zheng, Yi; Samuel, Arthur G
Language and music are intertwined: music training can facilitate language abilities, and language experiences can also help with some music tasks. Possible language-music transfer effects are explored in two experiments in this study. In Experiment 1, we tested native Mandarin, Korean, and English speakers on a pitch discrimination task with two types of sounds: speech sounds and fundamental frequency (F0) patterns derived from speech sounds. To control for factors that might influence participants' performance, we included cognitive ability tasks testing memory and intelligence. In addition, two music skill tasks were used to examine general transfer effects from language to music. Prior studies showing that tone language speakers have an advantage on pitch tasks have been taken as support for three alternative hypotheses: specific transfer effects, general transfer effects, and an ethnicity effect. In Experiment 1, musicians outperformed non-musicians on both speech and F0 sounds, suggesting a music-to-language transfer effect. Korean and Mandarin speakers performed similarly, and they both outperformed English speakers, providing some evidence for an ethnicity effect. Alternatively, this could be due to population selection bias. In Experiment 2, we recruited Chinese Americans approximating the native English speakers' language background to further test the ethnicity effect. Chinese Americans, regardless of their tone language experiences, performed similarly to their non-Asian American counterparts in all tasks. Therefore, although this study provides additional evidence of transfer effects across music and language, it casts doubt on the contribution of ethnicity to differences observed in pitch perception and general music abilities.
Pring, Tim; Flood, Emma; Dodd, Barbara; Joffe, Victoria
Background: The majority of speech and language therapists (SLTs) work with children who have speech, language and communication needs. There is limited information about their working practices and clinical experience and their views of how changes to healthcare may impact upon their practice. Aims: To investigate the working practices and…
This paper is intended for researchers considering using ethnography as a methodology to investigate home literacy experiences of children learning English as a Second Language (ESL). After briefly setting ethnographic study in the context of English language learners' home literacy practices, I identify five opportunities and five potential…
This article discusses the importance of graduates' language skills and their European Regional Action Scheme for the Mobility of University Students (ERASMUS) experiences. The purpose of the research is to establish whether the potential benefits of ERASMUS participation for employability, particularly with regard to language skills, mean that…
Kolodny, Oren; Lotem, Arnon; Edelman, Shimon
We introduce a set of biologically and computationally motivated design choices for modeling the learning of language, or of other types of sequential, hierarchically structured experience and behavior, and describe an implemented system that conforms to these choices and is capable of unsupervised learning from raw natural-language corpora. Given…
Doebel, Sabine; Zelazo, Philip David
Engaging executive function often requires overriding a prepotent response in favor of a conflicting but adaptive one. Language may play a key role in this ability by supporting integrated representations of conflicting rules. We tested whether experience with contrastive language that could support such representations benefits executive function in 3-year-old children. Children who received brief experience with language highlighting contrast between objects, attributes, and actions showed greater executive function on two of three 'conflict' executive function tasks than children who received experience with contrasting stimuli only and children who read storybooks with the experimenter, controlling for baseline executive function. Experience with contrasting stimuli did not benefit executive function relative to reading books with the experimenter, indicating experience with contrastive language, rather than experience with contrast generally, was key. Experience with contrastive language also boosted spontaneous attention to contrast, consistent with improvements in representing contrast. These findings indicate a role for language in executive function that is consistent with the Cognitive Complexity and Control theory's key claim that coordinating conflicting rules is critical to overcoming perseveration, and suggest new ideas for testing theories of executive function. Copyright © 2016 Elsevier B.V. All rights reserved.
Educational constructivism has long been associated with advanced pedagogy on the basis that, it champions a learner-centered approach to teaching, advocates learning in meaningful contexts and promotes problem-based activities where learners construct their knowledge through interaction with their peers. Involving language learners in video…
A course in introductory Greek was introduced as part of a freshman seminar program at William Patterson College of New Jersey. The course was distinctive in that the instructor undertook to learn the subject along with the students. The goal of the course was that the students would learn something about Greek, about language in general and about…
Smolicz, Jerzy J.
While it has been agreed by the members of the European Community (except the UK) that all secondary students should study two EC languages in addition to their own, in Australia the recent emphasis has been on teaching languages for external trade, particularly in the Asian region. This policy overlooks the 13 per cent of the Australian population who already speak a language other than English at home (and a greater number who are second generation immigrants), and ignores the view that it is necessary to foster domestic multiculturalism in order to have fruitful links with other cultures abroad. During the 1980s there have been moves to reinforce the cultural identity of Australians of non-English speaking background, but these have sometimes been half-hearted and do not fully recognise that cultural core values, including language, have to achieve a certain critical mass in order to be sustainable. Without this recognition, semi-assimilation will continue to waste the potential cultural and economic contributions of many citizens, and to lead to frustration and eventual violence. The recent National Agenda for a Multicultural Australia addresses this concern.
Sato, Takahiro; Hodge, Samuel R.
The purpose of the current study was to describe and explain the views on teaching English Language Learners (ELLs) held by six elementary physical education (PE) teachers in the Midwest region of the United States. Situated in positioning theory, the research approach was descriptive-qualitative. The primary sources of data were face-to-face…
Full Text Available As an experienced teacher of advanced learners of English I am deeply aware of recurrent problems which these learners experience as regards grammatical accuracy. In this paper, I focus on researching inaccuracies in the use of verbal categories. I draw the data from a spoken learner corpus LINDSEI_CZ and analyze the performance of 50 advanced (C1–C2) learners of English whose mother tongue is Czech. The main method used is Computer-aided Error Analysis within the larger framework of Learner Corpus Research. The results reveal that the key area of difficulty is the use of tenses and tense agreements, and especially the use of the present perfect. Other error-prone aspects are also described. The study also identifies a number of triggers which may lie at the root of the problems. The identification of these triggers reveals deficiencies in the teaching of grammar, mainly too much focus on decontextualized practice, use of potentially confusing rules, and the lack of attempt to deal with broader notions such as continuity and perfectiveness. Whilst the study is useful for the teachers of advanced learners, its pedagogical implications stretch to lower levels of proficiency as well.
Lababidi, Rola Ahmed
This case study explores and investigates the perceptions and experiences of foreign language anxiety (FLA) among students of English as a Foreign Language in a Higher Education Institution in the United Arab Emirates. The first phase explored the scope and severity of language anxiety among all Foundation level male students at a college in the…
Mei, Leilei; Xue, Gui; Lu, Zhong-Lin; He, Qinghua; Wei, Miao; Zhang, Mingxia; Dong, Qi; Chen, Chuansheng
Previous studies have suggested differential engagement of addressed and assembled phonologies in reading Chinese and alphabetic languages (e.g., English) and the modulatory role of native language in learning to read a second language. However, it is not clear whether native language experience shapes the neural mechanisms of addressed and assembled phonologies. To address this question, we trained native Chinese and native English speakers to read the same artificial language (based on Korean Hangul) either through addressed (i.e., whole-word mapping) or assembled (i.e., grapheme-to-phoneme mapping) phonology. We found that, for both native Chinese and native English speakers, addressed phonology relied on the regions in the ventral pathway, whereas assembled phonology depended on the regions in the dorsal pathway. More importantly, we found that the neural mechanisms of addressed and assembled phonologies were shaped by native language experience. Specifically, two key regions for addressed phonology (i.e., the left middle temporal gyrus and right inferior temporal gyrus) showed greater activation for addressed phonology in native Chinese speakers, while one key region for assembled phonology (i.e., the left supramarginal gyrus) showed more activation for assembled phonology in native English speakers. These results provide direct neuroimaging evidence for the effect of native language experience on the neural mechanisms of phonological access in a new language and support the assimilation-accommodation hypothesis. PMID:25858447
This study examined the effects of preceding contextual stimuli, either auditory or visual, on the identification of spoken target words. Fifty-one participants (29% males, 71% females; mean age = 24.5 years, SD = 8.5) were divided into three groups: no context, auditory context, and visual context. All target stimuli were spoken words masked with white noise. The relationships between the context and target stimuli were as follows: identical word, similar word, and unrelated word. Participants presented with context experienced a sequence of six context stimuli in the form of either spoken words or photographs. Auditory and visual context conditions produced similar results, but the auditory context aided word identification more than the visual context in the similar word relationship. We discuss these results in the light of top-down processing, motor theory, and the phonological system of language.
Sanden, Guro Refsum
Purpose: – The purpose of this paper is to analyse the consequences of globalisation in the area of corporate communication, and investigate how language may be managed as a strategic resource. Design/methodology/approach: – A review of previous studies on the effects of globalisation on corporate communication and the implications of language management initiatives in international business. Findings: – Efficient language management can turn language into a strategic resource. Language needs analyses, i.e. linguistic auditing/language check-ups, can be used to determine the language situation...
With the widespread use of mobile phones and portable devices, it is inevitable to think of Mobile Assisted Language Learning as a means of independent learning in Higher Education. Nowadays many learners are keen to explore the wide variety of applications available on their portable and always readily available mobile phones and tablets. The fact that they are keen to take control of their learning and autonomy is thought to lead to greater motivation and engagement, and the link with games-based learning suggests that the fun factor involved should not be overlooked. This paper focuses on the use of mobile applications for independent language learning in higher education. It investigates how learners use mobile apps alongside their classes to enhance their learning experience. We base our analysis on a survey carried out in autumn 2013 in which 286 credited and non-credited language students from various levels of proficiency at The University of Manchester expressed their perceptions of the advantages and disadvantages of the use of mobile applications for independent language learning, together with examples of useful apps and suggestions of how these could be integrated into the language class.
Sherwood, Bruce Arne
Explains that reading English among scientists is almost universal; however, there are enormous problems with spoken English. Advocates the use of Esperanto as a viable alternative and as a language requirement for graduate work. (GA)
Antovich, Dylan M.; Graf Estes, Katharine
Bilingual acquisition presents learning challenges beyond those found in monolingual environments, including the need to segment speech in two languages. Infants may use statistical cues, such as syllable-level transitional probabilities, to segment words from fluent speech. In the present study we assessed monolingual and bilingual 14-month-olds'…
Laursen, Helle Pia
Moving conceptualizations of language and literacy in SLA. In this colloquium, we aim to problematize the concepts of language and literacy in the field that is termed "second language" research and seek ways to critically connect the terms. When considering current day language use, for example, ... and conceptualizations of language and literacy in research on (second) language acquisition. When examining children's first language acquisition, spoken language has been the primary concern in scholarship: a child acquires oral language first and written language follows later, i.e. language precedes literacy. On the other hand, many second or foreign language learners learn mostly through written language, or learn spoken and written language at the same time. Thus the connections between spoken and written (and visual) modalities, i.e. between language and literacy, are complex in research on language acquisition.
Interference of the spoken language on children's writing: cancellation processes of the dental occlusive /d/ and the final vibrant /r/
Socorro Cláudia Tavares de Sousa
The present study aims to investigate the influence of the spoken language on children's writing in relation to the phenomena of cancellation of the dental /d/ and the final vibrant /r/. We elaborated and applied a research instrument with children from primary school in Fortaleza, and used the software SPSS to analyze the data. The results showed that male sex and words with three or more syllables are factors that partially influence the realization of the dependent variable /no/, and that verbs and level of education are conditioning elements for the cancellation of the final vibrant /r/.
Kim, Su Yeong; Hou, Yang; Shen, Yishan; Zhang, Minyu
Language brokering occurs frequently in immigrant families and can have significant implications for the well-being of family members involved. The present study aimed to develop and validate a measure that can be used to assess multiple dimensions of subjective language brokering experiences among Mexican American adolescents. Participants were 557 adolescent language brokers (54.2% female; mean age at Wave 1 = 12.96, SD = .94) in Mexican American families. Using exploratory and confirmatory factor analyses, we were able to identify 7 reliable subscales of language brokering: linguistic benefits, socioemotional benefits, efficacy, positive parent-child relationships, parental dependence, negative feelings, and centrality. Tests of factorial invariance show that these subscales demonstrate, at minimum, partial strict invariance across time and across experiences of translating for mothers and fathers, and in most cases also across adolescent gender, nativity, and translation frequency. Thus, in general, the means of the subscales and the relations among the subscales with other variables can be compared across these different occasions and groups. Tests of criterion-related validity demonstrated that these subscales correlated, concurrently and longitudinally, with parental warmth and hostility, parent-child alienation, adolescent family obligation, depressive symptoms, resilience, and life meaning. This reliable and valid subjective language brokering experiences scale will be helpful for gaining a better understanding of adolescents' language brokering experiences with their mothers and fathers, and how such experiences may influence their development.
Book review. Neurolinguistics. An Introduction to Spoken Language Processing and its Disorders, John Ingram. Cambridge University Press, Cambridge (Cambridge Textbooks in Linguistics) (2007). xxi + 420 pp., ISBN 978-0-521-79640-8 (pb)
The present textbook is one of the few recent textbooks in the area of neurolinguistics and will be welcomed by teachers of neurolinguistic courses as well as researchers interested in the topic. Neurolinguistics is a huge area, and the boundaries between psycho- and neurolinguistics are not sharp. Often the term neurolinguistics is used to refer to research involving neuropsychological patients suffering from some sort of language disorder or impairment. Also, the term neuro- rather than psy...
Leybaert, Jacqueline; D'Hondt, Murielle
Recent investigations have indicated a relationship between the development of cerebral lateralization for processing language and the level of development of linguistic skills in hearing children. The research on cerebral lateralization for language processing in deaf persons is compatible with this view. We have argued that the absence of appropriate input during a critical time window creates a risk for deaf children that the initial bias for left-hemisphere specialization will be distorted or disappear. Two experiments were conducted to test this hypothesis. The results of these investigations showed that children educated early and intensively with cued speech or with sign language display more evidence of left-hemisphere specialization for the processing of their native language than do those who have been exposed later and less intensively to those languages.
González-Alvarez, Julio; Palomar-García, María-Angeles
Research has shown that syllables play a relevant role in lexical access in Spanish, a shallow language with a transparent syllabic structure. Syllable frequency has been shown to have an inhibitory effect on visual word recognition in Spanish. However, no study has examined the syllable frequency effect on spoken word recognition. The present study tested the effect of the frequency of the first syllable on recognition of spoken Spanish words. A sample of 45 young adults (33 women, 12 men; M = 20.4, SD = 2.8; college students) performed an auditory lexical decision on 128 Spanish disyllabic words and 128 disyllabic nonwords. Words were selected so that lexical and first syllable frequency were manipulated in a within-subject 2 × 2 design, and six additional independent variables were controlled: token positional frequency of the second syllable, number of phonemes, position of lexical stress, number of phonological neighbors, number of phonological neighbors that have higher frequencies than the word, and acoustical durations measured in milliseconds. Decision latencies and error rates were submitted to linear mixed models analysis. Results showed a typical facilitatory effect of the lexical frequency and, importantly, an inhibitory effect of the first syllable frequency on reaction times and error rates.
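The 2 × 2 frequency manipulation described above lends itself to a small worked sketch. The snippet below simulates invented decision latencies exhibiting a facilitatory lexical-frequency effect and an inhibitory first-syllable-frequency effect, then recovers both with a plain fixed-effects regression. This is only an illustration of the design: the study itself used linear mixed models, and all numbers, effect sizes, and variable names here are hypothetical.

```python
import numpy as np

# Simulate reaction times for a within-subject 2x2 design:
# lexical frequency (high/low) x first-syllable frequency (high/low).
# High lexical frequency speeds responses (facilitatory);
# high first-syllable frequency slows them (inhibitory).
rng = np.random.default_rng(0)
n_per_cell = 360  # simulated trials per condition, pooled over participants

rows, rts = [], []
for lex_low in (0, 1):        # 0 = high lexical frequency, 1 = low
    for syl_low in (0, 1):    # 0 = high first-syllable frequency, 1 = low
        mean_rt = 650 + 40 * lex_low - 25 * syl_low  # hypothetical ms effects
        rt = rng.normal(mean_rt, 50, n_per_cell)
        rows.append(np.column_stack([
            np.ones(n_per_cell),                      # intercept
            np.full(n_per_cell, lex_low),             # lexical frequency dummy
            np.full(n_per_cell, syl_low),             # syllable frequency dummy
            np.full(n_per_cell, lex_low * syl_low),   # interaction
        ]))
        rts.append(rt)

X = np.vstack(rows)
y = np.concatenate(rts)
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print("intercept=%.1f lex_low=%.1f syl_low=%.1f interaction=%.1f" % tuple(beta))
```

A positive coefficient for the low-lexical-frequency dummy means low-frequency words are slower (i.e., lexical frequency is facilitatory), while a negative coefficient for the low-syllable-frequency dummy means low-frequency first syllables are faster (i.e., syllable frequency is inhibitory), matching the pattern the abstract reports.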
This research thesis presents a computer system for the acquisition of experimental data, aimed at acquiring, processing, and storing information from particle detectors. The acquisition configuration is described by an experiment description language. The system comprises a lexical analyser, a syntactic analyser, a translator, and a data processing module. It also comprises a control language and a statistics management and plotting module. The translator builds up a series of tables which allow different sequences to be executed during an experiment: running the experiment, performing calculations on the data, and building up statistics. Short execution time and ease of use are always sought.
Schuit, J.; Baker, A.; Pfau, R.
Sign language typology is a fairly new research field and typological classifications have yet to be established. For spoken languages, these classifications are generally based on typological parameters; it would thus be desirable to establish these for sign languages. In this paper, different…
This article explores the implications of Hegel's theories of language on second language (L2) teaching. Three among the various concepts in Hegel's theories of language are selected. They are the crucial role of intersubjectivity; the primacy of the spoken over the written form; and the importance of the training of form or grammar. Applying…
The focus of this report is the link between CLIL (Content and Language Integrated Learning) and CALL (Computer-Assisted Language Learning), and in particular the added value technologies can bring to the learning/teaching of a foreign language and to the delivery of subject content through a foreign language. An example of a free online global training initiative on these topics will be described: "Techno-CLIL for EVO 2016". An overview of the course will be offered, detailing some of the asynchronous and synchronous activities proposed during the five-week training experience, which registered about 5000 participants from all over the world. Special attention will be devoted to the feedback from the teachers on how this experience helped their professional growth as reflective practitioners.
Conti-Ramsden, Gina; Durkin, Kevin
Purpose: This study examined the postschool educational and employment experiences of young people with and without specific language impairment (SLI). Method: Nineteen-year-olds with (n = 50) and without (n = 50) SLI were interviewed on their education and employment experiences since finishing compulsory secondary education. Results: On average,…
Sendurur, Emine; Efendioglu, Esra; Çaliskan, Neslihan Yondemir; Boldbaatar, Nomin; Kandin, Emine; Namazli, Sevinç
This study is designed to understand the informal language learners' experiences of m-learning applications. The aim is two-folded: (i) to extract the reasons why m-learning applications are preferred and (ii) to explore the user experience of Duolingo m-learning application. We interviewed 18 voluntary Duolingo users. The findings suggest that…
The aim of this paper is to report on an in-service English Language Teacher Training Programme devised for the Government project to equip Italian primary school teachers with the skills to teach English. The paper focuses on the first phase of the project, which envisaged research into the best training models and the preparation of appropriate English Language syllabuses. In the first three sections of the paper we report on the experience of designing the language syllabus. In the last section we suggest ways of using the syllabus as a tool for self-reflective professional development.
The Aboriginal English spoken by Indigenous children in remote communities in the Northern Territory of Australia is influenced by the home languages spoken by the children and their families. This affects uses of spatial terms used in mathematics such as 'in front' and 'behind.' Speakers of the endangered Indigenous Australian language Iwaidja use the intrinsic frame of reference in contexts where speakers of Standard Australian English use the relative frame of reference. Children speaking Aboriginal English show patterns of use that parallel the Iwaidja contexts. This paper presents detailed examples of spatial descriptions in Iwaidja and Aboriginal English that demonstrate the parallel patterns of use. The data come from a study that investigated how an understanding of spatial frames of reference in Iwaidja could assist teaching mathematics to Indigenous language-speaking students. Implications for teaching mathematics are explored for teachers without previous experience in a remote Indigenous community.
Hirshorn, Elizabeth A.; Dye, Matthew W. G.; Hauser, Peter; Supalla, Ted R.; Bavelier, Daphne
While reading is challenging for many deaf individuals, some become proficient readers. Little is known about the component processes that support reading comprehension in these individuals. Speech-based phonological knowledge is one of the strongest predictors of reading comprehension in hearing individuals, yet its role in deaf readers is controversial. This could reflect the highly varied language backgrounds among deaf readers as well as the difficulty of disentangling the relative contribution of phonological versus orthographic knowledge of spoken language, in our case ‘English,’ in this population. Here we assessed the impact of language experience on reading comprehension in deaf readers by recruiting oral deaf individuals, who use spoken English as their primary mode of communication, and deaf native signers of American Sign Language. First, to address the contribution of spoken English phonological knowledge in deaf readers, we present novel tasks that evaluate phonological versus orthographic knowledge. Second, the impact of this knowledge, as well as memory measures that rely differentially on phonological (serial recall) and semantic (free recall) processing, on reading comprehension was evaluated. The best predictor of reading comprehension differed as a function of language experience, with free recall being a better predictor in deaf native signers than in oral deaf. In contrast, the measures of English phonological knowledge, independent of orthographic knowledge, best predicted reading comprehension in oral deaf individuals. These results suggest successful reading strategies differ across deaf readers as a function of their language experience, and highlight a possible alternative route to literacy in deaf native signers. Highlights: 1. Deaf individuals vary in their orthographic and phonological knowledge of English as a function of their language experience. 2. Reading comprehension was best predicted by different factors in oral deaf and
Cenoz, Jasone; Gorter, Durk
This paper focuses on the linguistic landscape of two streets in two multilingual cities in Friesland (Netherlands) and the Basque Country (Spain) where a minority language is spoken, Basque or Frisian. The paper analyses the use of the minority language (Basque or Frisian), the state language (Spanish or Dutch) and English as an international…
Fang, Yuxing; Chen, Quanjing; Lingnau, Angelika; Han, Zaizhu; Bi, Yanchao
The observation of other people's actions recruits a network of areas including the inferior frontal gyrus (IFG), the inferior parietal lobule (IPL), and posterior middle temporal gyrus (pMTG). These regions have been shown to be activated through both visual and auditory inputs. Intriguingly, previous studies found no engagement of IFG and IPL for deaf participants during non-linguistic action observation, leading to the proposal that auditory experience or sign language usage might shape the functionality of these areas. To understand which variables induce plastic changes in areas recruited during the processing of other people's actions, we examined the effects of tasks (action understanding and passive viewing) and effectors (arm actions vs. leg actions), as well as sign language experience in a group of 12 congenitally deaf signers and 13 hearing participants. In Experiment 1, we found a stronger activation during an action recognition task in comparison to a low-level visual control task in IFG, IPL and pMTG in both deaf signers and hearing individuals, but no effect of auditory or sign language experience. In Experiment 2, we replicated the results of the first experiment using a passive viewing task. Together, our results provide robust evidence demonstrating that the response obtained in IFG, IPL, and pMTG during action recognition and passive viewing is not affected by auditory or sign language experience, adding further support for the supra-modal nature of these regions.
Jongbloed-Faber, L.; Van de Velde, H.; van der Meer, C.; Klinkenberg, E.L.
This paper explores the use of Frisian, a minority language spoken in the Dutch province of Fryslân, on social media by Frisian teenagers. Frisian is the mother tongue of 54% of the 650,000 inhabitants and is predominantly a spoken language: 64% of the Frisian population can speak it well, while
Callejas, Zoraida; Griol, David; López-Cózar, Ramón
In this paper we propose a method for predicting the user mental state for the development of more efficient and usable spoken dialogue systems. This prediction, carried out for each user turn in the dialogue, makes it possible to adapt the system dynamically to the user needs. The mental state is built on the basis of the emotional state of the user and their intention, and is recognized by means of a module conceived as an intermediate phase between natural language understanding and the dialogue management in the architecture of the systems. We have implemented the method in the UAH system, for which the evaluation results with both simulated and real users show that taking into account the user's mental state improves system performance as well as its perceived quality.
Glenn-Applegate, Katherine; Breit-Smith, Allison; Justice, Laura M.; Piasta, Shayne B.
Research Findings: Artfulness is rarely considered as an indicator of quality in young children's spoken narratives. Although some studies have examined artfulness in the narratives of children 5 and older, no studies to date have focused on the artfulness of preschoolers' oral narratives. This study examined the artfulness of fictional spoken…
Pitkajarvi, Marianne; Eriksson, Elina; Kekki, Pertti
The purpose of this study was to research teachers' experiences of the English-Language-Taught Degree Programs in the health care sector of Finnish polytechnics. More specifically, the focus was on teachers' experiences of teaching methods and clinical practice. The data were collected from eighteen teachers in six polytechnics through focus group interviews. Content analysis was used to analyse the data. The results suggested that despite the positive interaction between students and teachers, choosing appropriate teaching methods was a challenge for teachers, due to the cultural diversity of students as well as the use of a foreign language in tuition. Due to students' language-related difficulties, clinical practice was found to be the biggest challenge in the educational process. Staff attitudes were perceived to be significant for students' clinical experience. Further research using stronger designs is needed.
Early experience with speech and language, starting in the womb, has been shown to shape perceptual and learning abilities, paving the way for language development. Indeed, recent studies suggest that prenatal experience with speech, which consists mainly of prosodic information, already impacts how newborns perceive speech and produce communicative sounds. Similarly, the newborn brain already shows specialization for speech processing, resembling that of the adult brain. Yet, newborns' early preparedness for speech is broad, comprising many universal perceptual abilities. During the first years of life, experience narrows down speech perception, allowing the child to become a native listener and speaker. Concomitantly, the neural correlates of speech and language processing become increasingly specialized.
Sinaiko, H. Wallace; Brislin, Richard W.
This paper documents the results of a series of experiments conducted by the Institute for Defense Analyses on translating technical material from English to Vietnamese. The work was accomplished in support of the Office of the Deputy Director, Research and Engineering, Deputy Director for Southeast Asia Matters. The paper addresses the question…
Iyiola Amos Damilare
Substitution is a phonological process in language. Existing studies have examined deletion in several languages and dialects, with less attention paid to the spoken French of Ijebu undergraduates. This article therefore examined substitution as a dominant phenomenon in the spoken French of thirty-four Ijebu Undergraduate French Learners (IUFLs) in selected universities in South West Nigeria, with a view to establishing the dominance of substitution in the spoken French of IUFLs. Data were collected through tape-recording of participants' production of 30 sentences containing both French vowel and consonant sounds. The results revealed inappropriate replacement of vowels and consonants in the medial and final positions in the spoken French of IUFLs.
Jermaine S. McDougald
This paper is a preliminary report on the "CLIL State-of-the-Art" project in Colombia, drawing on data collected from 140 teachers regarding their attitudes toward, perceptions of, and experiences with CLIL (content and language integrated learning). The term CLIL is used here to refer to teaching contexts in which a foreign language (in these cases, English) is the medium for the teaching and learning of non-language subjects. The data gathered thus far reveal that while teachers presently know very little about CLIL, they are nevertheless actively seeking informal and formal instruction on CLIL. Many of the surveyed teachers are currently teaching content areas through English; approximately half of them reported having had positive experiences teaching content and language together, though the remainder claimed to lack sufficient knowledge in content areas. Almost all of the participants agreed that the CLIL approach can benefit students, helping them develop both language skills and subject knowledge (meaningful communication). However, there is still considerable uncertainty as to the actual state of the art of CLIL in Colombia; greater clarity here will enable educators and decision-makers to make sound decisions for the future of general and language education.
The EPRI Modular Modeling System (MMS) code represents a collection of component models and a steam/water properties package. This code has undergone extensive verification and validation testing. Currently, the code requires a commercially available simulation language to run. The Philadelphia Electric Company (PECO) has been modeling power plant systems for over sixteen years. As a result, an extensive number of models have been developed, and a great deal of experience has been gained using an in-house simulation language. The objective of this study was to explore the possibility of developing an MMS pre-processor which would allow the use of the MMS package with other simulation languages, such as the PECO in-house simulation language.
Itani, Nada; Khalil, Mohammad; Sodemann, Morten
Background: Denmark has become a multicultural society over the past three decades, with 12.8% of the population being immigrants and their descendants. Many of these risk inequality in access to health and in health outcomes because of language barriers. The quality of healthcare interpreting services has recently been discussed by politicians and the media. The present explorative study investigated the sociodemographic characteristics, level of experience and linguistic skills of Arabic-speaking healthcare interpreters in Denmark. Method: Snowball sampling (including social media) was used to recruit interpreters. Data were collected through individual telephone interviews based on an interview guide containing structured and semi-structured questions. Interpreters' language skills were assessed subjectively based on the flow of the interview and preferred interview language. Results: ... for healthcare interpretation. Those eligible should receive additional training, including technical language skills. All interpreters should be required to undergo testing of their linguistic skills to work professionally as healthcare interpreters.
Mst. Moriam, Quadir
This study discusses motivation and strategy use of university students to learn spoken English in Bangladesh. A group of 355 (187 males and 168 females) university students participated in this investigation. To measure learners' degree of motivation a modified version of questionnaire used by Schmidt et al. (1996) was administered. Participants reported their strategy use on a modified version of SILL, the Strategy Inventory for Language Learning, version 7.0 (Oxford, 1990). In order to fin...
Percy-Smith, Lone; Cayé-Thomasen, Per; Breinegaard, Nina
The present study demonstrates a very strong effect of the parental communication mode on the auditory capabilities and speech/language outcome for cochlear implanted children. The children exposed to spoken language had higher odds of scoring high in all tests applied, and the findings suggest a very clear benefit of spoken language communication with a cochlear implanted child.
van Loon, E.; Pfau, R.; Steinbach, M.; Müller, C.; Cienki, A.; Fricke, E.; Ladewig, S.H.; McNeill, D.; Bressem, J.
Recent studies on grammaticalization in sign languages have shown that, for the most part, the grammaticalization paths identified in sign languages parallel those previously described for spoken languages. Hence, the general principles of grammaticalization do not depend on the modality of language
Zhang, Qingfang; Wang, Cheng
The effects of word frequency (WF) and syllable frequency (SF) are well-established phenomena in domain such as spoken production in alphabetic languages. Chinese, as a non-alphabetic language, presents unique lexical and phonological properties in speech production. For example, the proximate unit of phonological encoding is syllable in Chinese but segments in Dutch, French or English. The present study investigated the effects of WF and SF, and their interaction in Chinese written and spoken production. Significant facilitatory WF and SF effects were observed in spoken as well as in written production. The SF effect in writing indicated that phonological properties (i.e., syllabic frequency) constrain orthographic output via a lexical route, at least, in Chinese written production. However, the SF effect over repetitions was divergent in both modalities: it was significant in the former two repetitions in spoken whereas it was significant in the second repetition only in written. Due to the fragility of the SF effect in writing, we suggest that the phonological influence in handwritten production is not mandatory and universal, and it is modulated by experimental manipulations. This provides evidence for the orthographic autonomy hypothesis, rather than the phonological mediation hypothesis. The absence of an interaction between WF and SF showed that the SF effect is independent of the WF effect in spoken and written output modalities. The implications of these results on written production models are discussed.
Wiseheart, Rebecca; Altmann, Lori J P
Individuals with dyslexia demonstrate syntactic difficulties on tasks of language comprehension, yet little is known about spoken language production in this population. This study investigated whether spoken sentence production in college students with dyslexia is less proficient than in typical readers, and whether group differences can be attributed to cognitive differences between groups. Fifty-one college students with and without dyslexia were asked to produce sentences from stimuli comprising a verb and two nouns. Verb types varied in argument structure and morphological form, and nouns varied in animacy. Outcome measures were precision (measured by fluency, grammaticality and completeness) and efficiency (measured by response times). Vocabulary and working memory tests were also administered and used as predictors of sentence production performance. Relative to non-dyslexic peers, students with dyslexia responded significantly more slowly and produced sentences that were significantly less precise in terms of fluency, grammaticality and completeness. The primary predictors of precision and efficiency were working memory, which differed between groups, and vocabulary, which did not. College students with dyslexia were significantly less facile and flexible on this spoken sentence-production task than typical readers, which is consistent with previous studies of school-age children with dyslexia. Group differences in performance were traced primarily to limited working memory and were somewhat mitigated by strong vocabulary. © 2017 Royal College of Speech and Language Therapists.
Jones, A C; Toscano, E; Botting, N; Marshall, C R; Atkinson, J R; Denmark, T; Herman, R; Morgan, G
Previous research has highlighted that deaf children acquiring spoken English have difficulties in narrative development relative to their hearing peers, both in terms of macro-structure and in their use of micro-structural devices. The majority of previous research focused on narrative tasks designed for hearing children that depend on good receptive language skills. The current study compared narratives of 6- to 11-year-old deaf children who use spoken English (N=59) with those of hearing peers matched for age and non-verbal intelligence. To examine the role of general language abilities, single-word vocabulary was also assessed. Narratives were elicited by the retelling of a story presented non-verbally in video format. Results showed that deaf and hearing children had equivalent macro-structure skills, but the deaf group showed poorer performance on micro-structural components. Furthermore, the deaf group gave less detailed responses to inferencing probe questions, indicating poorer understanding of the story's underlying message. For deaf children, micro-level devices correlated most strongly with the vocabulary measure. These findings suggest that deaf children, despite spoken language delays, are able to convey the main elements of content and structure in narrative but have greater difficulty in using grammatical devices that depend more on finer linguistic and pragmatic skills. Crown Copyright © 2016. Published by Elsevier Ltd. All rights reserved.
A number of previous studies found that the consistency of sound-to-spelling mappings (feedback consistency) affects spoken word recognition. In auditory lexical decision experiments, words that can only be spelled one way are recognized faster than words with multiple potential spellings. Previous studies demonstrated this by manipulating … (e.g., lobe) faster than words with consistent rhymes where the vowel has a less typical spelling (e.g., loaf). The present study extends the previous literature by showing that auditory word recognition is affected by orthographic regularities at different grain sizes, just like written word recognition … and spelling. The theoretical and methodological implications for future research in spoken word recognition are discussed.
Mei, Leilei; Xue, Gui; Lu, Zhong-Lin; Chen, Chuansheng; Wei, Miao; He, Qinghua; Dong, Qi
Previous studies have suggested differential engagement of the bilateral fusiform gyrus in the processing of Chinese and English. The present study tested the possibility that long-term experience with Chinese language affects the fusiform laterality of English reading by comparing three samples: Chinese speakers, English speakers with Chinese experience, and English speakers without Chinese experience. We found that, when reading words in their respective native language, Chinese and English speakers without Chinese experience differed in functional laterality of the posterior fusiform region (right laterality for Chinese speakers, but left laterality for English speakers). More importantly, compared with English speakers without Chinese experience, English speakers with Chinese experience showed more recruitment of the right posterior fusiform cortex for English words and pseudowords, which is similar to how Chinese speakers processed Chinese. These results suggest that long-term experience with Chinese shapes the fusiform laterality of English reading and have important implications for our understanding of the cross-language influences in terms of neural organization and of the functions of different fusiform subregions in reading. PMID:25598049
Full Text Available Teens who post on the popular social networking site Facebook in their home environment often continue to do so on second language study abroad sojourns. These sojourners use Facebook to document and make sense of their experiences in the host culture and position themselves with respect to language(s) and culture(s). This study examined one teen's identity positioning through her Facebook posts from two separate study abroad experiences in Germany. Data sources included her Facebook posts from both sojourns and a written reflection completed upon return from the second sojourn. Findings revealed that this teen used Facebook posts to position herself as a German-English bilingual and a member of an imagined community of German-English bilinguals by making choices about which language(s) to use, reporting her linguistic successes and challenges, and indicating growing language awareness. This study addresses the call by study abroad researchers (Coleman, 2013; Kinginger, 2009, 2013; Mitchell, Tracy-Ventura, & McManus, 2015) to investigate the effects of social media, such as Facebook, as part of the contemporary culture of study abroad, and sheds light on the role it plays, especially regarding second language identity positioning. Résumé (translated from the French): Teens who post on the social networking site Facebook in their home environment continue to do so during stays abroad. These teens use Facebook to document and reflect on their experiences in the host country and to position themselves in relation to their language(s) and culture(s). This study examined one adolescent's identity positioning through Facebook posts from two separate stays in Germany. The data from these experiences include Facebook posts from both stays and a written reflection completed on her return from the second stay. The results
Hutka, Stefanie; Carpentier, Sarah M; Bidelman, Gavin M; Moreno, Sylvain; McIntosh, Anthony R
Musicianship has been associated with auditory processing benefits. It is unclear, however, whether pitch processing experience in nonmusical contexts, namely, speaking a tone language, has comparable associations with auditory processing. Studies comparing the auditory processing of musicians and tone language speakers have shown varying degrees of between-group similarity with regard to perceptual processing benefits and, particularly, nonlinguistic pitch processing. To test whether the auditory abilities honed by musicianship or speaking a tone language differentially impact the neural networks supporting nonlinguistic pitch processing (relative to timbral processing), we employed a novel application of brain signal variability (BSV) analysis. BSV is a metric of information processing capacity and holds great potential for understanding the neural underpinnings of experience-dependent plasticity. Here, we measured BSV in electroencephalograms of musicians, tone language-speaking nonmusicians, and English-speaking nonmusicians (controls) during passive listening of music and speech sound contrasts. Although musicians showed greater BSV across the board, each group showed a unique spatiotemporal distribution in neural network engagement: Controls had greater BSV for speech than music; tone language-speaking nonmusicians showed the opposite effect; musicians showed similar BSV for both domains. Collectively, results suggest that musical and tone language pitch experience differentially affect auditory processing capacity within the cerebral cortex. However, information processing capacity is graded: More experience with pitch is associated with greater BSV when processing this cue. Higher BSV in musicians may suggest increased information integration within the brain networks subserving speech and music, which may be related to their well-documented advantages on a wide variety of speech-related tasks.
Jung, Karl Gerhard
Academic language is the language that students must engage in while participating in the teaching and learning that takes place in school (Schleppegrell, 2012) and science as a content area presents specific challenges and opportunities for students to engage with language (Buxton & Lee, 2014; Gee, 2005). In order for students to engage authentically and fully in the science learning that will take place in their classrooms, it is important that they develop their abilities to use science academic language (National Research Council, 2012). For this to occur, teachers must provide support to their students in developing the science academic language they will encounter in their classrooms. Unfortunately, this type of support remains a challenge for many teachers (Baecher, Farnsworth, & Ediger, 2014; Bigelow, 2010; Fisher & Frey, 2010) and teachers must receive professional development that supports their abilities to provide instruction that supports and scaffolds students' science academic language use and development. This study investigates an elementary science teacher's engagement in an instructional coaching partnership to explore how that teacher planned and implemented scaffolds for science academic language. Using a theoretical framework that combines the literature on scaffolding (Bunch, Walqui, & Kibler, 2015; Gibbons, 2015; Sharpe, 2001/2006) and instructional coaching (Knight, 2007/2009), this study sought to understand how an elementary science teacher plans and implements scaffolds for science academic language, and the resources that assisted the teacher in planning those scaffolds. The overarching goal of this work is to understand how elementary science teachers can scaffold language in their classroom, and how they can be supported in that work. Using a classroom teaching experiment methodology (Cobb, 2000) and constructivist grounded theory methods (Charmaz, 2014) for analysis, this study examined coaching conversations and classroom
Nelson, Lori A.
Speech-language pathology literature is limited in describing the clinical practicum process from the student perspective. Much of the supervision literature in this field focuses on quantitative research and/or the point of view of the supervisor. Understanding the student experience serves to enhance the quality of clinical supervision. Of…
Le Pichon Vorstman, E.; de Swart, H.; Ceginskas, V.; van den Bergh, H.
What is the influence of a language learning experience (LLE) in a school context on the metacognitive development of children? To answer that question, we presented 54 multilingual preschoolers with two movie clips and examined their reactions to an exolingual situation of communication. These
Cheng, Rui; Erben, Antony
It is very common for Chinese graduate students to experience language anxiety in the U.S. higher institutions, yet the literature on this topic is limited. This research study focused on the influence of the length of stay in U.S. higher institutions, various programs, gender, and acculturation process on Chinese graduate students' language…
This research explores and seeks to understand participants' experience of using YouTube to learn a foreign language. Learning via YouTube has become more and more popular in recent years. The findings of this research will add to the emerging body of knowledge about the YouTube phenomenon. In this research, there are three…
Clinton, Lisa C.; Higbee, Jeanne L.
This manuscript discusses from the joint perspectives of an undergraduate student and a faculty member the often invisible role that language can play in providing postsecondary learning experiences that can either include or exclude students on the basis of social identity. The authors discuss ignorance, uncertainty, and political correctness as…
Kaushanskaya, Margarita; Yoo, Jeewon; Van Hecke, Stephanie
Purpose The goal of this research was to examine whether phonological familiarity exerts different effects on novel word learning for familiar vs. unfamiliar referents, and whether successful word-learning is associated with increased second-language experience. Method Eighty-one adult native English speakers with various levels of Spanish knowledge learned phonologically-familiar novel words (constructed using English sounds) or phonologically-unfamiliar novel words (constructed using non-English and non-Spanish sounds) in association with either familiar or unfamiliar referents. Retention was tested via a forced-choice recognition-task. A median-split procedure identified high-ability and low-ability word-learners in each condition, and the two groups were compared on measures of second-language experience. Results Findings suggest that the ability to accurately match newly-learned novel names to their appropriate referents is facilitated by phonological familiarity only for familiar referents but not for unfamiliar referents. Moreover, more extensive second-language learning experience characterized superior learners primarily in one word-learning condition: Where phonologically-unfamiliar novel words were paired with familiar referents. Conclusions Together, these findings indicate that phonological familiarity facilitates novel word learning only for familiar referents, and that experience with learning a second language may have a specific impact on novel vocabulary learning in adults. PMID:22992709
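The median-split procedure described above is straightforward to reproduce. A minimal sketch follows, using invented per-participant accuracy scores (not the study's data): participants above the median score form the high-ability group, the rest the low-ability group.

```python
import statistics

# Hypothetical per-participant recognition accuracies (proportions correct);
# these values are illustrative only, not data from the study.
scores = {"p01": 0.55, "p02": 0.80, "p03": 0.62, "p04": 0.91,
          "p05": 0.47, "p06": 0.73, "p07": 0.66, "p08": 0.88}

# Median-split: the cutoff is the median of all scores.
cutoff = statistics.median(scores.values())

high_ability = [p for p, s in scores.items() if s > cutoff]
low_ability = [p for p, s in scores.items() if s <= cutoff]

print(f"cutoff = {cutoff:.3f}")
print("high:", sorted(high_ability))
print("low: ", sorted(low_ability))
```

With an even number of participants and no ties at the median, this yields two equal-sized groups, which can then be compared on external measures such as second-language experience.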
Lau, Newman M. L.; Chu, Veni H. T.
This research aimed at investigating the method of using kinetic typography and interactive approach to conduct a design experiment for children to learn vocabularies. Typography is the unique art and technique of arranging type in order to make language visible. By adding animated movement to characters, kinetic typography expresses language…
Globalisation and increased patterns of immigration have turned workplace interactions to arenas for intercultural communication entailing negotiation of identity, membership and "social capital". For many newcomer immigrants, this happens in an additional language and culture--English. This paper presents interaction experiences of four…
Coene, Martine; Schauwers, Karen; Gillis, Steven; Rooryck, Johan; Govaerts, Paul J.
Recent neurobiological studies have advanced the hypothesis that language development is not continuously plastic but is governed by biological constraints that may be modified by experience within a particular time window. This hypothesis is tested based on spontaneous speech data from deaf cochlear-implanted (CI) children with access to…
Brouwer, Susanne; Bradlow, Ann R.
This study examined the temporal dynamics of spoken word recognition in noise and background speech. In two visual-world experiments, English participants listened to target words while looking at four pictures on the screen: a target (e.g. "candle"), an onset competitor (e.g. "candy"), a rhyme competitor (e.g.…
Yip, Michael C.
Two word-spotting experiments were conducted to examine the question of whether native Cantonese listeners are constrained by phonotactics information in spoken word recognition of Chinese words in speech. Because no legal consonant clusters occurred within an individual Chinese word, this kind of categorical phonotactics information of Chinese…
Singh, Anne-Marie; Marcus, Nadine; Ayres, Paul
Two experiments involving 125 grade-10 students learning about commerce investigated strategies to overcome the transient information effect caused by explanatory spoken text. The transient information effect occurs when learning is reduced as a result of information disappearing before the learner has time to adequately process it, or link it…
Möller, S.; Smeele, P.; Boland, H.; Krebber, J.
In the present paper, we investigate the validity and reliability of de-facto evaluation standards, defined for measuring or predicting the quality of the interaction with spoken dialogue systems. Two experiments have been carried out with a dialogue system for controlling domestic devices. During
Full Text Available The mastery of speaking skills in English has become a major requisite in the engineering industry. Engineers are expected to possess speaking skills for executing their routine activities and advancing their career prospects. This article focuses on an experimental study conducted to improve the spoken English proficiency of Indian engineering students using a task-based approach. Tasks are activities that center on the learners, providing the main context and focus for learning. A task therefore facilitates the learners in using language rather than merely learning about it. The article further explores the pivotal role played by the pedagogical intervention in enabling the learners to improve their speaking skill in the L2. The participants chosen for the control and experimental groups were first-year civil engineering students, comprising 38 in each group. The vital tool used in the study was a set of oral communicative tasks administered to the experimental group. These tasks enabled the students to think and generate sentences on their own orally. A t-test was computed to compare the performance of the students in the control and experimental groups. The results of the statistical analysis revealed a significant improvement in the oral proficiency of the experimental group.
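As a rough sketch of the kind of comparison reported, an independent-samples t statistic can be computed from two groups of scores. The numbers below are invented, not the study's data, and Welch's variant (which does not assume equal variances) is used; in practice `scipy.stats.ttest_ind` would also provide the p-value.

```python
import math
import statistics

# Hypothetical oral-proficiency scores (0-100) for two groups of students;
# illustrative values only, not the study's data.
control = [52, 48, 55, 60, 47, 50, 58, 53]
experimental = [61, 66, 58, 70, 64, 59, 68, 63]

def welch_t(a, b):
    """Welch's t statistic: mean difference over the pooled standard error,
    without assuming equal group variances."""
    se = math.sqrt(statistics.variance(a) / len(a) +
                   statistics.variance(b) / len(b))
    return (statistics.mean(a) - statistics.mean(b)) / se

t = welch_t(experimental, control)
print(f"t = {t:.2f}")
```

A large positive t here would indicate that the experimental group's mean exceeds the control group's by many standard errors, which is the pattern the study reports.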
Rakhlin, Natalia; Hein, Sascha; Doyle, Niamh; Hart, Lesley; Macomber, Donna; Ruchkin, Vladislav; Tan, Mei; Grigorenko, Elena L
We compared English language and cognitive skills between internationally adopted children (IA; mean age at adoption=2.24, SD=1.8) and their non-adopted peers from the US reared in biological families (BF) at two time points. We also examined the relationships between outcome measures and age at initial institutionalization, length of institutionalization, and age at adoption. On measures of general language, early literacy, and non-verbal IQ, the IA group performed significantly below their age-peers reared in biological families at both time points, but the group differences disappeared on receptive vocabulary and kindergarten concept knowledge at the second time point. Furthermore, the majority of children reached normative age expectations between 1 and 2 years post-adoption on all standardized measures. Although the age at adoption, age of institutionalization, length of institutionalization, and time in the adoptive family all demonstrated significant correlations with one or more outcome measures, the negative relationship between length of institutionalization and child outcomes remained most robust after controlling for the other variables. Results point to much flexibility and resilience in children's capacity for language acquisition as well as the potential primacy of length of institutionalization in explaining individual variation in IA children's outcomes. (1) Readers will be able to understand the importance of pre-adoption environment on language and early literacy development in internationally adopted children. (2) Readers will be able to compare the strength of the association between the length of institutionalization and language outcomes with the strength of the association between the latter and the age at adoption. (3) Readers will be able to understand that internationally adopted children are able to reach age expectations on expressive and receptive language measures despite adverse early experiences and a replacement of their first
Rosset, Sophie; Garnier-Rizet, Martine; Devillers, Laurence (Eds.): Natural Interaction with Robots, Knowbots and Smartphones: Putting Spoken Dialog Systems into Practice
These proceedings present the state of the art in spoken dialog systems, with applications in robotics, knowledge access and communication. They address specifically: 1. dialog for interacting with smartphones; 2. dialog for open-domain knowledge access; 3. dialog for robot interaction; 4. mediated dialog (including cross-lingual dialog involving speech translation); and 5. dialog quality evaluation. These articles were presented at the IWSDS 2012 workshop.
Kyle Tran Myhre
Full Text Available In "Dust," spoken word poet Kyle "Guante" Tran Myhre crafts a multi-vocal exploration of the connections between the internment of Japanese Americans during World War II and current struggles against xenophobia in general and Islamophobia specifically. Weaving together personal narrative, quotes from multiple voices, and "verse journalism" (a term coined by Gwendolyn Brooks), the poem seeks to bridge past and present in order to inform a more just future.
Full Text Available The influence of formal literacy on spoken language-mediated visual orienting was investigated by using a simple look-and-listen task (cf. Huettig & Altmann, 2005) which resembles everyday behavior. In Experiment 1, high and low literates listened to spoken sentences containing a target word (e.g., 'magar', crocodile) while at the same time looking at a visual display of four objects (a phonological competitor of the target word, e.g., 'matar', peas; a semantic competitor, e.g., 'kachuwa', turtle; and two unrelated distractors). In Experiment 2 the semantic competitor was replaced with another unrelated distractor. Both groups of participants shifted their eye gaze to the semantic competitors (Experiment 1). In both experiments high literates shifted their eye gaze towards phonological competitors as soon as phonological information became available and moved their eyes away as soon as the acoustic information mismatched. Low literates in contrast only used phonological information when semantic matches between spoken word and visual referent were impossible (Experiment 2), but in contrast to high literates these phonologically-mediated shifts in eye gaze were not closely time-locked to the speech input. We conclude that in high literates language-mediated shifts in overt attention are co-determined by the type of information in the visual environment, the timing of cascaded processing in the word- and object-recognition systems, and the temporal unfolding of the spoken language. Our findings indicate that low literates exhibit a similar cognitive behavior but, instead of participating in a tug-of-war among multiple types of cognitive representations, word-object mapping is achieved primarily at the semantic level. If forced, for instance by a situation in which semantic matches are not present (Experiment 2), low literates may on occasion have to rely on phonological information but do so in a much less proficient manner than their highly literate
Full Text Available This paper explores the auditory lexical access of mono-morphemic compounds in Chinese as a way of understanding the role of orthography in the recognition of spoken words. In traditional Chinese linguistics, a compound is a word written with two or more characters, whether or not they are morphemic. A mono-morphemic compound may either be a binding word, written with characters that only appear in this one word, or a non-binding word, written with characters that are chosen for their pronunciation but that also appear in other words. Our goal was to determine whether this purely orthographic difference affects auditory lexical access by conducting a series of four experiments with materials matched by whole-word frequency, syllable frequency, cross-syllable predictability, cohort size, and acoustic duration, but differing in binding. An auditory lexical decision task (LDT) found an orthographic effect: binding words were recognized more quickly than non-binding words. However, this effect disappeared in an auditory repetition task and in a visual LDT with the same materials, implying that the orthographic effect during auditory lexical access was localized to the decision component and involved the influence of cross-character predictability without the activation of orthographic representations. This claim was further confirmed by overall faster recognition of spoken binding words in a cross-modal LDT with different types of visual interference. The theoretical and practical consequences of these findings are discussed.
Diaz-Campos, Manuel; Killam, Jason
This investigation contributes to the understanding of language attitudes toward consonantal deletion by examining its perception using a matched-guise experiment (Casesnoves and Sankoff 2004; Lambert, Hodgson, Gardner, and Fillenbaum 1960) with fifteen listeners. Two experiments were designed for testing language attitudes, one toward…
Negative attitudes toward stuttering and people who stutter (PWS) are found in various groups of people in many regions. However, the results of previous studies examining the influence of fluency coursework and clinical certification on the attitudes of speech-language pathologists (SLPs) toward PWS are equivocal. Furthermore, there have been few empirical studies on the attitudes of Korean SLPs toward stuttering. This study aimed to determine whether the attitudes of Korean SLPs and speech-language pathology students toward stuttering differ according to clinical certification status, stuttering coursework completion and clinical practicum in stuttering. Survey data from 37 certified Korean SLPs and 70 undergraduate students majoring in speech-language pathology were analysed. All the participants completed the modified Clinician Attitudes Toward Stuttering (CATS) Inventory. Results showed that the diagnosogenic view was still accepted by many participants. Significant differences were found in seven out of 46 CATS Inventory items according to certification status. In addition, significant differences were found in three items according to stuttering coursework completion and in one item according to clinical practicum experience in stuttering. Clinical and educational experience appears to have mixed influences on SLPs' and students' attitudes toward stuttering. While SLPs and students may demonstrate more appropriate understanding and knowledge in certain areas of stuttering, they may feel difficulty in their clinical experience, possibly resulting in low self-efficacy. © 2014 Royal College of Speech and Language Therapists.
Marshall, Chloë; Mason, Kathryn; Rowley, Katherine; Herman, Rosalind; Atkinson, Joanna; Woll, Bencie; Morgan, Gary
Children with specific language impairment (SLI) perform poorly on sentence repetition tasks across different spoken languages, but until now, this methodology has not been investigated in children who have SLI in a signed language. Users of a natural sign language encode different sentence meanings through their choice of signs and by altering…
This article reports on a survey with 170 school-age children growing up with two or more languages in the Canadian province of Ontario where English is the majority language, French is a minority language, and numerous other minority languages may be spoken by immigrant or Indigenous residents. Within this context the study focuses on minority…
Alimi, Modupe M.
Many African countries exhibit complex patterns of language use because of linguistic pluralism. The situation is often compounded by the presence of at least one foreign language that is either the official or second language. The language situation in Botswana depicts this complex pattern. Out of the 26 languages spoken in the country, including…
Williams, Colin H.
The Welsh language, which is indigenous to Wales, is one of six Celtic languages. It is spoken by 562,000 speakers, 19% of the population of Wales, according to the 2011 U.K. Census, and it is estimated that it is spoken by a further 200,000 residents elsewhere in the United Kingdom. No exact figures exist for the undoubted thousands of other…
Spoken dialog systems have the potential to offer highly intuitive user interfaces, as they allow systems to be controlled using natural language. However, the complexity inherent in natural language dialogs means that careful testing of the system must be carried out from the very beginning of the design process. This book examines how user models can be used to support such early evaluations in two ways: by running simulations of dialogs, and by estimating the quality judgments of users. First, a design environment supporting the creation of dialog flows, the simulation of dialogs, and the analysis of the simulated data is proposed. How the quality of user simulations may be quantified with respect to their suitability for both formative and summative evaluation is then discussed. The remainder of the book is dedicated to the problem of predicting quality judgments of users based on interaction data. New modeling approaches are presented, which process the dialogs as sequences, and which allow knowl...
Patterson, Janet L; Rodríguez, Barbara L; Dale, Philip S
The purpose of this study was to determine whether typically developing preschool children with bilingual experience show evidence of learning within brief dynamic assessment language tasks administered in a graduated prompting framework. Dynamic assessment has shown promise for accurate identification of language impairment in bilingual children, and a graduated prompting approach may be well-suited to screening for language impairment. Three dynamic language tasks with graduated prompting were presented to 32 typically developing 4-year-olds in the language to which the child had the most exposure (16 Spanish, 16 English). The tasks were a novel word learning task, a semantic task, and a phonological awareness task. Children's performance was significantly higher on the last 2 items compared with the first 2 items for the semantic and the novel word learning tasks among children who required a prompt on the 1st item. There was no significant difference between the 1st and last items on the phonological awareness task. Within-task improvements in children's performance for some tasks administered within a brief, graduated prompting framework were observed. Thus, children's responses to graduated prompting may be an indicator of modifiability, depending on the task type and level of difficulty.
Language impairments are a well established finding in patients with schizophrenia and in individuals at-risk for psychosis. A growing body of research has revealed shared risk factors between individuals with psychotic-like experiences (PLEs) from the general population and patients with schizophrenia. In particular, adolescents with PLEs have been shown to be at an increased risk for later psychosis. However, to date there has been little information published on electrophysiological correlates of language comprehension in this at-risk group. A 64 channel EEG recorded electrical activity while 37 (16 At-Risk; 21 Controls) participants completed the British Picture Vocabulary Scale (BPVS-II) receptive vocabulary task. The P300 component was examined as a function of language comprehension. The at-risk group were impaired behaviourally on receptive language and were characterised by a reduction in P300 amplitude relative to the control group. The results of this study reveal electrophysiological evidence for receptive language deficits in adolescents with PLEs, suggesting that the earliest neurobiological changes underlying psychosis may be apparent in the adolescent period.
Yang, Charles; Crain, Stephen; Berwick, Robert C; Chomsky, Noam; Bolhuis, Johan J
Human infants develop language remarkably rapidly and without overt instruction. We argue that the distinctive ontogenesis of child language arises from the interplay of three factors: domain-specific principles of language (Universal Grammar), external experience, and properties of non-linguistic domains of cognition including general learning mechanisms and principles of efficient computation. We review developmental evidence that children make use of hierarchically composed structures ('Merge') from the earliest stages and at all levels of linguistic organization. At the same time, longitudinal trajectories of development show sensitivity to the quantity of specific patterns in the input, which suggests the use of probabilistic processes as well as inductive learning mechanisms that are suitable for the psychological constraints on language acquisition. By considering the place of language in human biology and evolution, we propose an approach that integrates principles from Universal Grammar and constraints from other domains of cognition. We outline some initial results of this approach as well as challenges for future research. Copyright © 2017 Elsevier Ltd. All rights reserved.
Cocks, Naomi; Cruice, Madeline
There is a growing body of research which has investigated the experience of the migrant health worker. However, only one of these studies has included speech and language therapists thus far, and then only with extremely small numbers. The aim of this study was to explore the experiences and perspectives of migrant speech and language therapists living in the UK. Twenty-three overseas qualified speech and language therapists living in the UK completed an online survey consisting of 36 questions (31 closed question, 5 open-ended questions). The majority of participants came from Australia or the USA and moved to the UK early in their careers. Participants reported a range of benefits from working in another country and more specifically working in the UK. The findings were consistent with other research on migrant health workers regarding known pull factors of travel, finance, and career. This study suggests additional advantages to working in the UK were realized once participants had started working in the UK, such as the UK job lifestyle. Finally, the migrant speech and language therapists were similar in profile to other migrant health workers in terms of age and country of origin previously reported in the literature.
Full Text Available This article investigates the effects of ethnic acceptance and prejudice on English language learning among immigrant nonnative speakers. During 2004 and 2005, the author conducted participatory dialogues among six Vietnamese and Mexican adult immigrant English language learners. The researcher sought to answer five questions: (1) What are some nonnative English speakers’ experiences regarding the way native speakers treat them? (2) How have nonnative English speakers’ experiences of ethnic acceptance or ethnic prejudice affected their learning of English? (3) What do nonnative English speakers think they need in order to lower their anxiety as they learn a new language? (4) What can native English speakers do to lower nonnative speakers’ anxiety? (5) What can nonnative English speakers do to lower their anxiety with native English speakers? Even though many of the adult immigrant participants experienced ethnic prejudice, they developed strategies to overcome anxiety, frustration, and fear. The dialogues generated themes of acceptance, prejudice, power, motivation, belonging, and perseverance, all factors essential to consider when developing English language learning programs for adult immigrants.
Kolodny, Oren; Lotem, Arnon; Edelman, Shimon
We introduce a set of biologically and computationally motivated design choices for modeling the learning of language, or of other types of sequential, hierarchically structured experience and behavior, and describe an implemented system that conforms to these choices and is capable of unsupervised learning from raw natural-language corpora. Given a stream of linguistic input, our model incrementally learns a grammar that captures its statistical patterns, which can then be used to parse or generate new data. The grammar constructed in this manner takes the form of a directed weighted graph, whose nodes are recursively (hierarchically) defined patterns over the elements of the input stream. We evaluated the model in seventeen experiments, grouped into five studies, which examined, respectively, (a) the generative ability of grammar learned from a corpus of natural language, (b) the characteristics of the learned representation, (c) sequence segmentation and chunking, (d) artificial grammar learning, and (e) certain types of structure dependence. The model's performance largely vindicates our design choices, suggesting that progress in modeling language acquisition can be made on a broad front-ranging from issues of generativity to the replication of human experimental findings-by bringing biological and computational considerations, as well as lessons from prior efforts, to bear on the modeling approach. Copyright © 2014 Cognitive Science Society, Inc.
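The incremental, graph-based learning described above can be sketched in a deliberately reduced form (the class name and the bigram-only scope are mine, not the authors': their model also builds recursive, hierarchically defined patterns, which this sketch omits):

```python
from collections import defaultdict

# Minimal sketch of a directed weighted graph grammar learned
# incrementally from an input stream. Illustrative only: the real model
# defines nodes as recursive (hierarchical) patterns, not single tokens.
class GraphGrammar:
    def __init__(self):
        # node -> successor node -> observed transition count (edge weight)
        self.edges = defaultdict(lambda: defaultdict(int))

    def observe(self, tokens):
        # Incrementally strengthen edges along the observed sequence.
        for a, b in zip(tokens, tokens[1:]):
            self.edges[a][b] += 1

    def successors(self, node):
        # Normalized edge weights give transition probabilities,
        # usable for parsing or for generating new data.
        total = sum(self.edges[node].values())
        return {b: w / total for b, w in self.edges[node].items()}

g = GraphGrammar()
g.observe(["the", "dog", "barks"])
g.observe(["the", "dog", "runs"])
print(g.successors("dog"))  # {'barks': 0.5, 'runs': 0.5}
```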
Angeles, Bianca C.
Filipinos are one of the biggest minority populations in California, yet there are limited opportunities to learn the Filipino language in public schools. Further, schools are not able to nurture students’ heritage languages because of increased emphasis on English-only proficiency. The availability of heritage language classes at the university level – while scarce – therefore becomes an important space for Filipino American students to (re)learn and (re)discover their language and identity....
Full Text Available Productivity—the hallmark of linguistic competence—is typically attributed to algebraic rules that support broad generalizations. Past research on spoken language has documented such generalizations in both adults and infants. But whether algebraic rules form part of the linguistic competence of signers remains unknown. To address this question, here we gauge the generalization afforded by American Sign Language (ASL). As a case study, we examine reduplication (X→XX)—a rule that, inter alia, generates ASL nouns from verbs. If signers encode this rule, then they should freely extend it to novel syllables, including ones with features that are unattested in ASL. And since reduplicated disyllables are preferred in ASL, such a rule should favor novel reduplicated signs. Novel reduplicated signs should thus be preferred to nonreduplicative controls (in rating), and consequently, such stimuli should also be harder to classify as nonsigns (in the lexical decision task). The results of four experiments support this prediction. These findings suggest that the phonological knowledge of signers includes powerful algebraic rules. The convergence between these conclusions and previous evidence for phonological rules in spoken language suggests that the architecture of the phonological mind is partly amodal.
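The rule at issue is simple enough to state as code; this sketch only illustrates why a rule defined over a variable X generalizes to any novel form, attested or not (a point about the rule's algebraic form, not a model of signers):

```python
# Reduplication (X -> XX) operates on a variable, so it applies to any
# syllable, including novel ones with unattested features.
def reduplicate(syllable):
    return syllable + syllable

print(reduplicate("ba"))    # baba
print(reduplicate("blik"))  # blikblik  (novel form: the rule still applies)
```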
Stoop, Ruedi; Nüesch, Patrick; Stoop, Ralph Lukas; Bunimovich, Leonid A
Using a symbolic dynamics and surrogate data approach, we show that the language exhibited by common fruit flies Drosophila ('D.') during courtship is as grammatically complex as the most complex human-spoken modern languages. This finding emerges from the study of fifty high-speed courtship videos (generally of several minutes' duration) that were visually dissected, frame by frame, into 37 fundamental behavioral elements. From the symbolic dynamics of these elements, the courtship-generating language was determined with extreme confidence (significance level > 0.95). The language's categorization within Chomsky's hierarchical language classification allows us to compare Drosophila's body language not only with computer compiler languages but also with human-spoken languages. Drosophila's body language emerges as at least as powerful as the languages spoken by humans.
Lewis, Kandia; Sandilos, Lia E.; Hammer, Carol Scheffner; Sawyer, Brook E.; Méndez, Lucía I.
Research Findings This study explored the relations between Spanish–English dual language learner (DLL) children's home language and literacy experiences and their expressive vocabulary and oral comprehension abilities in Spanish and in English. Data from Spanish–English mothers of 93 preschool-age Head Start children who resided in central Pennsylvania were analyzed. Children completed the Picture Vocabulary and Oral Comprehension subtests of the Batería III Woodcock–Muñoz and the Woodcock–Johnson III Tests of Achievement. Results revealed that the language spoken by mothers and children and the frequency of mother–child reading at home influenced children's Spanish language abilities. In addition, the frequency with which children told a story was positively related to children's performance on English oral language measures. Practice or Policy The findings suggest that language and literacy experiences at home have a differential impact on DLLs' language abilities in their 2 languages. Specific components of the home environment that benefit and support DLL children's language abilities are discussed. PMID:27429533
A brief review of Indian education focuses on special problems caused by overcrowded schools, insufficient funding, and the status of education itself in the Indian social structure. Language instruction in India, a complex issue due largely to the numerous official languages currently spoken, is commented on with special reference to the problem…
Workshop at Arden House, February 23-26, 1992. Francis Kubala, et al., "BBN BYBLOS and HARC February 1992 ATIS Benchmark Results", 5th DARPA Speech... presented at ICASSP, 1992. Richard Schwartz, Steve Austin, Francis Kubala, John Makhoul, Long Nguyen, Paul Placeway, George Zavaliagkos, Northeastern... of the DARPA Common Lexicon Working Group at the 5th DARPA Speech & NL Workshop at Arden House, February 23-26, 1992. Francis Kubala is chairing the
Full Text Available The development of Automatic Speech Recognition (ASR) systems in the developing world is severely inhibited. Given that few task-specific corpora exist and speech technology systems perform poorly when deployed in a new environment, we investigate the use of acoustic model adaptation...
stutters, false starts, repairs, hesitations, filled pauses, and various other non-lexical acoustic events. Under these circumstances, it is not... a sensible choice from a software engineering perspective. The case for separating out various task-independent aspects of the conversation has in fact been... in behavior both within and across systems. It also represents a more sensible solution from a software engineering perspective. The RavenClaw error handling
Gloria Avendaño de Barón
Full Text Available This article presents the results of a research project whose aims were the following: to determine the frequency of the use of pronoun forms in polite treatment sumercé, usted and tú, according to differences in gender, age and level of education, among speakers in Tunja; to describe the sociodiscursive variations and to explain the relationship between usage and courtesy. The methodology of the Project for the Sociolinguistic Study of Spanish in Spain and in Latin America (PRESEEA) was used, and a sample of 54 speakers was taken. The results indicate that the most frequently used pronoun in Tunja to express friendliness and affection is sumercé, followed by usted and tú; women and men of different generations and levels of education alternate the use of these three forms in the context of narrative, descriptive, argumentative and explanatory speech.
André, Elisabeth; Rehm, Matthias; Minker, Wolfgang
While most dialogue systems restrict themselves to the adjustment of the propositional contents, our work concentrates on the generation of stylistic variations in order to improve the user’s perception of the interaction. To accomplish this goal, our approach integrates a social theory of politeness with a cognitive theory of emotions. We propose a hierarchical selection process for politeness behaviors in order to enable the refinement of decisions in case additional context information becomes available.
Antonio Carlos Queiroz Filho
Full Text Available Made of fragments, this paper proposes to think about the relations and possible repercussions between language and experience from the perspective of some post-structuralist authors. In reflections on the body and dance, I sought a way to discuss this issue and, at the same time, to make of geography something that produces affections in us. “What can a geography as a dancing body do?” is, beyond a question, an invitation, a proposition: a ballerina geography.
The only book on the market to specifically address its audience, Recording Voiceover is the comprehensive guide for engineers looking to understand the aspects of capturing the spoken word. Discussing all phases of the recording session, Recording Voiceover addresses everything from microphone recommendations for voice recording to pre-production considerations, including setting up the studio, working with and directing the voice talent, and strategies for reducing or eliminating distracting noise elements found in human speech. Recording Voiceover features in-depth, specific recommendations f
Brøndsted, Tom; Larsen, Henrik Legind; Larsen, Lars Bo
This paper addresses the problem of information and service accessibility in mobile devices with limited resources. A solution is developed and tested through a prototype that applies state-of-the-art Distributed Speech Recognition (DSR) and knowledge-based Information Retrieval (IR) processing for spoken query answering. For the DSR part, a configurable DSR system is implemented on the basis of the ETSI-DSR advanced front-end and the SPHINX IV recognizer. For the knowledge-based IR part, a distributed system solution is developed for fast retrieval of the most relevant documents, with a text...
Bergmann, Frank T; Cooper, Jonathan; König, Matthias; Moraru, Ion; Nickerson, David; Le Novère, Nicolas; Olivier, Brett G; Sahle, Sven; Smith, Lucian; Waltemath, Dagmar
The creation of computational simulation experiments to inform modern biological research poses challenges to reproduce, annotate, archive, and share such experiments. Efforts such as SBML or CellML standardize the formal representation of computational models in various areas of biology. The Simulation Experiment Description Markup Language (SED-ML) describes what procedures the models are subjected to, and the details of those procedures. These standards, together with further COMBINE standards, describe models sufficiently well for the reproduction of simulation studies among users and software tools. The Simulation Experiment Description Markup Language (SED-ML) is an XML-based format that encodes, for a given simulation experiment, (i) which models to use; (ii) which modifications to apply to models before simulation; (iii) which simulation procedures to run on each model; (iv) how to post-process the data; and (v) how these results should be plotted and reported. SED-ML Level 1 Version 1 (L1V1) implemented support for the encoding of basic time course simulations. SED-ML L1V2 added support for more complex types of simulations, specifically repeated tasks and chained simulation procedures. SED-ML L1V3 extends L1V2 by means to describe which datasets and subsets thereof to use within a simulation experiment.
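The five-part structure can be sketched as a skeleton document. This is an illustrative sketch, not a validated SED-ML file: the element names follow the published SED-ML schema as best I recall, so verify them against the specification before relying on this.

```python
import xml.etree.ElementTree as ET

# Skeleton of a SED-ML document mirroring parts (i)-(v) described above.
# Element names are assumed from the SED-ML spec; treat as a sketch.
root = ET.Element("sedML", level="1", version="3")
ET.SubElement(root, "listOfModels")          # (i) which models to use
                                             # (ii) per-model changes go in a
                                             #      listOfChanges inside each model
ET.SubElement(root, "listOfSimulations")     # (iii) simulation procedures
ET.SubElement(root, "listOfTasks")           # binds models to simulations
ET.SubElement(root, "listOfDataGenerators")  # (iv) post-processing of results
ET.SubElement(root, "listOfOutputs")         # (v) plots and reports
print(ET.tostring(root).decode())
```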
Montrul, Silvina; Davidson, Justin; De La Fuente, Israel; Foote, Rebecca
We examined how age of acquisition in Spanish heritage speakers and L2 learners interacts with implicitness vs. explicitness of tasks in gender processing of canonical and non-canonical ending nouns. Twenty-three Spanish native speakers, 29 heritage speakers, and 33 proficiency-matched L2 learners completed three on-line spoken word recognition…
Lederberg, Amy R; Schick, Brenda; Spencer, Patricia E
Childhood hearing loss presents challenges to language development, especially spoken language. In this article, we review existing literature on deaf and hard-of-hearing (DHH) children's patterns and trajectories of language as well as development of theory of mind and literacy. Individual trajectories vary significantly, reflecting access to early identification/intervention, advanced technologies (e.g., cochlear implants), and perceptually accessible language models. DHH children develop sign language in a similar manner as hearing children develop spoken language, provided they are in a language-rich environment. This occurs naturally for DHH children of deaf parents, who constitute 5% of the deaf population. For DHH children of hearing parents, sign language development depends on the age that they are exposed to a perceptually accessible 1st language as well as the richness of input. Most DHH children are born to hearing families who have spoken language as a goal, and such development is now feasible for many children. Some DHH children develop spoken language in bilingual (sign-spoken language) contexts. For the majority of DHH children, spoken language development occurs in either auditory-only contexts or with sign supports. Although developmental trajectories of DHH children with hearing parents have improved with early identification and appropriate interventions, the majority of children are still delayed compared with hearing children. These DHH children show particular weaknesses in the development of grammar. Language deficits and differences have cascading effects in language-related areas of development, such as theory of mind and literacy development.
This article explores how students’ informal language learning experiences with English find their way into the formal context of content-based language teaching (CLIL). The analysis is focused on stretches of classroom talk in which native Finnish-speaking students draw on their expertise of English-language popular culture, and use their knowledge as a semiotic resource for producing various types of actions. Based on the data, it is argued that the organisation of peer group talk in the la...
Ramos, Teresita V.; de Guzman, Videa
This language textbook is designed for beginning students of Tagalog, the principal language spoken on the island of Luzon in the Philippines. The introduction discusses the history of Tagalog and certain features of the language. An explanation of the text is given, along with notes for the teacher. The text itself is divided into nine sections:…
Approaches for Language Identification in Mismatched Environments. Shahan Nercessian, Pedro Torres-Carrasquillo, and Gabriel Martínez-Montes. Keywords: domain adaptation, unsupervised learning, deep neural networks, bottleneck features. Spoken language identification (LID) is... We consider the task of language identification in the context of mismatch conditions. Specifically, we address the issue of using unlabeled data in the
There is a system of English mouthing during interpretation that appears to be the result of language contact between spoken language and signed language. English mouthing is a voiceless visual representation of words on a signer's lips produced concurrently with manual signs. It is a type of borrowing prevalent among English-dominant…
Full Text Available This study investigates international students’ perceptions of the issues they face using English as a second language while attending American higher education institutions. In order to fully understand those challenges involved in learning English as a Second Language, it is necessary to know the extent to which international students have mastered the English language before they start their study in America. Most international students experience an overload of English language input upon arrival in the United States. Cultural differences influence international students’ learning of English in other ways, including international students’ isolation within their communities and America’s lack of teaching listening skills to its own students. Other factors also affect international students’ learning of English, such as the many forms of informal English spoken in the USA, as well as a variety of dialects. Moreover, since most international students have learned English in an environment that precluded much contact with spoken English, they often speak English with an accent that reveals their own language. This study offers informed insight into the complicated process of simultaneously learning the language and culture of another country. Readers will find three main voices in addition to the international students who “speak” (in quotation marks) throughout this article. Hong Li, a Chinese doctoral student in English Education at the University of Missouri-Columbia, authored the “regular” text. Second, Roy F. Fox’s voice appears in italics. Fox is Professor of English Education and Chair of the Department of Learning, Teaching, and Curriculum at the University of Missouri-Columbia. Third, Dario J. Almarza’s voice appears in boldface. Almarza, a native of Venezuela, is an Assistant Professor of Social Studies Education at the same institution.
de la Riva de la Rosa, Monica
The following article is an introspection into my childhood and early youth memories in relation to language acquisition and learning of foreign languages. This analysis will help me determine to what extent these experiences, positive and negative ones, may have influenced my teaching methods and style to the present day. (Contains…
Bishop, Dorothy V M; Nation, Kate; Patterson, Karalyn
Acquired disorders of language represent loss of previously acquired skills, usually with relatively specific impairments. In children with developmental disorders of language, we may also see selective impairment in some skills; but in this case, the acquisition of language or literacy is affected from the outset. Because systems for processing spoken and written language change as they develop, we should beware of drawing too close a parallel between developmental and acquired disorders. Nevertheless, comparisons between the two may yield new insights. A key feature of connectionist models simulating acquired disorders is the interaction of components of language processing with each other and with other cognitive domains. This kind of model might help make sense of patterns of comorbidity in developmental disorders. Meanwhile, the study of developmental disorders emphasizes learning and change in underlying representations, allowing us to study how heterogeneity in cognitive profile may relate not just to neurobiology but also to experience. Children with persistent language difficulties pose challenges both to our efforts at intervention and to theories of learning of written and spoken language. Future attention to learning in individuals with developmental and acquired disorders could be of both theoretical and applied value.
Havas, Viktória; Taylor, Jsh; Vaquero, Lucía; de Diego-Balaguer, Ruth; Rodríguez-Fornells, Antoni; Davis, Matthew H
We studied the initial acquisition and overnight consolidation of new spoken words that resemble words in the native language (L1) or in an unfamiliar, non-native language (L2). Spanish-speaking participants learned the spoken forms of novel words in their native language (Spanish) or in a different language (Hungarian), which were paired with pictures of familiar or unfamiliar objects, or no picture. We thereby assessed, in a factorial way, the impact of existing knowledge (schema) on word learning by manipulating both semantic (familiar vs unfamiliar objects) and phonological (L1- vs L2-like novel words) familiarity. Participants were trained and tested with a 12-hr intervening period that included overnight sleep or daytime awake. Our results showed (1) benefits of sleep to recognition memory that were greater for words with L2-like phonology and (2) that learned associations with familiar but not unfamiliar pictures enhanced recognition memory for novel words. Implications for complementary systems accounts of word learning are discussed.
Belem G. López
Full Text Available Previous work has shown that prior experience in language brokering (informal translation) may facilitate the processing of meaning within and across language boundaries. The present investigation examined the influence of brokering on bilinguals' processing of two word collocations with either a literal or a figurative meaning in each language. Proficient Spanish-English bilinguals classified as brokers or non-brokers were asked to judge if adjective+noun phrases presented in each language made sense or not. Phrases with a literal meaning (e.g., stinging insect) were interspersed with phrases with a figurative meaning (e.g., stinging insult) and non-sensical phrases (e.g., stinging picnic). It was hypothesized that plausibility judgments would be facilitated for literal relative to figurative meanings in each language but that experience in language brokering would be associated with a more equivalent pattern of responding across languages. These predictions were confirmed. The findings add to the body of empirical work on individual differences in language processing in bilinguals associated with prior language brokering experience.
Elizabeth Ann Hirshorn
Full Text Available While reading is challenging for many deaf individuals, some become proficient readers. Yet we do not know the component processes that support reading comprehension in these individuals. Speech-based phonological knowledge is one of the strongest predictors of reading comprehension in hearing individuals, yet its role in deaf readers is controversial. This could reflect the highly varied language backgrounds among deaf readers as well as the difficulty of disentangling the relative contribution of phonological versus orthographic knowledge of spoken language, in our case ‘English’, in this population. Here we assessed the impact of language experience on reading comprehension in deaf readers by recruiting oral deaf individuals, who use spoken English as their primary mode of communication, and deaf native signers of American Sign Language. First, to address the contribution of spoken English phonological knowledge in deaf readers, we present novel tasks that evaluate phonological versus orthographic knowledge. Second, the impact of this knowledge, as well as verbal short-term memory and long-term memory skills, on reading comprehension was evaluated. The best predictor of reading comprehension differed as a function of language experience, with long-term memory, as measured by free recall, being a better predictor in deaf native signers than in oral deaf. In contrast, the measures of English phonological knowledge, independent of orthographic knowledge, best predicted reading comprehension in oral deaf individuals. These results suggest successful reading strategies differ across deaf readers as a function of their language experience, and highlight a possible alternative route to literacy in deaf native signers.
Full Text Available The goal of this project in Estonia was to determine what languages are spoken by students from the 2nd to the 5th year of basic school at their homes in Tallinn, the capital of Estonia. At the same time, this problem was also studied in other segregated regions of Estonia: Kohtla-Järve and Maardu. According to the database of the population census from the year 2000 (Estonian Statistics Executive Office's census 2000), there are representatives of 142 ethnic groups living in Estonia, speaking a total of 109 native languages. At the same time, the database doesn’t state which languages are spoken at homes. The material presented in this article belongs to the research topic “Home Language of Basic School Students in Tallinn” from years 2007–2008, specifically financed and ordered by the Estonian Ministry of Education and Research (grant No. ETF 7065) in the framework of an international study called “Multilingual Project”. It was determined what language is dominating in everyday use, what are the factors for choosing the language for communication, what are the preferred languages and language skills. This study reflects the actual trends of the language situation in these cities.
Bhagwat, Jui; Casasola, Marianella
Two experiments examined when monolingual, English-learning 19-month-old infants learn a second object label. Two experimenters sat together. One labeled a novel object with one novel label, whereas the other labeled the same object with a different label in either the same or a different language. Infants were tested on their comprehension of each label immediately following its presentation. Infants mapped the first label at above chance levels, but they did so with the second label only when requested by the speaker who provided it (Experiment 1) or when the second experimenter labeled the object in a different language (Experiment 2). These results show that 19-month-olds learn second object labels but do not readily generalize them across speakers of the same language. The results highlight how speaker and language spoken guide infants' acceptance of second labels, supporting sociopragmatic views of word learning. Copyright © 2013 Elsevier Inc. All rights reserved.
Damian, Markus F; Dorjee, Dusana; Stadthagen-Gonzalez, Hans
Although it is relatively well established that access to orthographic codes in production tasks is possible via an autonomous link between meaning and spelling (e.g., Rapp, Benzing, & Caramazza, 1997), the relative contribution of phonology to orthographic access remains unclear. Two experiments demonstrated persistent repetition priming in spoken and written single-word responses, respectively. Two further experiments showed priming from spoken to written responses and vice versa, which is interpreted as reflecting a role of phonology in constraining orthographic access. A final experiment showed priming from spoken onto written responses even when participants engaged in articulatory suppression during writing. Overall, the results support the view that access to orthographic codes is accomplished via both the autonomous link between meaning and spelling and an indirect route via phonology.
Lobel, Jason William; Paputungan, Ade Tatak
This paper consists of a short multimedia introduction to Lolak, a near-extinct Greater Central Philippine language traditionally spoken in three small communities on the island of Sulawesi in Indonesia. In addition to being one of the most underdocumented languages in the area, it is also spoken by one of the smallest native speaker populations…
This paper looks at the degree and way in which lesser-used languages are used as expressions of identity, focusing specifically on two of Europe's lesser-used languages. The first is Irish, spoken in the Republic of Ireland and the second is Galician, spoken in the Autonomous Community of Galicia in the North-western part of Spain. The paper…
Self-assessment has been used to assess second language proficiency; however, as sources of measurement error vary, they may threaten the validity and reliability of the tools. The present paper investigated the role of experience in using Japanese as a second language in a naturalistic acquisition context on the accuracy of the…
Dekker, Diane; Young, Catherine
There are more than 6,000 languages spoken by the 6 billion people in the world today; however, those languages are not evenly divided among the world's population. Over 90% of people globally speak only about 300 majority languages; the remaining 5,700 languages are termed "minority languages". These languages represent the…
Roy-Campbell, Zaline M.
English is spoken in five countries as the native language and in numerous other countries as an official language and the language of instruction. In countries where English is the native language, it is taught to speakers of other languages as an additional language to enable them to participate in all domains of life of that country. In many…
Prior studies investigating cortical processing in Deaf signers suggest that life-long experience with sign language and/or auditory deprivation may alter the brain's anatomical structure and the function of brain regions typically recruited for auditory processing (Emmorey et al., 2010; Pénicaud et al., 2013, inter alia). We report the first investigation of the task-negative network in Deaf signers and its functional connectivity, that is, the temporal correlations among spatially remote neurophysiological events. We show that Deaf signers manifest increased functional connectivity between the posterior cingulate/precuneus and the left medial temporal gyrus (MTG), as well as between the inferior parietal lobe and the medial temporal gyrus in the right hemisphere, areas that have been found to show functional recruitment specifically during sign language processing. These findings suggest that the organization of the brain at the level of inter-network connectivity is likely affected by experience with processing visual language, although sensory deprivation could be another source of the difference. We hypothesize that connectivity alterations in the task-negative network reflect predictive/automatized processing of the visual signal.
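Functional connectivity as defined in this abstract is, at its simplest, the Pearson correlation between the time courses of two regions of interest. A minimal sketch of that computation; the two "ROI" time series below are invented for illustration, whereas real analyses would use preprocessed fMRI BOLD signals:

```python
# Pearson correlation between two ROI time courses: the simplest
# form of the functional connectivity measure described above.
def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    dx = [v - mx for v in x]
    dy = [v - my for v in y]
    cov = sum(a * b for a, b in zip(dx, dy))
    var_x = sum(a * a for a in dx)
    var_y = sum(b * b for b in dy)
    return cov / (var_x * var_y) ** 0.5

# Toy time courses for two "regions" (hypothetical values):
pcc = [0.1, 0.5, 0.3, 0.9, 0.2, 0.7]   # posterior cingulate/precuneus
mtg = [0.2, 0.6, 0.2, 1.0, 0.1, 0.8]   # left MTG
print(round(pearson_r(pcc, mtg), 3))
```

A full connectivity analysis would compute this correlation for every pair of regions, yielding a connectivity matrix that can then be compared across groups.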
Van Rinsveld, Amandine; Schiltz, Christine; Landerl, Karin; Brunner, Martin; Ugen, Sonja
Differences between languages in terms of number naming systems may lead to performance differences in number processing. The current study focused on differences concerning the order of decades and units in two-digit number words (i.e., unit-decade order in German but decade-unit order in French) and how they affect number magnitude judgments. Participants performed basic numerical tasks, namely two-digit number magnitude judgments, and we used the compatibility effect (Nuerk et al. in Cognition 82(1):B25-B33, 2001) as a hallmark of language influence on numbers. In the first part we aimed to understand the influence of language on compatibility effects in adults coming from German or French monolingual and German-French bilingual groups (Experiment 1). The second part examined how this language influence develops at different stages of language acquisition in individuals with increasing bilingual proficiency (Experiment 2). Language systematically influenced magnitude judgments such that: (a) The spoken language(s) modulated magnitude judgments presented as Arabic digits, and (b) bilinguals' progressive language mastery impacted magnitude judgments presented as number words. Taken together, the current results suggest that the order of decades and units in verbal numbers may qualitatively influence magnitude judgments in bilinguals and monolinguals, providing new insights into how number processing can be influenced by language(s).
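The compatibility effect used here as a hallmark of language influence concerns pairs of two-digit numbers in which the separate decade and unit comparisons either point to the same answer or conflict; incompatible pairs typically slow magnitude judgments. A minimal sketch of the trial classification; the example pairs are our own, not stimuli from the study:

```python
# Classify a two-digit comparison pair as unit-decade compatible or
# incompatible (the distinction underlying the compatibility effect).
def compatibility(a, b):
    """a and b are two-digit numbers with differing decades and units."""
    decade_larger = a // 10 > b // 10
    unit_larger = a % 10 > b % 10
    return "compatible" if decade_larger == unit_larger else "incompatible"

print(compatibility(42, 57))   # 4 < 5 and 2 < 7: both agree
print(compatibility(47, 62))   # 4 < 6 but 7 > 2: they conflict
```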
This work contains the first comprehensive description of Abui, a language of the Trans New Guinea family spoken by approximately 16,000 speakers in the central part of Alor Island in Eastern Indonesia. The description focuses on the northern dialect of Abui as spoken in the village
This article reviews chronometric and neuroimaging evidence on attention to spoken word planning, using the WEAVER++ model as theoretical framework. First, chronometric studies on the time to initiate vocal responding and gaze shifting suggest that spoken word planning may require some attention,
Carter, Ronald; McCarthy, Michael
This article synthesises progress made in the description of spoken (especially conversational) grammar over the 20 years since the authors published a paper in this journal arguing for a re-thinking of grammatical description and pedagogy based on spoken corpus evidence. We begin with a glance back at the 16th century and the teaching of Latin…
Inegbeboh, Bridget O.
Female students have been discriminated against right from birth in their various cultures and this affects the way they perform in Spoken English class, and how they rate themselves. They have been conditioned to believe that the male gender is superior to the female gender, so they leave the male students to excel in spoken English, while they…
A set of standard software solutions is proposed for the main functions that any experiment-automation system must provide: recording and accumulation of experimental data; visualization and preliminary processing of incoming data; interaction with the operator and system control; and data filing. It is advisable to use standard software, to represent data-processing algorithms as parallel processes, and to use the PASCAL language for programming. Programming of CAMAC equipment is supported by a set of procedures analogous to a FORTRAN subroutine library. Using a single data-file format in the acquisition and processing programs ensures a unified representation of experimental data and uniform access to it by a large number of programs operating in both on-line and off-line regimes. The suggested approach was applied in developing systems based on the SM-3, SM-4 and MERA-60 computers running the RAFOS operating system.
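The idea of a single data-file format shared by acquisition and processing programs can be sketched as a fixed binary record layout that every program packs and unpacks the same way. The field names and layout below are purely illustrative, not those of the original system:

```python
import struct

# Illustrative fixed record layout shared by acquisition and
# processing programs: run id, timestamp, then N 16-bit samples.
HEADER = struct.Struct("<IIH")   # run_id, unix_time, sample_count

def pack_record(run_id, unix_time, samples):
    body = struct.pack("<%dH" % len(samples), *samples)
    return HEADER.pack(run_id, unix_time, len(samples)) + body

def unpack_record(blob):
    run_id, unix_time, n = HEADER.unpack_from(blob)
    samples = struct.unpack_from("<%dH" % n, blob, HEADER.size)
    return run_id, unix_time, list(samples)

rec = pack_record(7, 1700000000, [120, 340, 560])
print(unpack_record(rec))
```

Because both the on-line acquisition code and the off-line analysis code use the same pack/unpack pair, any number of programs can read the same files without per-program format knowledge.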
Kuhl, Patricia K; Tsao, Feng-Ming; Liu, Huei-Mei
Infants acquire language with remarkable speed, although little is known about the mechanisms that underlie the acquisition process. Studies of the phonetic units of language have shown that early in life, infants are capable of discerning differences among the phonetic units of all languages, including native- and foreign-language sounds. Between 6 and 12 mo of age, the ability to discriminate foreign-language phonetic units sharply declines. In two studies, we investigate the necessary and sufficient conditions for reversing this decline in foreign-language phonetic perception. In Experiment 1, 9-mo-old American infants were exposed to native Mandarin Chinese speakers in 12 laboratory sessions. A control group also participated in 12 language sessions but heard only English. Subsequent tests of Mandarin speech perception demonstrated that exposure to Mandarin reversed the decline seen in the English control group. In Experiment 2, infants were exposed to the same foreign-language speakers and materials via audiovisual or audio-only recordings. The results demonstrated that exposure to recorded Mandarin, without interpersonal interaction, had no effect. Between 9 and 10 mo of age, infants show phonetic learning from live, but not prerecorded, exposure to a foreign language, suggesting a learning process that does not require long-term listening and is enhanced by social interaction.
de Groot, A.M.B.; Filipović, L.; Pütz, M.
The linguistic expressions of the majority of bilinguals exhibit deviations from the corresponding expressions of monolinguals in phonology, grammar, and semantics, and in both languages. In addition, bilinguals may process spoken and written language differently from monolinguals. Two possible
Pfau, R.; Steinbach, M.
Studies on sign language grammaticalization have demonstrated that most of the attested diachronic changes from lexical to functional elements parallel those previously described for spoken languages. To date, most of these studies are either descriptive in nature or embedded within
Lutfi, Syaheerah Lebai; Fernández-Martínez, Fernando; Lorenzo-Trueba, Jaime; Barra-Chicote, Roberto; Montero, Juan Manuel
We describe the work on infusion of emotion into a limited-task autonomous spoken conversational agent situated in the domestic environment, using a need-inspired task-independent emotion model (NEMO). In order to demonstrate the generation of affect through the use of the model, we describe the work of integrating it with a natural-language mixed-initiative HiFi-control spoken conversational agent (SCA). NEMO and the host system communicate externally, removing the need for the Dialog Manager to be modified, as is done in most existing dialog systems, in order to be adaptive. The first part of the paper concerns the integration between NEMO and the host agent. The second part summarizes the work on automatic affect prediction, namely frustration and contentment, from dialog features, a non-conventional source, in an attempt to move toward a more user-centric approach. The final part reports the evaluation results obtained from a user study in which both versions of the agent (non-adaptive and emotionally-adaptive) were compared. The results provide substantial evidence of the benefits of adding emotion to a spoken conversational agent, especially in mitigating users' frustration and, ultimately, improving their satisfaction.
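The decoupling described here, where the emotion model communicates with the host agent externally rather than through changes to the Dialog Manager, is essentially an observer arrangement. A schematic sketch under that assumption; the class names, event strings, and counter are invented for illustration and are not NEMO's actual interface:

```python
# Schematic: the dialog manager publishes events; an external emotion
# module subscribes, so the manager itself needs no modification.
class DialogManager:
    def __init__(self):
        self.listeners = []

    def subscribe(self, callback):
        self.listeners.append(callback)

    def handle_turn(self, event):
        # ...normal dialog processing would happen here...
        for cb in self.listeners:
            cb(event)

class EmotionModel:
    def __init__(self):
        self.frustration = 0

    def observe(self, event):
        # A crude stand-in for affect prediction from dialog features.
        if event == "misunderstanding":
            self.frustration += 1

dm = DialogManager()
emo = EmotionModel()
dm.subscribe(emo.observe)
dm.handle_turn("misunderstanding")
print(emo.frustration)
```

The point of the design is that swapping in a different (or no) emotion model leaves the dialog logic untouched.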
Lestari, Dessi Puji; Furui, Sadaoki
Recognition errors on proper nouns and foreign words significantly decrease the performance of ASR-based speech applications such as voice dialing systems, speech summarization, spoken document retrieval, and spoken query-based information retrieval (IR). The reason is that proper nouns and words that come from other languages are usually the most important keywords; the loss of such words through misrecognition in turn leads to a loss of significant information from the speech source. This paper focuses on how to improve the performance of Indonesian ASR by alleviating the problem of pronunciation variation in proper nouns and foreign words (English words in particular). To improve proper noun recognition accuracy, proper-noun-specific acoustic models are created by supervised adaptation using maximum likelihood linear regression (MLLR). To improve English word recognition, the pronunciation of English words contained in the lexicon is corrected using rule-based English-to-Indonesian phoneme mapping. The effectiveness of the proposed method was confirmed through spoken-query-based Indonesian IR. We used Inference Network-based (IN-based) IR and compared its results with those of the classical Vector Space Model (VSM) IR, both using a tf-idf weighting scheme. Experimental results show that IN-based IR outperforms VSM IR.
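Both retrieval models compared in this abstract share the tf-idf weighting scheme. A minimal vector-space (VSM) sketch with cosine ranking; the toy documents and query below are our own stand-ins, not the paper's Indonesian collection:

```python
import math
from collections import Counter

# Minimal tf-idf vector space retrieval: rank documents against a
# query by cosine similarity of their tf-idf vectors.
docs = {
    "d1": "jakarta voice dialing system".split(),
    "d2": "spoken document retrieval in indonesian".split(),
    "d3": "speech summarization system".split(),
}

def idf(term):
    df = sum(term in d for d in docs.values())
    return math.log(len(docs) / df) if df else 0.0

def tfidf(tokens):
    tf = Counter(tokens)
    return {t: c * idf(t) for t, c in tf.items()}

def cosine(u, v):
    dot = sum(u[t] * v.get(t, 0.0) for t in u)
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

query = tfidf("spoken document retrieval".split())
ranked = sorted(docs, key=lambda d: cosine(query, tfidf(docs[d])), reverse=True)
print(ranked[0])
```

An inference-network model replaces this single cosine score with a probabilistic combination of evidence, which is one reason it can outperform plain VSM on spoken queries.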
Kusters, Annelies; Spotti, Massimiliano; Swanwick, Ruth; Tapio, Elina
This paper presents a critical examination of key concepts in the study of (signed and spoken) language and multimodality. It shows how shifts in conceptual understandings of language use, moving from bilingualism to multilingualism and (trans)languaging, have resulted in the revitalisation of the concept of language repertoires. We discuss key…